Handling 1/2^n when determining big-O runtime?

I have to find the big-O Notation of the following expression:
2n + n(log n)^10 + (1/2)^n
If I ignore coefficients, I get 2n + n(log n)^10 plus some term involving 1/2. But if I drop the 1/2 as a coefficient, I lose the last term completely, and it doesn't seem right to keep coefficients either.
How should I handle the (1/2)^n term?

For large n, (1/2)^n approaches 0 and becomes negligible. Also, 2n eventually becomes negligible compared to n(log n)^10, since the latter grows faster.
Comparing n(log n)^10 to 2n is equivalent to comparing (log n)^10 to 2 (since both contain a factor of n). Clearly, (log n)^10 will surpass 2 for large enough n -- in fact, all it takes is n = 3. As n grows further, the gap between these two terms widens, and the significance of the 2n term becomes smaller and smaller.
Therefore, the big O expression we're left with is
O(n (log n)^10)

Think about what happens to (1/2)^n as n gets large. This term gets smaller and smaller and smaller and eventually becomes completely negligible. (In fact, if you pick n = 30, it's smaller than 1 / 1,000,000,000.) One useful observation is that (1/2)^n is never greater than 1/2, so you could note that
2n + n(log n)^10 + (1/2)^n ≤ 2n + n(log n)^10 + 1/2
From there, you can see that this is O(n (log n)^10), since the n (log n)^10 term grows faster than the 2n term.
Normally, though, you have to be careful with exponentials. Anything of the form a^n for a > 1 will grow faster than any polynomial, so normally you would drop the polynomials and leave the exponential. Here, you do the opposite.
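If you want to see this numerically, here's a small Python check (just an illustration; it uses the natural log, but the base doesn't change the comparison) that evaluates all three terms as n grows:

    import math

    # The middle term quickly dwarfs 2n, while (1/2)^n heads toward zero.
    for n in (10, 100, 10_000, 1_000_000):
        print(n, 2 * n, n * math.log(n) ** 10, 0.5 ** n)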
Hope this helps!

Related

How to handle Big O when one variable is known to be smaller than another one?

We have 4 algorithms, all of them with complexity depending on m and n, like:
Alg1: O(m+n)
Alg2: O(mlogm + nlogn)
Alg3: O(mlogn + nlogm)
Alg4: O(m+n!) (ouch, this one sucks, but whatever)
Now, how do we handle this if we know that n > m? My first thought is: Big O notation "discards" constants and smaller variables because, sooner or later, the "bigger term" will overwhelm all the others, making them irrelevant to the computation cost.
So, can we rewrite Alg1 as O(n), or Alg2 as O(mlogm)? If so, what about the others?
Yes, you can rewrite it if you know that it is always the case that n > m. Formally, have a look at this:
if we know that n>m (always) then it follows that
O(m+n) < O(n+n) which is O(2n) = O(n) (we don't really care about the 2)
We can say the same thing about the other algorithms as well:
O(mlogm + nlogn) < O(nlogn + nlogn) = O(2nlogn) = O(nlogn)
I think you can see where the rest of them are going. But if you do not know that n > m then you cannot say the above.
EDIT: as @rici nicely pointed out, you also need to be careful, since it always depends on the given function. (Note that O((2n)!) cannot be simplified to O(n!).)
With a bit of playing around you can see how this is not true
(2n)! = (2n) * (2n-1) * (2n-2) * ... > 2(n) * 2(n-1) * 2(n-2) * ... (comparing factor by factor)
=> (2n)! > 2^n * n! (after combining all of the 2 coefficients)
Thus we can see that O((2n)!) behaves more like O(2^n * n!) than O(n!). For a more accurate calculation you can see this thread: Are the two complexities O((2n + 1)!) and O(n!) equal?
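If you want to convince yourself numerically, a tiny Python check (purely illustrative) shows (2n)! pulling away from 2^n * n! almost immediately:

    import math

    # (2n)! versus 2^n * n! -- the left side is already ~105x larger at n = 4.
    for n in range(1, 7):
        print(n, math.factorial(2 * n), 2 ** n * math.factorial(n))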
Consider the problem of finding the k largest elements of an array. There's a nice O(n log k)-time algorithm for solving this using only O(k) space that works by maintaining a binary heap of at most k elements. In this case, since k is guaranteed to be no bigger than n, we could have rewritten these bounds as O(n log n) time with O(n) memory, but that ends up being less precise. The direct dependency of the runtime and memory usage on k makes clear that this algorithm takes more time and uses more memory as k changes.
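As a rough Python sketch of that idea (the function name and list-based interface are mine, just for illustration; the standard library's heapq.nlargest does essentially the same thing):

    import heapq

    def k_largest(values, k):
        """Return the k largest elements in O(n log k) time and O(k) extra space."""
        heap = []  # min-heap holding the k largest values seen so far
        for x in values:
            if len(heap) < k:
                heapq.heappush(heap, x)
            elif x > heap[0]:
                heapq.heapreplace(heap, x)  # evict the smallest of the current k
        return sorted(heap, reverse=True)

    print(k_largest([5, 1, 9, 3, 7, 8, 2], 3))  # [9, 8, 7]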
Similarly, consider many standard graph algorithms, like Dijkstra's algorithm. If you implement Dijkstra's algorithm using a Fibonacci heap, the runtime works out to O(m + n log n), where n is the number of nodes and m is the number of edges. If you assume your input graph is connected, then the runtime also happens to be O(m + m log m), but that's a less precise bound than the one that we had.
On the other hand, if you implement Dijkstra's algorithm with a binary heap, then the runtime works out to O(m log n + n log n). In this case (again, assuming the graph is connected), the m log n term strictly dominates the n log n term, and so rewriting this as O(m log n) doesn't lose any precision.
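For concreteness, here is a minimal binary-heap Dijkstra sketch in Python (the adjacency-list dict format and function name are assumptions for illustration). Each edge can push at most one heap entry, which is where the O(m log n) behaviour comes from:

    import heapq

    def dijkstra(adj, source):
        """adj maps each node to a list of (neighbor, weight) pairs."""
        dist = {source: 0}
        heap = [(0, source)]  # (distance, node) entries
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale entry; u was already settled with a shorter path
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
    print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3}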
Generally speaking, you'll want to give the most precise bounds that you can in the course of documenting the runtime and memory usage of a piece of code. If you have multiple terms where one clearly strictly dominates the other, you can safely discard those lower terms. But otherwise, I wouldn't recommend discarding one of the variables, since that loses precision in the bound you're giving.

Order assertions according to order of growth?

So, I have a problem in which I'm supposed to put the following assertions in order according to order of growth:
2nlogn
0.000001n^3
1983n^2 + n
n + 456
3^n + 5n
and I don't know how to go about doing this. I'm supposed to organize them in a fashion like 2nlogn <= n + 456 <= etc....
I'm not looking for the answer- I just need some advice on how exactly to do this. I know the order of growth table but it doesn't help me here.
If you know the order of growth table, that's mostly it!
The other big thing to know is that coefficients and lower order terms don't matter. For example,
3n^2 ~ n^2 > 1000n ~ n
If that's difficult to understand, imagine what happens as n becomes extremely large. n^2 grows quickly as n increases, while 1000n only increases by 1000 each time n increases by 1. By the time n=1000, then n^2 = 1000n, and as n increases even more, n^2 overtakes 1000n.
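A quick table (illustrative Python, nothing more) makes that crossover at n = 1000 easy to see:

    # 1000n and n^2 trade places around n = 1000.
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n={n:>7}  1000n={1000 * n:>12}  n^2={n * n:>12}")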
So, for any function like n^2 + 1000n, split it into the different terms that are added together (multiplication is different; e.g., n*log(n) is not the same as n or log(n)). You can ignore everything but the largest term, in this case n^2. Once you do this and strip away coefficients, just use the order of growth table to get your solution!
https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions

Omitting lowest growing term from Big O notation

I am currently learning about big O notation but there is a concept that's confusing me. For 8N^2 + 4N + 3 the complexity class would be N^2 because this is the fastest-growing term, and for 5N the complexity class is N.
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
The problem I'm trying to solve is this: configuration A consists of a fast algorithm that takes 5NLogN operations to sort a list on a computer that runs 10^6 operations per second, and configuration B consists of a slow algorithm that takes N^2 operations to sort a list and is run on a computer that runs 10^9 operations per second. For smaller arrays configuration A is faster, but for larger arrays configuration B is better. For what size of array does this transition occur?
What I thought was if I equated expressions for the time it took to solve the problem then I could get an N for the transition point however that yielded the equation N^2/10^9 = 5NLogN/10^6 which simplifies to N/5000 = LogN which is not solvable.
Thank you
In mathematics, the definition of f = O(g) for two real-valued functions defined on the reals, is that f(n)/g(n) is bounded when n approaches infinity. In other words, there exists a constant A, such that for all n, f(n)/g(n) < A.
In your first example, (8n^2 + 4n + 3)/n^2 = 8 + 4/n + 3/n^2, which is bounded as n approaches infinity (by 15, for example), so 8n^2 + 4n + 3 is O(n^2). On the other hand, nlog(n)/n = log(n), which approaches infinity as n approaches infinity, so nlog(n) is not O(n). It is however O(n^2), because nlog(n)/n^2 = log(n)/n, which is bounded (it approaches zero near infinity).
As to your actual problem, remember that if you can't solve an equation symbolically you can always solve it numerically. The existence of solutions is clear.
Let's suppose that the base of your logarithm is b, so we are to compare
5N * log(b, N)
with
N^2
5N * log(b, N) = log(b, N^(5N))
N^2 = N^2 * log(b, b) = log(b, b^(N^2))
So we compare
N ^ (5N) with b^(N^2)
Let's compare them by analyzing the relative value of N^(5N) / b^(N^2) compared to 1. You will observe that after a certain limit it is smaller than 1.
Q: Is it correct to say that the complexity class of NLogN is N?
A: No, here is why we can ignore smaller terms:
Consider N^2 + 1000000 N
For small values of N, the second term is the only one that matters, but as N grows, that stops being true. Consider the ratio 1000000N / N^2, which shows the relative size of the two terms. It reduces to 1000000/N, which approaches zero as N approaches infinity. Therefore the second term has less and less importance as N grows, literally approaching zero.
It is not just "smaller," it is irrelevant for sufficiently large N.
That is not true for multiplicands. n log n is always significantly bigger than n, by a margin that continues to increase.
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
Nope, because N and log(N) are multiplied and log(N) isn't constant.
N/5000 = LogN
Roughly 55,000 (taking the natural logarithm).
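To reproduce that figure, a simple fixed-point iteration is enough (this assumes the natural logarithm, since the question doesn't fix a base):

    import math

    # Solve N = 5000 * ln(N) by repeated substitution; converges to about 54,500.
    n = 10_000.0
    for _ in range(60):
        n = 5000 * math.log(n)
    print(round(n))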
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
No, when you omit, you should omit a TERM. When you have NLgN it is, as a whole, a single term. By your suggestion: N*N*N = (N^2)*N, and since N^2 has the bigger growth rate we would omit N -- which is completely WRONG, because the order is N^3, not N^2. NLgN works in the same manner. You only omit a term when it is added/subtracted.
For example, NLgN + N = O(NLgN) because NLgN has faster growth than N.
The problem I'm trying to solve is this: configuration A consists of a fast algorithm that takes 5NLogN operations to sort a list on a computer that runs 10^6 operations per second, and configuration B consists of a slow algorithm that takes N^2 operations to sort a list and is run on a computer that runs 10^9 operations per second. For smaller arrays configuration A is faster, but for larger arrays configuration B is better. For what size of array does this transition occur?
This CANNOT be true. It is the absolute OPPOSITE. For small N values the faster computer with N^2 is better. For very large N the slower computer with NLgN is better.
Where is the crossover point? Well, the second computer is 1000 times faster than the first one, so they will be equal in speed when N^2 = 1000NLgN, which solves to roughly N ~= 14,500. So for N < 14,500, N^2 will go faster (since that computer is 1000 times faster), but for N > 14,500 the slower computer will be much faster. Now imagine N = 1,000,000: the faster computer will need about 50 times longer than the slower computer, because N^2 ~= 50,000 NLgN while its computer is only 1000 times faster.
Note: the calculations were made using big O, where constant factors are omitted, and the logarithm used is base 2. In algorithm complexity analysis we usually use LgN rather than LogN, where LgN is log N to the base 2 and LogN is log N to the base 10.
However, referring to CLRS (good book, I recommend reading it), Big O is defined as: f(N) = O(g(N)) if there exist positive constants c and N0 such that 0 ≤ f(N) ≤ c*g(N) for all N ≥ N0.
It is all about N > N0. So all the rules of the Big O notation are valid FOR BIG VALUES OF N. For small N it is NOT necessarily correct; I mean, for N = 5 it is not necessarily true that the Big O bound gives a close approximation of the running time.
I hope this gives a good answer for the question.
Reference: Chapter 3, Section 1, [CLRS] Introduction to Algorithms, 3rd Edition.

Big-O Algebra Simplification Issue

I've been working on a problem for several hours now, and I need clarification:
I needed to simplify (as much as possible) the following big-O expressions. For each, I put down what I thought was the correct answer. I would like solutions, but I would appreciate an explanation as well if I am incorrect. I am trying to learn Big O notation as well as possible, and I think doing these problems helped a lot. I just want to make sure I'm on the right path.
a) O(sqrt(n) + log(n)*log(n))
I thought this was O(n)
b) O(3 log_2 n + 2 log_3 n)
I thought this was O(log_3(n))
c) O(n^3 + 2n^2 +3n + 4)
I thought this was O(n^3)
Thanks for all your help!
Let's go through this one at a time.
O(sqrt(n) + log(n)*log(n)). I thought this was O(n)
You are correct that this is O(n), but that's not a particularly tight bound. Let's start with a simplifying question: which grows faster, O(sqrt(n)) or O(log(n) * log(n))? Using that information, can you drop one of the two terms from the summation?
O(3 log_2 n + 2 log_3 n). I thought this was O(log_3(n))
Remember that "big-O ignores the base of logarithms" (that is, log_b n = O(log_c n) for any b and c that are greater than one). You're technically right that it's O(log_3 n), but that's not the cleanest solution. You'd be better off saying O(log n) here.
O(n^3 + 2n^2 +3n + 4). I thought this was O(n^3)
Exactly right! This works because 2n^2 + 3n + 4 is O(n^3), so you can drop those terms from the summation. Now, can you use a similar trick to simplify your answer to part (a)?
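If you want an empirical hint for that comparison, tabulating the ratio is enough (plain Python; it uses the natural log, but the base doesn't change the outcome):

    import math

    # sqrt(n) / (log n)^2 keeps growing, so sqrt(n) is the dominant term.
    for n in (10 ** 2, 10 ** 4, 10 ** 8, 10 ** 16):
        print(n, math.sqrt(n) / math.log(n) ** 2)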
Hope this helps!
OK, the answer is long, but I was pretty thorough.
Intro:
The 1st thing you need to do is to properly define what you mean by big O. Relevant read. Traditionally it's defined only as an upper bound, but that's not very useful in computer science, at least not for a task such as yours: you could technically answer with anything that grows faster than the example, i.e. saying O(n!) for all the questions would technically be OK.
More useful is big Theta, and in CS I have usually seen big O redefined to mean big Theta from the read above. The difference is that your bound must be tighter and must also apply from below.
Definitions/Rules: My favourite method to calculate Big O (and Theta) is using limits. It allows you to combine asymptotic growth relations in a simple and straightforward manner.
Basically if (x->inf is implied here and thereafter):
lim f(x) / g(x) = infinity - f asymptotically grows bigger than g
lim f(x) / g(x) is a constant > 0 - f asymptotically grows the same as g
lim f(x) / g(x) = 0 - f asymptotically grows slower than g
Number 2 is big Theta. Numbers 2 and 3 combined are the traditional Big O, as in "f belongs to O(g)" (or "f is O(g)", which is somewhat confusing wording). It means that f will not outgrow g, so g is its upper bound.
Now with a little math it is pretty easy to prove that Big O (or Theta) cares only about the fastest-growing term. This follows straight from limit properties.
I will use O as big Theta from now on because everything holds for Big O too as it is looser.
Explanation of examples:
Your 3rd example is the easiest. You can safely drop 2n^2 + 3n + 4 because n^3 grows faster. You can prove that n^3 + 2n^2 + 3n + 4 is O(n^3) by calculating lim n^3 / (n^3 + 2n^2 + 3n + 4).
The same goes for your 2nd example, but you need to go through logarithm properties. Basically:
log_b1(x) = c * log_b2(x) - it means you can switch the base of a logarithm at the expense of a constant factor... and from the rules above, a constant factor does not change anything; it's still case 2, just the constant changes.
Your 1st example is the hardest/trickiest, because the limit is the most complicated. However, O(f+g) is either O(f) or O(g): either one grows faster, so the other can be dropped, or they asymptotically grow the same, so either one can be chosen (their fastest-growing term will be the same anyway). This means you need to check which one grows faster, which you do by calculating lim sqrt(n)/(log(n)*log(n)) and choosing according to the rules above. I think this one needs l'Hôpital's rule.
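If you happen to have SymPy available (an assumption on my part, nothing the exercise requires), it will evaluate these limits for you:

    import sympy as sp

    n = sp.symbols("n", positive=True)

    # oo means sqrt(n) outgrows (log n)^2, so the log term can be dropped.
    print(sp.limit(sp.sqrt(n) / sp.log(n) ** 2, n, sp.oo))        # oo
    # A finite nonzero limit means the two grow the same (case 2 above).
    print(sp.limit((n**3 + 2*n**2 + 3*n + 4) / n**3, n, sp.oo))   # 1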
(a) is the toughest one there I think; (b) and (c) use fairly common rules for Big-Oh simplification.
For (a), I suggest making a substitution: let m = [some function of n that makes one of the two terms simpler] and rearrange to get n = [something]. You can then use this to substitute m into the expression, thereby getting rid of all appearances of n, and simplify it according to Big-Oh rules. Then, provided that the function you picked is an increasing function of n, you can substitute n back in and simplify further if need be.

Big O proof with sqrt and log

I am having trouble figuring out how to prove that
t(n) = sqrt(31n + 12n log n + 57)
is
O(sqrt(n) log n)
I haven't had to deal with square root's in big O notation yet so I am having lots of trouble with this! Any help is greatly appreciated :)
Big O notation is about how algorithm characteristics (clock time taken, memory use, processing time) grow with the size of the problem.
Constant factors get discarded because they don't affect how the value scales.
Minor terms also get discarded because they end up having next to no effect.
So your original equation
sqrt(31n + 12nlogn + 57)
immediately simplifies to
sqrt(n log n)
Square roots distribute, like other kinds of multiplication and division, so this can be straightforwardly converted to:
sqrt(n) sqrt(log n)
Since logs convert multiplication into addition (this is why slide rules work), this becomes:
sqrt(n) log (n/2)
Again, we discard constants, because we're interested in the class of behaviour
sqrt(n) log n
And, we have the answer.
Update
As has been correctly pointed out,
sqrt(n) sqrt(log n)
does not become
sqrt(n) log (n/2)
So the end of my derivation is wrong.
Start by finding the largest-growing term inside the sqrt(), which would be 12nlogn. That largest term makes all the other terms irrelevant for big O purposes, so it becomes O(sqrt(12nlogn)). A constant factor is also irrelevant, so it becomes O(sqrt(nlogn)). Then I suppose you can make the argument this is equal to O(sqrt(n) * sqrt(logn)), or O(sqrt(n) * log(n)^(1/2)), and eliminate the power on logn to get O(sqrt(n)logn). But I don't know what the technical justification would be for that last step, because if you can turn sqrt(logn) into logn, why can't you turn sqrt(n) into n?
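One quick sanity check (not a proof, just illustrative Python with the natural log) is to watch the ratio t(n) / (sqrt(n) log n); if t(n) is O(sqrt(n) log n), the ratio must stay below some constant, and in fact it shrinks:

    import math

    for n in (10, 10 ** 3, 10 ** 6, 10 ** 9):
        t = math.sqrt(31 * n + 12 * n * math.log(n) + 57)
        print(n, t / (math.sqrt(n) * math.log(n)))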
Hint: Consider the leading terms of the expansion of sqrt(1 + x) for |x| < 1.
