In Big O notation, why is 8^n O(2^n)?

I can't see why the complexity of 8^n would be O(2^n). More specifically, the question given is to find the complexity of (1/3)*8^n - 10.

I will give a rough outline of how to prove that the statement is incorrect.
Let's assume that you are right and
$8^n = O(2^n)$
Then by definition there exist N (a natural number) and M (a real constant, greater than 0) such that for all n > N:
$8^n < M \cdot 2^n$
From here we can conclude that
$M > 8^n / 2^n = (8/2)^n = 4^n$
in contradiction to the fact that M is a constant. So 8^n is not O(2^n).
You can also show it by using the lim sup definition.
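If you want to see the contradiction numerically, here is a quick Python sketch (my own illustration, not part of the original answer): the ratio 8^n / 2^n is exactly 4^n, so it eventually exceeds any fixed M.

# Sketch: 8**n / 2**n equals 4**n, which outgrows any constant M.
for n in range(1, 11):
    print(n, 8**n // 2**n)  # prints 4, 16, 64, ..., 4**10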

Related

How are the following equivalent to O(N)

I am reading an example where the following are equivalent to O(N):
O(N + P), where P < N/2
O(N + log N)
Can someone explain in layman's terms how it is that the two examples above are the same thing as O(N)?
We always take the greater one in case of addition.
In both the cases N is bigger than the other part.
In first case P < N/2 < N
In second case log N < N
Hence the complexity is O(N) in both the cases.
Let f and g be two functions defined on some subset of the real numbers. One writes
f(x) = O(g(x)) as x -> infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M |g(x)| for all x > x0
So in your case 1:
f(N) = N + P <= N + N/2
We could set M = 2. Then:
|f(N)| <= (3/2)|N| <= 2|N| (N0 could be any number)
So:
N + P = O(N)
In your second case, we could also set M = 2 and N0 = 1 to satisfy:
|N + logN| <= 2 |N| for N > 1
Big O notation usually provides only an upper bound on the growth rate of a function (see the Wikipedia article). In both of your cases P < N and log N < N, so O(N + P) = O(2N) = O(N), and likewise O(N + log N) = O(2N) = O(N). Hope that answers your question.
For the sake of understanding, you can assume that O(n) means the complexity is of the order of n, and also that O notation represents an upper bound (or the complexity in the worst case). So when I say O(n + p), it represents the order of n + p.
Let's assume that in the worst case p = n/2; then what would be the order of n + n/2? It would still be O(n), that is, linear, because constant factors do not form a part of Big-O notation.
Similarly, for O(n + log n): log n can never be greater than n, so the overall complexity turns out to be linear.
In short
If N is a function of the input size and C is a constant:
O(N + N/2):
If C = 2, then for any N > 1:
(C=2)*N > N + N/2,
2*N > 3*N/2,
2 > 3/2 (true)
O(N + log N):
If C = 2, then for any N > 2:
(C=2)*N > N + log N,
2*N > N + log N,
2 > (N + log N)/N,
2 > 1 + (log N)/N (the limit of (log N)/N is 0),
2 > 1 + 0 (true)
Counterexample, O(N^2):
No C exists such that C*N > N^2:
C > N^2/N,
C > N (contradiction).
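As a sanity check, here is a small Python sketch (my own addition; it assumes log means log base 2, as in the examples above) verifying the two inequalities and the counterexample numerically:

import math

# 2*N > N + N/2 for N > 1, and 2*N > N + log2(N) for N > 2.
for N in range(3, 10000):
    assert 2 * N > N + N / 2
    assert 2 * N > N + math.log2(N)

# Counterexample: any fixed C is eventually overtaken by N itself.
C = 1000
N = C + 1
print(C * N > N**2)  # False: C*N no longer bounds N**2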
Boring mathematical part
I think the source of confusion is that the equals sign in O(f(x)) = O(N) does not mean equality! Usually if x = y then y = x. However, consider O(x) = O(x^2), which is true, while the reverse is false: O(x^2) != O(x)!
O(f(x)) is an upper bound of how fast a function is growing.
Upper bound is not an exact value.
If g(x)=x is an upper bound for some function f(x), then function 2*g(x) (and in general anything growing faster than g(x)) is also an upper bound for f(x).
The formal definition is: f(x) is bounded by some other function g(x) if there exists a constant C such that, starting from some x_0, C*g(x) is always greater than f(x).
f(x)=N+N/2 is the same as 3*N/2=1.5*N. If we take g(x)=N and our constant C=2 then 2*g(x)=2*N is growing faster than 1.5*N:
If C=2 and x_0=1 then for any n>(x_0=1) 2*N > 1.5*N.
same applies to N+log(N):
C*N>N+log(N)
C>(N+logN)/N
C>1+log(N)/N
...take n_0=2
C>1+1/2
C>3/2=1.5
use C=2: 2*N>N+log(N) for any N>(n_0=2),
e.g.
2*3 > 3+log(3), 6 > 3+1.58 = 4.58
...
2*100>100+log(100), 200>100+6.64
...
Now the interesting part: no such constant exists for N and N^2, because N^2 grows faster than N:
C*N > N^2
C > N^2/N
C > N
Obviously no single constant exists which is greater than a variable. Imagine such a constant existed, C = C_0. Then starting from N = C_0 + 1 the function N is greater than the constant C_0, therefore such a constant does not exist.
Why is this useful in computer science?
In most cases calculating exact algorithm time or space does not make sense as it would depend on hardware speed, language overhead, algorithm implementation details and many other factors.
Big O notation provides means to estimate which algorithm is better independently from real world complications. It's easy to see that O(N) is better than O(N^2) starting from some n_0 no matter which constants are there in front of two functions.
Another benefit is ability to estimate algorithm complexity by just glancing at program and using Big O properties:
for x in range(N):
    sub-calc with O(C)
has complexity of O(N) and
for x in range(N):
    sub-calc with O(C_0)
    sub-calc with O(C_1)
still has complexity of O(N) because of "multiplication by constant rule".
for x in range(N):
    sub-calc with O(N)
has complexity of O(N*N)=O(N^2) by "product rule".
for x in range(N):
    sub-calc with O(C_0)
for y in range(N):
    sub-calc with O(C_1)
has complexity of O(N+N)=O(2*N)=O(N) by "definition (just take C=2*C_original)".
for x in range(N):
    sub-calc with O(C)
for x in range(N):
    sub-calc with O(N)
has complexity of O(N^2) because "the fastest growing term determines O(f(x)) if f(x) is a sum of other functions" (see explanation in the mathematical section).
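The pseudocode snippets above can be collected into one runnable Python sketch (my own illustration; the "sub-calc" steps are replaced by counter increments) that makes the linear vs quadratic growth visible:

def count_ops(N):
    ops = 0
    for x in range(N):        # loop with O(1) body -> O(N)
        ops += 1
    for x in range(N):        # loop with O(N) body -> O(N*N)
        for y in range(N):
            ops += 1
    return ops                # total is N + N**2, i.e. O(N^2)

for N in (10, 100, 1000):
    print(N, count_ops(N))    # 110, 10100, 1001000 -- dominated by N**2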
Final words
There is much more to Big-O than I can write here! For example, in some real-world applications and algorithms the relevant n_0 might be so big that an algorithm with worse complexity works faster on real data.
CPU caches might introduce an unexpected hidden factor into an otherwise asymptotically good algorithm.
Etc...

Efficient approximation of nth term without losing accuracy

Problem: Given a recurrence relation for g_n as
g_0 = c, where c is a constant double,
g_n = f(g_{n-1}), where f is a linear function,
then find the value of another recurrence given by
h_n = g_n / exp(n)
Constraints: 1 <= n <= 10^9
Mathematically, the value of g(n) can be calculated in log(n) time, and then h(n) can be calculated very easily, but the problem is overflow of the double data type. So the above strategy works only for n around 1000, not for bigger n. Note that the value of h(n) itself can be well within the range of doubles.
The actual problem is that we are trying to calculate h(n) from g(n). My question is: is there any good method to calculate h(n) directly without overflowing doubles?
Writing the linear function as f(x) = ax + b (with a != 1):
G_0 = c
G_1 = ac + b
G_2 = a^2 c + ab + b
G_3 = a^3 c + a^2 b + ab + b
...
G_n = a^n c + b(a^n - 1)/(a - 1)
Then
H_n = (a/e)^n c + b((a/e)^n - e^{-n})/(a - 1) ~ (a/e)^n (c + b/(a - 1))
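Here is a sketch of how h(n) could be computed directly from that closed form (my own illustration, assuming g_0 = c and g_n = a*g_{n-1} + b with a > 0 and a != 1). Working with (a/e)^n = exp(n*(ln a - 1)) never materializes the huge a^n, so the computation stays in double range whenever h(n) itself does:

import math

def h(n, a, b, c):
    # H_n = (a/e)**n * c + (b/(a-1)) * ((a/e)**n - e**(-n)),
    # for g_0 = c and g_n = a*g_{n-1} + b, assuming a > 0 and a != 1.
    r = math.exp(n * (math.log(a) - 1.0))  # (a/e)**n, computed in log space
    return c * r + (b / (a - 1.0)) * (r - math.exp(-n))

print(h(10, 2.0, 1.0, 3.0))     # small n: matches the direct g(n)/e**n
print(h(10**9, 2.0, 1.0, 3.0))  # huge n: underflows gracefully to 0.0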

Big O notation for the complexity function of the fourth root of n

I am expected to find the Big O notation for the following complexity function: f(n) = n^(1/4).
I have come up with a few possible answers.
The more accurate answer would seem to be O(n^(1/4)). However, since it contains a root it isn't a polynomial, and I've never seen such a root of n in any textbook or online resource.
Using the mathematical definition, I can try to define an upper-bound function with a specified n limit. I tried plotting n^(1/4) in red with log2 n in blue and n in green.
The log2 n curve intersects with n^(1/4) at n=2.361 while n intersects with n^(1/4) at n=1.
Given the formal mathematical definition, we can come up with two additional Big O notations with different limits.
The following shows that O(n) works for n > 1.
f(n) is O(g(n))
Find c and n0 so that
n^(1/4) ≤ cn
where c > 0 and n ≥ n0
C = 1 and n0 = 1
f(n) is O(n) for n > 1
This one shows that O(log2 n) works for n > 3.
f(n) is O(g(n))
Find c and n0 so that
n^(1/4) ≤ c log2 n
where c > 0 and n ≥ n0
C = 1 and n0 = 3
f(n) is O(log2 n) for n > 3
Which Big O description of the complexity function would be typically used? Are all 3 "correct"? Is it up to interpretation?
Using O(n^(1/4)) is perfectly fine for big O notation; fractional exponents do show up in real-life examples.
O(n) is also correct (because big O gives only an upper bound), but it is not tight: n^(1/4) is in O(n), but not in Theta(n).
n^(1/4) is NOT in O(log(n)) (proof guidelines follow).
For any value r>0, and for large enough value of n, log(n) < n^r.
Proof:
Have a look at log(log(n)) and r*log(n). The first is clearly smaller than the second for large enough values. In big O terminology, we can definitely say that r*log(n) is NOT in O(log(log(n))), while log(log(n)) is in O(r*log(n)) (1), so we can say that:
log(log(n)) < r*log(n) = log(n^r) for large enough values of n
Now exponentiate each side with base e. Note that both the left-hand and right-hand values are positive for large enough n:
e^log(log(n)) < e^log(n^r)
log(n) < n^r
Moreover, in a similar way, we can show that for any constant c, and for large enough values of n:
c*log(n) < n^r
So, by definition, n^r is NOT in O(log(n)); in your specific case, n^0.25 is NOT in O(log(n)).
Footnotes:
(1) If you are still unsure, substitute m = log(n): is it clear that r*m is not in O(log(m))? Proving it is easy, if you want an exercise.
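A quick numeric check (my own addition) makes this concrete: log2(n) only looks bigger than n^(1/4) for small n. The crossover is at n = 2^16 = 65536, after which n^(1/4) stays ahead, which is why a small plot is misleading:

import math

for n in (2**4, 2**8, 2**16, 2**20, 2**32):
    print(n, math.log2(n), n**0.25)
# log2(n):  4, 8, 16, 20, 32
# n**0.25:  2, 4, 16, 32, 256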

Big O notation algebra

I have a couple of questions regarding some algebra using big O notation:
if f(n)=O(g(n))
is log(f(n)) = O(log(g(n)))?
is N^{f(n)}=O(N^{g(n)})? (where N is any real number)
Is log(f(n)) = O(log(g(n)))? No, this does not necessarily follow. For example, take f(n) = 2 and g(n) = 1. Here f(n) = O(g(n)), but log(g(n)) = 0 while log(f(n)) = log 2 > 0, so log(f(n)) is not O(log(g(n))).
Is N^{f(n)} = O(N^{g(n)})? No, this is not true either:
even if the ratio f(n)/g(n) stays bounded, the ratio N^{f(n)}/N^{g(n)} = N^{f(n) - g(n)} need not stay bounded.
Take f(n) = 2n and g(n) = n.
It is true that 2n is O(n). But consider the ratio 2^{f(n)} / 2^{g(n)} = 2^{2n} / 2^n = 2^n.
This limit is not bounded - it goes to infinity as n goes to infinity. So
2^{2n} is not O(2^n), i.e. 2^{f(n)} is not O(2^{g(n)}) in this case.
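A short numeric sketch (my own addition) of this counterexample: f(n) = 2n is O(n), yet the ratio 2^{2n} / 2^n = 2^n is unbounded.

for n in range(1, 11):
    print(n, 2**(2*n) // 2**n)  # prints 2**n: 2, 4, 8, ..., 1024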

Exponentials: Little Oh [duplicate]

This question already has answers here: Difference between Big-O and Little-O Notation.
What does n^b = o(a^n) (o is little-oh) mean, intuitively? I am just beginning to teach myself algorithms, and I have a hard time interpreting such expressions every time I see one. Here, the way I understood it is that for the function n^b, the rate of growth is a^n. But this is not making sense to me, regardless of being right or wrong.
f(n) = o(g(n)) means that f(n)/g(n) -> 0 as n -> infinity.
For your problem, it must hold that a > 1. Then (n^b)/(a^n) -> 0 as n -> infinity, since
(n^b)/(a^n) = (n^b)/(sqrt(a)^n * sqrt(a)^n) = ((n^b)/sqrt(a)^n) * (1/sqrt(a)^n).
Let f(n) = (n^b)/sqrt(a)^n. This function increases at first and then decreases, so it attains a maximum value M = max(f(n)). Then
(n^b)/(a^n) <= M/(sqrt(a)^n).
Since a > 1, sqrt(a) > 1, so sqrt(a)^n -> infinity as n -> infinity, and therefore M/(sqrt(a)^n) -> 0.
At last, we get (n^b)/(a^n) -> 0 as n -> infinity. That is, n^b = o(a^n) by definition.
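Numerically (a sketch of my own, taking a = 2 and b = 3), the ratio n^b / a^n indeed rises at first and then decays toward 0:

a, b = 2.0, 3
for n in (1, 4, 10, 20, 40, 80):
    print(n, n**b / a**n)
# rises to 4.0 at n = 4, then decays: ~0.98, ~0.0076, ..., ~4e-19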
(For simplicity I'll assume that all functions always return positive values. This is the case for example for functions measuring run-time of an algorithm, as no algorithm runs in "negative" time.)
First, a recap of big-O notation, to clear up a common misunderstanding:
To say that f is O(g) means that f grows asymptotically at most as fast as g. More formally, treating both f and g as functions of a variable n, to say that f(n) is O(g(n)) means that there is a constant K, so that eventually, f(n) < K * g(n). The word "eventually" here means that there is some fixed value N (which is a function of K, f, and g), so that if n > N then f(n) < K * g(n).
For example, the function f(n) = n + 2 is O(n^2). To see why, let K = 1. Then, if n > 10, we have n + 2 < n^2, so our conditions are satisfied. A few things to note:
For n = 1, we have f(n) = 3 and g(n) = 1, so f(n) < K * g(n) actually fails. That's ok! Remember, the inequality only needs to hold eventually, and it does not matter if the inequality fails for some small finite list of n.
We used K = 1, but we didn't need to. For example, K = 2 would also have worked. The important thing is that there is some value of K which gives us the inequality we want eventually.
We saw that n + 2 is O(n^2). This might look confusing, and you might say, "Wait, isn't n + 2 actually O(n)?" The answer is yes. n + 2 is O(n), O(n^2), O(n^3), O(n/3), etc.
Little-o notation is slightly different. Big-O notation, intuitively, says that if f is O(g), then f grows asymptotically at most as fast as g. Little-o notation says that if f is o(g), then f grows asymptotically strictly slower than g.
Formally, f is o(g) if for any (let's say positive) choice of K, eventually the inequality f(n) < K * g(n) holds. So, for instance:
The function f(n) = n is not o(n). This is because, for K = 1, there is no value of n so that f(n) < K * g(n). Intuitively, f and g grow asymptotically at the same rate, so f does not grow strictly slower than g does.
The function f(n) = n is o(n^2). Why is this? Pick your favorite positive value of K. (To see the actual point, try to make K small, for example 0.001.) Imagine graphing the functions f(n) and K * g(n). One is a straight line through the origin of positive slope, and the other is a concave-up parabola through the origin. Eventually the parabola will be higher than the line, and will stay that way. (If you remember your pre-calc/calculus...)
Now we get to your actual question: let f(n) = n^b and g(n) = a^n. You asked why f is o(g).
Presumably, the author of the original statement treats a and b as constant, positive real numbers, and moreover a > 1 (if a <= 1 then the statement is false).
The statement, in English, is:
For any positive real number b, and any real number a > 1, the function n^b grows asymptotically strictly slower than a^n.
This is an important thing to know if you are ever going to deal with algorithmic complexity. Put more simply, one can say "polynomials grow much slower than exponential functions." It isn't immediately obvious that this is true, and it's too much to write out here, so here is a reference:
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Probably you will have to have some comfort with math to be able to read any proof of this fact.
Good luck!
The super-high-level meaning of the statement n^b is o(a^n) is just that exponential functions like a^n grow much faster than polynomial functions like n^b.
The important thing to understand when looking at big O and little o notation is that they are both upper bounds. I'm guessing that's why you're confused. n^b is o(a^n) because the growth rate of a^n is much bigger. You could probably find a tighter little-o upper bound on n^b (one where the gap between the bound and the function is smaller), but a^n is still valid. It's also probably worth looking at the difference between Big O and little o.
Remember that a function f is Big O of a function g if for some constant k > 0, there is a minimum value of n beyond which f(n) ≤ k * g(n).
A function f is little o of a function g if for any constant k > 0 there is a minimum value of n beyond which f(n) ≤ k * g(n).
Note that the little-o requirement is harder to fulfil: if a function f is little o of a function g, it is also Big O of g, and it means that g grows strictly faster than f, not just at least as fast.
In your example, if b is 3 and a is 2 and we set k to 1, we can work out the minimum value of n so that n^b ≤ k * a^n. In this case, it's between 9 and 10 since
9³ = 729 and 1 * 2⁹ = 512, which means at n = 9, a^n is not yet greater than n^b,
but
10³ = 1000 and 1 * 2¹⁰ = 1024, which means a^n is now greater than n^b.
You can see by graphing these functions that a^n will be greater than n^b for any value of n > 10. At this point we've only shown that n^b is Big O of a^n, since Big O only requires that for some value of k > 0 (we picked 1), a^n ≥ n^b beyond some minimum n (in this case between 9 and 10).
To show that n^b is little o of a^n, we would have to show that for any k greater than 0 you can still find a minimum value of n beyond which k * a^n > n^b. For example, if you picked k = 0.5, the minimum of 10 we found earlier doesn't work, since 10³ = 1000 and 0.5 * 2¹⁰ = 512. But we can just keep sliding the minimum for n out further and further; the smaller you make k, the bigger the minimum for n will be. Saying n^b is little o of a^n means that no matter how small you make k, we will always be able to find a big enough value of n so that n^b ≤ k * a^n.
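To see the "sliding minimum" concretely, here is a Python sketch (my own illustration) that, for each k, finds the threshold n after which n^3 ≤ k * 2^n holds from then on. The smaller k gets, the further out the threshold moves, which is exactly the little-o condition:

def threshold(k, limit=200):
    # Walk down from `limit` while the inequality holds; the first failure
    # from above marks where n**3 <= k * 2**n starts holding onward.
    n = limit
    while n > 1 and n**3 <= k * 2**n:
        n -= 1
    return n + 1

for k in (1.0, 0.5, 0.1, 0.01, 0.001):
    print(k, threshold(k))  # thresholds: 10, 12, 16, 20, 24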
