Proving big-O complexity of an algorithm

I am having some confusion. Suppose I have to prove that 2n + 1 belongs to O(4n).
If I calculate the limit, I get lim(n->infinity) (2n+1)/(4n) = 1/2.
By this, can I say that 2n + 1 belongs to O(4n) with the constant c = 1/2? That would mean 2n + 1 <= (1/2)*(4n) = 2n,
which is not true for any value of n.
Is this the correct way of proving it?

A constant factor doesn't influence the O time complexity.
I mean O(2*n) = 2*O(n) = O(n).
So 2n+1 is in O(4n) if and only if 2n+1 is in O(n).
And because lim(n->infinity) (2n+1)/n = 2 is a finite number, 2n+1 is indeed in O(n).
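Concretely, since the limit is 2, any constant strictly above 2 works: for example, 2n+1 <= 3n for all n >= 1, so c = 3 and n0 = 1 witness that 2n+1 is in O(n).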

According to the definition, if there is a constant c for which f(n) <= c*g(n) holds (for large enough n), then f(n) belongs to O(g(n)).
So indeed, (2n+1) belongs to O(4n), because the constants 1 and 4 make the following chain hold for every n >= 1:
1*(2n+1) <= 4n <= 4*(2n+1)
(This also shows that (4n) belongs to O(2n+1); that is because they are both O(n).)
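As a quick check at n = 3: 1*(2*3+1) = 7 <= 12 = 4*3 <= 28 = 4*(2*3+1), and the same chain holds for every n >= 1.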

There is no best constant. As you've observed, 2 is not going to work. For every epsilon > 0, however, the constant 2 + epsilon will work. This can be proved in general by expanding the definition of limit. The condition for the limit of f(n)/g(n) to approach c as n goes to infinity is that, for every epsilon > 0, there exists n0 such that, for all n > n0, we have |f(n)/g(n) - c| < epsilon, which implies f(n)/g(n) < c + epsilon, which implies that f(n) < (c + epsilon) g(n). It follows that f(n) is O(g(n)), with big-O constant c + epsilon.
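To make the "c + epsilon" point concrete, here is a small C sketch (the cutoff of 100 and the choice epsilon = 0.1 are arbitrary, illustrative values): for f(n) = 2n + 1 and g(n) = n, the constant 2 never works, while 2.1 works for every n >= 10.

#include <stdio.h>

int main(void) {
    /* f(n) = 2n + 1 against c * g(n) with g(n) = n */
    for (long n = 1; n <= 100; n++) {
        double f = 2.0 * n + 1.0;
        int ok_with_2   = (f <= 2.0 * n);  /* never holds: 2n + 1 > 2n          */
        int ok_with_2_1 = (f <= 2.1 * n);  /* holds once 0.1 * n >= 1, i.e. n >= 10 */
        printf("n = %3ld   c = 2: %-3s   c = 2.1: %s\n",
               n, ok_with_2 ? "yes" : "no", ok_with_2_1 ? "yes" : "no");
    }
    return 0;
}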

Related

Asymptotic bounds and Big Θ notation

Suppose that f(n) = 4^n and g(n) = n^n. Will it be right to conclude that f(n) = Θ(g(n))?
In my opinion it's a correct claim, but I'm not 100% sure.
It is incorrect. f(n) = Theta(g(n)) if and only if both f(n) = O(g(n)) and g(n) = O(f(n)). It is true that f(n) = O(g(n)). We will show that it is not the case that g(n) = O(f(n)).
Assume g(n) = O(f(n)). Then there exist a positive real constant c and a positive natural number n0 such that for all n > n0, g(n) <= c * f(n). For our functions, this means n^n <= c * 4^n. Taking the nth root of both sides gives n <= 4 * c^(1/n). We are free to assume c >= 1 and n0 >= 1, since if a smaller value worked, a larger value would work too. For all c >= 1 and n >= 1, c^(1/n) <= c, so 4 * c^(1/n) <= 4c. But then if we choose n > 4c, the inequality n <= 4 * c^(1/n) is false. So there cannot be an n0 such that the condition holds for all n greater than n0. This is a contradiction; our initial assumption is disproven.
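To see the gap numerically, here is a small C sketch (illustrative only; it compares the two functions in log space to avoid overflow). It prints ln(n^n / 4^n) = n * (ln n - ln 4) for a few values of n; the value grows without bound, so no fixed c can make n^n <= c * 4^n hold for all large n.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* log of the ratio n^n / 4^n, i.e. n * (ln n - ln 4) */
    for (int n = 4; n <= 256; n *= 2) {
        double log_ratio = n * (log((double)n) - log(4.0));
        printf("n = %3d   ln(n^n / 4^n) = %10.1f\n", n, log_ratio);
    }
    return 0;
}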

Advice in proving Big O complexities

I am learning about complexity theory and have a question asking to show truth/falsehood of a number of Big-O statements.
I've done the first few, e.g. showing 2^(n+1) is in O(2^n) by finding a constant and an N value. But now they are asking more abstract things, for example:
If f(n) is O(g(n)), is log f(n) in O(log g(n))?
Is 2^(f(n)) in O(2^(g(n)))?
These both seem like they would be true, but I don't know how to express them formally with a constant and an N value. If you can give an example of how I could show these, I can go do the rest of the problems.
Here are some notes along the lines you are probably looking for.
Assume f(n) is O(g(n)). Then there exist n0 and c such that f(n) <= c*g(n) for n >= n0, and we may take c >= 1. Take the logarithm of both sides: log(f(n)) <= log(c*g(n)). We can use the laws of logarithms to rewrite this as log(f(n)) <= log(c) + log(g(n)). If log(g(n)) >= 1 (e.g. g(n) >= 2, with logs base 2), then log(c) + log(g(n)) <= (1 + log(c))*log(g(n)), so we can choose c' = 1 + log(c) and get the desired result. (A caveat: some assumption like g(n) >= 2 for large n really is needed. If g(n) can stay at 1 while f(n) > 1, then log(g(n)) = 0 while log(f(n)) > 0, and no choice of c' works.)
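For instance, take f(n) = 8n^2 and g(n) = n^2 (so c = 8, logs base 2): log f(n) = 3 + 2*log n, and with c' = 1 + log 8 = 4 we get 3 + 2*log n <= 4*(2*log n) for all n >= 2.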
The second one is not true. Choose f(n) = 2n and g(n) = n. We see f(n) is O(g(n)) by choosing c = 3. However, 2^(2n) = 4^n is not O(2^n). To see that, assume we had n0 and c. Then 4^n <= c*2^n. Dividing by 2^n gives 2^n <= c. But this can't be correct since n can increase indefinitely whereas c is fixed.
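For a feel of how fast this fails: at n = 20 the ratio 4^n / 2^n = 2^20 is already over a million, and it doubles with every further increment of n, so no fixed c can keep up.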

Meaning of "Multiplying a function by some constant doesn't change its asymptotic behavior"

I have understood this roughly as follows: given a function f(n) with f(n) = O(g(n)), i.e. f(n) <= c·g(n) for all n >= n1, if I replace f(n) by K·f(n) for some constant K, then there must be some other constant c1 by which we can multiply g(n) so that it still caps (puts an upper limit on) K·f(n).
What I am finding difficult to understand is the proper mathematical explanation given in the book:
Let's assume we have a function f(n) which we know to be in O(g(n)). This means there exist an n0 and c such that:
f(n) <= c * g(n) \forall n >= n0
Now, if we analyze K * f(n), it follows from the above definition that:
K * f(n) <= K * c * g(n) \forall n >= n0
We observe that K * c is again a constant. Hence, we can introduce another constant c' = K * c and write:
K * f(n) <= c' * g(n) \forall n >= n0
And this is exactly the big-O definition from above. Finally:
K * f(n) \in O(g(n))
It's just another constant.
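For a concrete instance: take f(n) = 2n + 1 and g(n) = n with c = 3 and n0 = 1 (so 2n + 1 <= 3n for n >= 1). For K = 5 we get K·f(n) = 10n + 5 <= 15n = (K·c)·n for all n >= 1, so c' = 15 does the job.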
It just means that no matter what the constant time is, the asymptotic behavior stays the same. Example:
int fact;
for (fact=1;n>1;n--) fact*=n;
is a simple O(n) factorial with the constant time c given by the time of a single iteration of the loop and the multiplication fact*=n. Multiplying this by a constant just means that c has changed, for example:
int fact;
for (fact=1;n>1;n--)
 if (bits(fact)+bits(n)<32) // bits(x) = number of significant bits of x (helper, not shown)
  fact*=n;
Now c is bigger because I added a condition into the loop which takes some time, but the O(n) stays. What changes is the runtime. On the other hand, if I change the code like this:
int fact;
for (fact=1;n>1;n--)
if (bits(fact)+bits(n)<32) fact*=n;
else break;
Then I changed not only the constant time c but also the asymptotic upper bound, which becomes O(1): no matter how big n is, this stops once the result reaches the 32-bit limit, which happens after a constant number of steps for big enough n. But that is not multiplying by a constant, of course.

Confused on big O notation

According to this book, big O means:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus there exists some constant c such that f(n) is always ≤ c · g(n), for large enough n (i.e. , n ≥ n0 for some constant n0).
I have trouble understanding the following big-O equation:
3n^2 − 100n + 6 = O(n^2), because I choose c = 3 and 3n^2 > 3n^2 − 100n + 6.
How can 3 be a factor? In 3n^2 − 100n + 6, if we drop the low-order terms −100n and 6, aren't 3n^2 and 3·n^2 the same? How do I solve this?
I'll take the liberty to slightly paraphrase the question to:
Why do 3n^2 − 100n + 6 and n^2 have the same asymptotic complexity?
For that to be true, the definition has to work in both directions.
First:
let c = 3.
Then the inequality 3n^2 − 100n + 6 <= 3n^2 is always satisfied for n >= 1, since it reduces to 6 <= 100n.
The other way around:
let c = 1, so we ask whether n^2 <= 3n^2 − 100n + 6, i.e. whether 0 <= 2n^2 − 100n + 6.
We have a parabola opened upwards, therefore there is again some n0 after which the inequality is always satisfied.
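(For the record: 2n^2 − 100n + 6 >= 0 once n is past the larger root of 2n^2 − 100n + 6 = 0, which is roughly 49.94, so n0 = 50 works for this direction.)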
Let's look at the definition you posted for f(n) in O(g(n)):
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus there
exists some constant c such that f(n) is always ≤ c · g(n), for
large enough n (i.e. , n ≥ n0 for some constant n0).
So, we only need to find one set of constants (c, n0) that fulfils
f(n) < c · g(n), for all n > n0, (+)
but this set is not unique. I.e., the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite amount of different such pairs.
Note that here I've switched to strict inequalities, which is really only a matter of taste, but I prefer this latter convention. Now, we can re-state the Big-O definition in possibly more easy-to-understand terms:
... we can say that f(n) is O(g(n)) if we can find a constant c such
that f(n) is less than c·g(n) for all n larger than n0, i.e., for all
n>n0.
Now, let's look at your function f(n)
f(n) = 3n^2 - 100n + 6 (*)
Let's write your function as the sum of its highest-order term and another function:
f(n) = 3n^2 + h(n) (**)
h(n) = 6 - 100n (***)
We now study the behaviour of h(n) and f(n), respectively:
h(n) = 6 - 100n
what can we say about this expression?
=> if n > 6/100, then h(n) < 0, since 6 - 100*(6/100) = 0
=> h(n) < 0, given n > 6/100 (i)
f(n) = 3n^2 + h(n)
what can we say about this expression, given (i)?
=> if n > 6/100, then f(n) = 3n^2 + h(n) < 3n^2
=> f(n) < c*n^2, with c=3, given n > 6/100 (ii)
Ok!
From (ii) we can choose the constant c = 3, given that we choose the other constant n0 larger than 6/100. Let's choose the first integer that fulfils this: n0 = 1.
Hence, we've shown that (+) holds for the constant set (c, n0) = (3, 1), and consequently, f(n) is in O(n^2).
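As a quick numerical sanity check (a minimal sketch; the particular values of n are arbitrary), the following C snippet prints f(n), 3n^2 and the ratio f(n)/n^2 for a few n. The inequality f(n) <= 3n^2 holds from n = 1 on, and the ratio creeps up towards 3:

#include <stdio.h>

int main(void) {
    /* f(n) = 3n^2 - 100n + 6 against c * g(n) = 3 * n^2, with n0 = 1 */
    for (double n = 1; n <= 100000; n *= 10) {
        double f = 3 * n * n - 100 * n + 6;
        double bound = 3 * n * n;
        printf("n = %8.0f   f(n) = %14.0f   3n^2 = %14.0f   f(n)/n^2 = %6.3f\n",
               n, f, bound, f / (n * n));
    }
    return 0;
}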
For a reference on asymptotic behaviour, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
[Sketch: y = 3n^2 (top graph) vs y = 3n^2 − 100n + 6]
Consider the sketch above. By your definition, 3n^2 only needs to be bigger than 3n^2 - 100n + 6 for large enough n (i.e. , n ≥ n0 for some constant n0). Let that n0 = 5 in this case (it could be something a little smaller, but it's clear which graph is bigger by n=5 so we'll just go with that).
Clearly from the graph, 3n^2 >= 3n^2 - 100n + 6 in the range we've plotted. The only way for 3n^2 - 100n + 6 to get bigger than 3n^2 then is for it to grow more steeply.
But the gradients of 3n^2 and 3n^2 - 100n + 6 are 6n and 6n - 100 respectively, so 3n^2 - 100n + 6 can't grow more steeply, therefore must always be underneath.
So your definition holds - 3n^2 - 100n + 6 <= 3n^2 for all n>=5
I am not an expert, but this looks a lot like what we just had in our real analysis course.
Basically, if you have something like f(n) = 3n^2 − 100n + 6, the "fastest growing" term "wins" over the other terms when you have really, really big n.
So in this case 3n^2 surpasses whatever 100n is, when n is really big.
Another example would be something like f(n) = n/n^2 or f(n) = n! + n^2.
The first one gets smaller, as n simply cannot "keep up" with n^2. In the second example n! clearly grows faster than n^2, so the whole thing behaves like n! for big n, because the n^2 term stops mattering.
And terms like +6, which have no n in them, are constants and matter even less, as they cannot grow even if n grows.
It is all about what happens when n is really big. If your n is 34934854385754385463543856, then n^2 is a hell of a lot bigger than 100n, because n^2 = n * n = 34934854385754385463543856 * 34934854385754385463543856.

What is the difference between O(1) and Θ(1)?

I know the definitions of both of them, but what is the reason sometimes I see O(1) and other times Θ(1) written in textbooks?
Thanks.
O(1) and Θ(1) aren't necessarily the same if you are talking about functions over real numbers. For example, consider the function f(n) = 1/n. This function is O(1) because for any n ≥ 1, f(n) ≤ 1. However, it is not Θ(1) for the following reason: one definition of f(n) = Θ(g(n)) is that the limit of |f(n) / g(n)| as n goes to infinity is some finite value L satisfying 0 < L. Plugging in f(n) = 1/n and g(n) = 1, we take the limit of |1/n| as n goes to infinity and get that it's 0. Therefore, f(n) ≠ Θ(1).
Hope this helps!
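Concretely: Θ(1) would require some c > 0 with 1/n >= c·1 for all large n, but 1/n drops below any fixed c as soon as n > 1/c, so the lower-bound half of Θ fails.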
Big-O notation expresses an asymptotic upper bound, whereas Big-Theta notation additionally expresses an asymptotic lower bound. Often, the upper bound is what people are interested in, so they write O(something), even when Theta(something) would also be true. For example, if you wanted to count the number of things that are equal to x in an unsorted list, you might say that it can be done in linear time and is O(n), because what matters to you is that it won't take any longer than that. However, it would also be true that it's Omega(n) and therefore Theta(n), since you have to examine all of the elements in the list - it can't be done in sub-linear time.
UPDATE:
Formally:
f in O(g) iff there exists a c and an n0 such that for all n > n0, f(n) <= c * g(n).
f in Omega(g) iff there exists a c and an n0 such that for all n > n0, f(n) >= c * g(n).
f in Theta(g) iff f in O(g) and f in Omega(g), i.e. iff there exist a c1, a c2 and an n0 such that for all n > n0, c1 * g(n) <= f(n) <= c2 * g(n).
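As a minimal sketch of the counting example above (the function name count_equal is just illustrative), here it is in C. The loop touches every element exactly once, so this implementation is O(n), and since every element has to be examined it is also Omega(n), hence Theta(n):

#include <stdio.h>
#include <stddef.h>

/* Count how many entries of a[0..len-1] are equal to x.
   One full pass over the array: never more than one comparison per
   element (O(n)), and every element is examined (Omega(n)). */
static size_t count_equal(const int *a, size_t len, int x) {
    size_t count = 0;
    for (size_t i = 0; i < len; i++) {
        if (a[i] == x) {
            count++;
        }
    }
    return count;
}

int main(void) {
    int data[] = {3, 1, 3, 7, 3};
    printf("%zu\n", count_equal(data, sizeof data / sizeof data[0], 3)); /* prints 3 */
    return 0;
}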

Resources