Is this function in this complexity class? - complexity-theory

I am not sure about the following question:
Is log_a(n^b) in O(log_b(n^a)) for constants a, b?

Asking whether a function f(x) is in O(g(x)) is really a comparison of the growth rates of the two functions (see Wikipedia: http://en.wikipedia.org/wiki/Big_O_notation).
Constant factors are ignored, so 2x is in O(x). Likewise, terms of the function with lower growth rates are ignored, so 2x^2 + x + 1 is in O(x^2).
So the question is: does log_a n^b have the same growth rate as log_b n^a?
To solve this we will apply a couple of awesome properties of logarithms:
log x^b = b log x
log_a x = (log_b x) / (log_b a)
The first thing to do is to simplify the big O expression we are comparing against, since it is not in minimal form. Applying the first property above gives:
O(log_b n^a) = O(a log_b n). Because constant coefficients are dropped in big O notation, the real representation of the rate of growth is:
O(log_b n).
Now, applying the first identity to the first formula, we have:
log_a n^b = b log_a n
Next, changing the base using the second property, we get:
log_a n^b = b (log_b n) / (log_b a)
This can also be rearranged to look like:
log_a n^b = (b / log_b a) log_b n
Note that (b / log_b a) is a constant coefficient, therefore (b / log_b a) log_b n is in O(log_b n).
So the answer to the question is yes: log_a n^b is in O(log_b n^a).
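As a quick numeric sanity check (not a proof), the ratio of the two functions settles at the constant (b ln b) / (a ln a); the values of a and b below are arbitrary example constants:

    import math

    a, b = 3.0, 5.0  # arbitrary example constants greater than 1

    for n in [10, 1_000, 1_000_000, 10**12]:
        lhs = math.log(n**b, a)  # log_a(n^b)
        rhs = math.log(n**a, b)  # log_b(n^a)
        print(n, lhs / rhs)      # the ratio is the same constant for every n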

Let us write the first expression as b*log_a(n) and the second as a*log_b(n).
The first one is equivalent to b*log(n)/log(a) and the second one is a*log(n)/log(b).
So, the hypothesis is: "Are there integers n0 and k such that for all n > n0, b*log(n)/log(a) < k*a*log(n)/log(b)?"
With a bit of simplification, that would be: "... b/log(a) < k*a/log(b)?"
With further rearrangement, we have: "... b*log(b) < k*a*log(a)?"
Since k may be chosen as large as we like, this inequality can always be satisfied for constants a, b > 1, so the answer is "yes".

Related

simple g(n) such that f(n) = Θ(g(n))

f(n) = 4 * 2^n + 4^n + 20n^5
So, g(n) = 4^n
Now our f(n) = O(g(n))
4 * 2^n + 4^n + 20n^5 ≤ c*4^n
How do we do this? I know how to do it for simple cases, but this one is far more complex. Would it go along the lines of removing the constant 4 and the 20n^5 term, to then have 2^n + 4^n ≤ c*4^n?
Or would it be for any c > 4*2^n + 20n^5? It feels like a lame answer, so I'm going to assume I'm wrong. I would prefer if someone hinted at the idea of how to solve these problems rather than giving me the answer, thank you.
Hint / preparations
In the context of asymptotic analysis, and Big-O notation specifically, when we want to prove that an inequality such as
4 * 2^n + 4^n + 20n^5 ≤ c*4^n, (+)
for some constant c > 0,
for n larger than some constant n0; n > n0
holds, we approach the left hand side expression term by term. Since we're free to choose any constants c and n0 to show that (+) holds, we can always express the lower order terms as less or equal to (≤) the higher order term by making n sufficiently large, e.g., choosing the value of n0 as we see fit.
Solution (spoilers ahead!)
Below follows one way to show that (+) holds for some set of positive constants c and n0. Since you only asked for hints, I suggest you start with the section above, and return to this section in case you get stuck or want to verify the derivation you ended up using.
A term-by-term analysis (in terms of 4^n) of the left hand side expression of (+) follows.
Term 4 * 2^n:
4 * 2^n = 4^n <=> (2*2)*2^n = (2^2)^n <=> 2^(n+2) = 2^(2n)
<=> n+2 = 2n => n = 2
=> 4 * 2^n ≤ 4^n for n ≥ 2 (i)
Term 4^n: Trivial
Term 20n^5:
for which n is 20 * n^5 = 4^n?
Graphical solution:
=> 20 * n^5 ≤ 4^n for n ≥~ 10.7 (choose 11) (ii)
Inserting inequalities (i) and (ii) in the lhs of (+) yields:
4 * 2^n + 4^n + 20n^5 ≤ 4^n + 4^n + 4^n = 3*4^n, for n > max(2, 11) = 11
where the bound n > max(2, 11) = 11 gives our choice of n0 and the factor 3 gives our choice of c.
Hence, we have shown that (+) holds for the constants n0 = 11 and c = 3. Naturally, the choice of these constants is not unique (in fact, if such constants exist, infinitely many of them exist). Consequently, the lhs of (+) is in O(4^n).
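As a small numeric sanity check of the derivation (not a substitute for the proof), a few lines of Python can brute-force the crossover point used in (ii) and verify the combined bound over a finite range; the helper name lhs is just an illustrative choice:

    def lhs(n):
        return 4 * 2**n + 4**n + 20 * n**5

    # crossover for the 20n^5 term (the "graphical solution" above)
    crossover = next(n for n in range(1, 100) if 20 * n**5 <= 4**n)
    print(crossover)  # 11, matching "n >= ~10.7 (choose 11)"

    # verify the final inequality on a finite range beyond n0 = 11
    assert all(lhs(n) <= 3 * 4**n for n in range(11, 200))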
Now, I note that your title mentions Big-Θ (whereas your question covers only Big-O). For deriving that the lhs of (+) is Θ(4^n), we also need to find a lower asymptotic bound on the lhs of (+) in terms of 4^n. Since n > 0, this is, in this case, quite trivial:
4 * 2^n + 4^n + 20n^5 ≥ c2*4^n ? for n > n0 ? (++)
=> 4 * 2^n + 4^n + 20n^5 ≥ 4^n, for n > 0
I.e., in addition to showing that (+) holds (which implies O(4^n)), we've shown that (++) holds for e.g. c2 = 1 and (re-use) n0 = 11, which implies that lhs of (+) is Θ(4^n).
One way to approach an asymptotic analysis of a function such as the left hand side of (+) would be to make use of the somewhat rigorous term-by-term analysis shown in this solution. In practice, however, we know that 4^n will quickly dominate the lower order terms, so we could've just chosen a somewhat large n0 (say 100) and tested, term by term, if the lower order terms could be replaced by the higher order term with less or equal to (≤) relation, given n>n0. Or, given in what context we need to make use of our asymptotic bounds, we could just glance at the function and, without rigour, directly state that the asymptotic behaviour of the function is naturally O(4^n), due to this being the dominant term. This latter method should, imo, only be used after one has grasped how to formally analyse the asymptotic behaviour of functions and algorithms in the context of Big-O/-Omega and -Theta notation.
The formal definition is
O(g(n)) = {f(n) | ∃c, n₀ : 0 ≤ f(n) ≤ c g(n), ∀ n ≥ n₀}
But when you want to check whether f(n) ∈ O(g(n)), you can use the simpler condition
lim sup_{n → ∞} f(n) / g(n) ≤ c
In this case,
lim sup_{n → ∞} (4*2^n + 4^n + 20n^5) / 4^n = 1
So yes, and since the ratio approaches 1 from above, we can choose, for example, any c > 1 (say c = 2).
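If you want to check the limit mechanically, here is a short sketch using sympy (assuming sympy is available); for this f the ratio has an ordinary limit, so limit() suffices in place of lim sup:

    import sympy as sp

    n = sp.symbols('n', positive=True)
    f = 4 * 2**n + 4**n + 20 * n**5
    g = 4**n

    print(sp.limit(f / g, n, sp.oo))  # 1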

Comparing O((logn)!) and O(2^n)

I am having a hard time comparing these two functions,
(logn)!
and
2^n
Any good mathematical proof?
You cannot compare O((log n)!) and O(2^n) directly, since big O notation represents a set. O(g(n)) is the set of all functions f such that f does not grow faster than g; formally, this is the same as saying that there exist C and n0 such that |f(n)| <= C|g(n)| for every n >= n0. The expression f(n) = O(g(n)) is shorthand for saying that f(n) is in the set O(g(n)). What we can do is check whether 2^n = O((log n)!) or (log n)! = O(2^n) (note that it could be that neither holds). Luckily, if we use the Stirling approximation we get that
log((log n)!) = (log n)*(log(log n)) - log n + O(log(log n)) = O(n * log 2)
since n * const grows faster than (log n)*(log(log n)), and (log n)*(log(log n)) is the leading term in (log n)*(log(log n)) - log n + O(log(log n)). So for sufficiently large n we have log((log n)!) <= log(2^n); exponentiating (2^x is increasing) gives (log n)! <= 2^n, i.e. (log n)! = O(2^n).
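A rough numeric illustration of how far apart the two sides are (not a proof): math.lgamma(x + 1) computes log(x!), so it can evaluate log((log n)!) directly. Natural logarithms are used here, which only changes constant factors:

    import math

    for n in [10, 100, 10_000, 10**8]:
        log_fact = math.lgamma(math.log(n) + 1)  # log((log n)!)
        log_exp = n * math.log(2)                # log(2^n)
        print(n, log_fact, log_exp)
    # log((log n)!) stays tiny next to n*log(2), consistent with (log n)! = O(2^n)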
One can easily show that for sufficiently large n it holds that:
(log n)! <= (log n)^{log(n)} <= n^{log(n)} = 2^{log^2(n)}
We can now compare only the exponents of 2 in 2^n and in the expression above, namely n and log^2(n) respectively (we can do that since we consider only sufficiently large n and 2^x is strictly increasing for positive x). It is sufficient to show that the limit below diverges to prove that (log n)! is, in fact, o(2^n):
lim[n -> inf] (n)/(log^2(n))
Now we apply L'Hôpital's rule:
= lim [n -> inf] n/(2 log(n))
And again:
= lim [n -> inf] n/2
Which diverges to infinity.
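A quick numeric spot check of that divergence (using base-2 logarithms here, which only changes constant factors):

    import math

    for n in [10, 10**3, 10**6, 10**9]:
        print(n, n / math.log2(n) ** 2)  # keeps growing without bound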

(log n)^k = O(n)? For k greater or equal to 1

(log n)^k = O(n)? For k greater or equal to 1.
My professor presented us with this statement in class; however, I am not sure what it means for a function to have a time complexity of O(n). Even for something like n^2 = O(n^2), how can a function f(x) have a run time complexity?
As for the statement, how does it equal O(n) rather than O((log n)^k)?
(log n)^k = O(n)?
Yes. The definition of big-Oh is that a function f is in O(g(n)) if there exist positive constants N and c, such that for all n > N: f(n) <= c*g(n). In this case f(n) is (log n)^k and g(n) is n, so if we insert that into the definition we get: "there exist constants N and c, such that for all n > N: (log n)^k <= c*n". This is true so (log n)^k is in O(n).
how can a function f(x) have a run time complexity
It doesn't. Nothing about big-Oh notation is specific to run-time complexity. Big-Oh is a notation to classify the growth of functions. Often the functions we're talking about measure the run-time of certain algorithms, but we can use big-Oh to talk about arbitrary functions.
f(x) = O(g(x)) means f(x) grows slower than or comparably to g(x).
Technically this is interpreted as "We can find an x value, x_0, and a scale factor, M, such that the size of f(x) past x_0 is less than the scaled size of g(x)." Or in math:
|f(x)| < M |g(x)| for all x > x_0.
So for your question:
log(x)^k = O(x)? is asking: do there exist an x_0 and M such that
log(x)^k < M x for all x>x_0.
The existence of such M and x_0 can be established using various limit results, and it is relatively simple using L'Hôpital's rule; however, it can also be done without calculus.
The simplest proof I can come up with that doesn't rely on L'Hôpital's rule uses the Taylor series
e^z = 1 + z + z^2/2 + ... = sum z^m / m!
Using z = (N! x)^(1/N) we can see that
e^((N! x)^(1/N)) = 1 + (N! x)^(1/N) + (N! x)^(2/N)/2 + ... + (N! x)^(N/N)/N! + ...
For x > 0 all terms are positive, so keeping only the Nth term we get that
e^((N! x)^(1/N)) = N! x / N! + (...)
= x + (...)
> x for x > 0
Taking logarithms of both sides (since log is monotonically increasing):
(N! x)^(1/N) > log x for x > 0
Then, raising both sides to the Nth power (valid once both sides are non-negative, i.e. for x ≥ 1 where log x ≥ 0):
N! x > (log x)^N for x ≥ 1
Which is exactly the result we need: (log x)^N < M x for some M and all x > x_0, with M = N! and x_0 = 1.
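A small spot check of the derived bound (not a proof), for a few values of N and x ≥ 1:

    import math

    for N in [1, 2, 5]:
        for x in [1.0, 10.0, 1e6, 1e12]:
            assert math.log(x) ** N <= math.factorial(N) * x
    print("bound holds on all sampled points")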

(log(n))^log(n) and n/log(n), which is faster?

f(n)=(log(n))^log(n)
g(n)= n/log(n)
f = O(g(n))?
Take the log of both sides:
log(f(n)) = log(log n) * log n
log(g(n)) = log(n) - log(log(n)) = log(n)(1 - log(log(n))/log(n))
Clearly the factor log(log(n)) grows without bound while (1 - log(log(n))/log(n)) tends to 1, so log(f(n)) eventually exceeds any constant multiple of log(g(n)). Hence g is O(f), and f is not O(g). Since it's homework, you may need to fill in the details.
It's also fairly easy to get an idea of what the answer should be just by trying it with a large number. 1024 is 2^10, so taking n = 1024:
f(n) = 10^10
g(n) = 1024/10.
Obviously that's not a proof, but I think we can see who's winning this race.
f(n) grows faster than g(n) if and only if f(e^n) also grows faster than g(e^n), since exp is strictly increasing to infinity (prove it yourself).
Now f(e^n) = n^n and g(e^n) = e^n / n, and you can quote the known results.
If Limit[f[x] / g[x], x -> Infinity] = Infinity, then f[x] grows faster than g[x].
Limit[Log[x] ^ Log[x] / (x / Log[x]), x -> Infinity] = + Infinity
So, Log[x] ^ Log[x] grows faster than x / Log[x]
Mathematica gives the limit of f(n) / g(n) as n tends towards infinity as infinity, which means that f grows faster. This means that g(n) belongs to (=) O(f(n)).
You can use this for example if you don't have Mathematica.
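If Mathematica isn't at hand, the same limit can be computed with sympy; a minimal sketch, assuming sympy is installed:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    f = sp.log(x) ** sp.log(x)
    g = x / sp.log(x)

    print(sp.limit(f / g, x, sp.oo))  # should print oo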
f is vastly bigger, by a factor of n^(log log(n) - 1) * log(n).

Proving and Disproving BigO

For Big O questions that explicitly say to use the definition to prove or disprove a statement, my question is: is what I am doing correct?
For example, given a claim g(n) = O(f(n)), in order to prove it I was doing the following:
g(n) <= C*f(n)
g(n)/f(n) <= C, then set n = 1 and solve for C, which proves it.
The contradiction I run into is when I approach a question asking to disprove such a claim,
for example
g(n) = O(f(n)); to disprove it I would do
g(n) >= C*f(n) and solve for C again. However, this leads me to believe that big O can be proved and disproved at once, which is 100% wrong.
Using real world numbers
(Proving)
n^2 + 3 = O(n^2)
(n^2 + 3)/n^2 <= C; assume n = 1, then C >= 4
Disproving
n^2 + 3 = O(n^2)
(n^2 + 3)/n^2 >= C; assume n = 1, then C <= 4
n^2 + 3 = O(n^2)
Both of these say that at n = 1 and C = 4 the algorithm is O(n^2) and is NOT O(n^2).
Can anyone help clarify my confusion and help me learn a good algorithmic way of solving big O questions?
Neither of your techniques works. Let's start with the definition of big-O:
f is O(g) iff there exist C, N such that |f(x)| ≤ C |g(x)| for all x ≥ N
To prove "there exist" type statements, you need to show that, well, the things exist. In the case of big-O proofs, you usually find the things, though proofs of existence don't generally need to be constructive. To build a proof for a "for all" statement, pretend someone just handed you specific values. Be careful you make no implicit assumptions about their properties (you can explicitly state properties, such as N > 0).
In the case of proving big-O, you need to find the C and N. Showing |g(n)| ≤ C|f(n)| for a single n isn't sufficient.
For the example "n^2 + 3 is O(n^2)":
For n ≥ 2, we have:
n^2 ≥ 4 > 3
⇒ n^2 - 1 > 2
⇒ 2(n^2 - 1) > (n^2 - 1) + 2
⇒ 2n^2 > (n^2 - 1) + 4 = n^2 + 3
Thus n^2 + 3 is O(n^2) for C = 2, N = 2.
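A quick finite spot check of that inequality (not a substitute for the proof above):

    # check n^2 + 3 <= 2*n^2 for all n from N = 2 up to a finite bound
    assert all(n**2 + 3 <= 2 * n**2 for n in range(2, 10_000))
    print("inequality holds for 2 <= n < 10000")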
To disprove, you take the negation of the statement: show there is no C or N. In other words, show that for all C and N, there exists an n > N such that |f(n)| > C |g(n)|. In this case, the C and N are qualified "for all", so pretend they've been given to you. Since n is qualified "there exists", you have to find it. This is where you start with the equation you wish to prove and work backwards until you find a suitable n.
Suppose we want to prove that n is not O(ln n). Pretend we're given N and C, and we need to find an n ≥ N such that n > C ln n.
For all whole numbers C, N, let M = 1 + max(N, C) and n = e^M. Note n > N and M > 0.
Thus n = e^M > M^2 = M ln e^M = M ln n > C ln n. QED.
Proofs of x > 0 ⇒ e^x > x^2 and "n is not O(ln n)" ⇒ "n is not O(log_b n)" left as exercises.
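A numeric illustration of the construction above for a few sample pairs (C, N) (not a proof):

    import math

    for C, N in [(1, 1), (10, 3), (100, 50)]:
        M = 1 + max(N, C)
        n = math.exp(M)
        assert n > N and n > C * math.log(n)
    print("construction works on all sampled (C, N) pairs")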
