f(n)=n^log(n) complexity polynomial or exponential - complexity-theory

I'm trying to figure out whether f(n) = n^(log_b(n)) is in Theta(n^k), and therefore grows polynomially, or in Theta(k^n), and therefore grows exponentially.
First I tried to simplify the function:
f(n) = n^(log_b(n)) = n^(log(n)/log(b)) = n^((1/log(b)) * log(n)), and because 1/log(b) is a constant we can just look at f(n) = n^(log(n)).
But now I'm stuck. My guess is that f(n) grows exponentially in Theta(n^(log(n))), or even hyper-exponentially, because the exponent log(n) is itself growing.

You can write n^(log(n)) as (k^(log_k(n)))^(log(n)) = k^(c * (log(n))^2), where c = 1/log(k) is a constant. Since c * (log(n))^2 < n for n large enough, this means that n^(log(n)) grows slower than k^n.

Try substituting n = b^m, which makes log_b(n) = m. That should give you an idea of where to go.
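To make the hint concrete, here is a quick numeric sketch of that substitution, taking b = 2 (so m = log2(n)); the exponent k = 3 is just an illustrative choice:

# With n = 2**m we have log2(n) = m, so n**log2(n) = 2**(m*m): it eventually
# outgrows every polynomial n**k = 2**(k*m), but is outgrown by 2**n = 2**(2**m).
for m in range(1, 7):
    n = 2 ** m
    poly = n ** 3      # a sample polynomial n^k, with k = 3
    quasi = n ** m     # n^(log2 n) = 2^(m^2)
    expo = 2 ** n      # a genuine exponential, 2^n
    print(f"n={n:3d}  n^3={poly}  n^log2(n)={quasi}  2^n={expo}")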

It seems like it is neither Theta(exponential) nor Theta(polynomial); the posters above showed why it is not exponential. That it is not polynomial can be shown with a short proof by contradiction.
Proof that n^(log n) is not O(n^k):
Suppose there were some k, together with constants c and n_0, such that c*n^k >= n^(log n) for all n > n_0; this is the definition of O(n^k).
Taking logarithms of both sides, this would require log(c) + k*log(n) >= (log(n))^2 for all n > n_0.
But the left-hand side is linear in log(n) while the right-hand side is quadratic in log(n), so the inequality fails as soon as log(n) exceeds both k + 1 and log(c). Hence no such constants exist, and n^(log n) is not O(n^k).
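If you want to see the contradiction numerically, here is a small check with illustrative constants c = 100 and k = 5 (my own choices, not part of the proof); once log(n) is large enough, n^(log n) overtakes c*n^k:

import math

# Compare n**log(n) against c * n**k; natural log, matching the proof above.
c, k = 100.0, 5
for e in range(2, 8):
    n = 10 ** e
    lhs = n ** math.log(n)   # n^log(n)
    rhs = c * n ** k         # c * n^k
    print(f"n=1e{e}: n^log(n) {'>' if lhs > rhs else '<='} c*n^k  ({lhs:.3e} vs {rhs:.3e})")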


Which Big-O grows faster asymptotically

I have gotten into an argument/debate recently and I am trying to get a clear verdict on the correct solution.
It is well known that n! grows very quickly, but exactly how quickly? Enough to "hide" any additional factors that might be attached to it?
Let's assume I have this silly & simple program (no particular language):
for i from 0 to n! do:
    ; // nothing
Given that the input is n, the complexity of this is obviously O(n!) (or even Θ(n!), but that isn't relevant here).
Now let's assume this program:
for i from 0 to n do:
    for j from 0 to n do:
        for k from 0 to n! do:
            ; // nothing
Bob claims: "This program's complexity is obviously O(n)·O(n)·O(n!) = O(n!·n^2) = O((n+2)!)."
Alice responds: "I agree with you Bob, but actually it would be sufficient if you said that the complexity is O(n!), since O(n!·n^k) = O(n!) for any constant k >= 1."
Is Alice right in her remark on Bob's analysis?
Alice is wrong, and Bob is right.
Recall an equivalent definition of big-O notation in terms of limits:
f(n) is in O(g(n)) iff
lim_{n->infinity} f(n)/g(n) < infinity
For any k > 0:
lim_{n->infinity} (n! * n^k) / n! = lim_{n->infinity} n^k = infinity
And thus, n! * n^k is NOT in O(n!).
Amit's solution is perfect; I would only add a more "human" explanation, because understanding the definition can be difficult for beginners.
The definition basically says: if, as you increase the value n, the functions f(n) and g(n) differ only by a factor of k, where k is a constant that does not change (for example g(n) is always about 100 times higher, no matter whether n = 10000 or n = 1000000), then these functions have the same complexity.
If g(n) is 100 times higher for n = 10000 but only 80 times higher for n = 1000000, then f(n) has higher complexity! Because as n grows and grows, f(n) will eventually catch up with g(n) and then grow more and more compared to g(n). In complexity theory you are interested in how things end up "at infinity" (or, more concretely, at extremely HIGH values of n).
If you compare n! and n!*n^2, you can see that for n = 10 the second function has a 10^2 = 100 times higher value. For n = 1000 it has a 1000^2 = 1000000 times higher value. And as you can imagine, the difference will keep growing.
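A tiny numeric sketch of that last point (the sample values of n are arbitrary):

import math

# The gap between n! and n!*n^2 is a factor of n^2, which keeps growing,
# so no fixed constant c can make n!*n^2 <= c*n! hold for all n.
for n in (10, 100, 1000):
    gap = (math.factorial(n) * n ** 2) // math.factorial(n)   # exactly n^2
    print(f"n={n}: (n! * n^2) / n! = {gap}")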

What are sublinear algorithms?

I was asked the following question by one of my peers.
Which of the following expressions is not sublinear?
O(log log n)
O(n)
O(log n)
O(root(n))
I have gone through https://en.wikipedia.org/wiki/Time_complexity#Sub-linear_time, but I am not sure that I have understood it completely. Could someone point me in the right direction?
A function f(x) is said to grow at least as fast as another function g(x) if the limit of their ratio f(x)/g(x) as x approaches infinity is some positive number (or infinity).
In the case of sublinear, we want to show that a function grows slower than c*n, where c is some positive number.
Thus, for each function f(n) in your list, we look at the ratio of f(n) to (c*n). If the limit is 0, the function f(n) is sublinear. Otherwise it grows at (approximately) the same speed as n, or faster.
lim n->inf (log log n)/(c*n) = 0 (via l'Hopital's)
(sublinear)
lim n->inf (n)/(c*n) = 1/c != 0
(linear)
lim n->inf (log n)/(c*n) = 0 (via l'Hopital's)
(sublinear)
lim n->inf (sqrt(n))/(c*n) = 0
(sublinear)
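If you prefer to see it numerically rather than via l'Hopital, here is a rough sketch (taking c = 1 for simplicity):

import math

# For a sublinear f, the ratio f(n)/n shrinks toward 0 as n grows;
# for f(n) = n it stays constant.
for e in (3, 6, 9):
    n = 10 ** e
    print(f"n=1e{e}: loglog(n)/n={math.log(math.log(n))/n:.1e}  "
          f"log(n)/n={math.log(n)/n:.1e}  "
          f"sqrt(n)/n={math.sqrt(n)/n:.1e}  "
          f"n/n={n/n:.1f}")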
I think I understand why you're confused: the Wikipedia page you link uses little-o notation:
Sub-linear time
An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) = o(n)
Beware that T(n) = o(n) is a stronger requirement than saying T(n) = O(n).
In particular, for a function in O(n) you can't always have the inequality
f(x) < k g(x) for all x > a
satisfied for every k you choose: f(x) = x with k = 1 proves you wrong, and little-o notation requires the inequality to hold for every k > 0.
A function that is Θ(n), such as n itself, is therefore not in o(n). Thus the non-sublinear expression in your list is O(n).
I recommend reading this answer to continue your studies

Big-O of `2n-2n^3`

I am working on an assignment, but a friend of mine disagrees with my answer to one part.
f(n) = 2n-2n^3
I find the complexity to be f(n) = O(n^3)
Am I wrong?
You aren't wrong, since O(n^3) does not provide a tight bound. However, typically you assume that f is increasing and try to find the smallest function g for which f=O(g) is true. Consider a simple function f=n+3. It's correct to say that f=O(n^3), since n+3 < n^3 for all n > 2 (just to pick an arbitrary constant). However, it's "more" correct to say that f=O(n), since n+3 < 2n for all n > 3, and this gives you a better feel for how f behaves as n increases.
In your case, f is decreasing as n increases, so it is true that f = O(g) for any function that stays positive as n increases. The "smallest" (or rather, slowest growing) such function is a constant function for some positive constant, and we usually write that as 2n - 2n^3 = O(1), since 2n - 2n^3 < 1 for all n>0.
You could even find some function of n that is decreasing as n increases, but decreases more slowly than your f, but such usage is rare. Big-O notation is most commonly used to describe algorithm running times as the input size increases, so n is almost universally assumed to be positive.
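For what it's worth, a quick numeric look at f(n) = 2n - 2n^3 for a few positive n backs this up: the value drops below 1 immediately and then heads toward minus infinity.

# f(n) = 2n - 2n^3 never exceeds 1 for n > 0, which is why the answer above can
# write 2n - 2n^3 = O(1) when Big-O is applied to the signed (not absolute) value.
for n in (1, 2, 5, 10, 100):
    print(f"n={n:3d}: 2n - 2n^3 = {2 * n - 2 * n ** 3}")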

Are 2^n and n*2^n in the same time complexity?

Resources I've found on time complexity are unclear about when it is okay to ignore terms in a time complexity expression, specifically with non-polynomial examples.
It's clear to me that given something of the form n^2 + n + 1, the last two terms are insignificant.
Specifically, given two functions, 2^n and n*(2^n), is the second in the same order as the first? Does the additional factor of n there matter? Usually resources just say x^n is exponential and grows much faster... then move on.
I can understand why it might not matter, since 2^n will greatly outpace n, but because they're not being added together, it seems to matter greatly when comparing the two; in fact the difference between them will always be a factor of n, which seems important to say the least.
You will have to go to the formal definition of big-O in order to answer this question.
The definition is that f(x) belongs to O(g(x)) if and only if limsup_{x→∞} f(x)/g(x) is finite, i.e. not infinity. In short this means that there exists a constant M such that the value of f(x)/g(x) is never greater than M (for sufficiently large x).
In the case of your question, let f(n) = n⋅2ⁿ and let g(n) = 2ⁿ. Then f(n)/g(n) = n, which still grows without bound. Therefore f(n) does not belong to O(g(n)).
A quick way to see that n⋅2ⁿ is bigger is to make a change of variable. Let m = 2ⁿ. Then n⋅2ⁿ = ( log₂m )⋅m (taking the base-2 logarithm on both sides of m = 2ⁿ gives n = log₂m ), and you can easily show that m log₂m grows faster than m.
I agree that n⋅2ⁿ is not in O(2ⁿ), but I thought it should be more explicit since the limit superior usage doesn't always hold.
By the formal definition of Big-O: f(n) is in O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that for all n ≥ n₀ we have f(n) ≤ c⋅g(n). It can easily be shown that no such constants exist for f(n) = n⋅2ⁿ and g(n) = 2ⁿ. However, it can be shown that g(n) is in O(f(n)).
In other words, n⋅2ⁿ is lower bounded by 2ⁿ. This is intuitive. Although they are both exponential and thus are equally unlikely to be used in most practical circumstances, we cannot say they are of the same order because 2ⁿ necessarily grows slower than n⋅2ⁿ.
I do not argue with the other answers that say that n⋅2ⁿ grows faster than 2ⁿ. But the growth of n⋅2ⁿ is still only exponential.
When we talk about algorithms, we often say that the time complexity grows exponentially.
So we consider 2ⁿ, 3ⁿ, eⁿ, 2.000001ⁿ, and our n⋅2ⁿ to all belong to the same group of complexities with exponential growth.
To give this a bit of mathematical sense, we say a function f(x) grows (no faster than) exponentially if there exists a constant c > 1 such that f(x) = O(c^x).
For n⋅2ⁿ the constant c can be any number greater than 2; let's take 3. Then:
n⋅2ⁿ / 3ⁿ = n⋅(2/3)ⁿ, and this is less than 1 for every n ≥ 1, so it is certainly bounded.
So 2ⁿ grows slower than n⋅2ⁿ, which in turn grows slower than 2.000001ⁿ. But all three of them grow exponentially.
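A quick numeric peek at that ratio (the sample values of n are arbitrary):

# n * (2/3)**n stays bounded (in fact below 1 for n >= 1), which is exactly
# what n*2^n = O(3^n) requires.
for n in (1, 2, 3, 5, 10, 50, 100):
    print(f"n={n:3d}: n*(2/3)^n = {n * (2 / 3) ** n:.6f}")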
You asked "is the second in the same order as the first? Does the additional n multiplication there matter?" These are two different questions with two different answers.
n 2^n grows asymptotically faster than 2^n. That's that question answered.
But you could ask "if algorithm A takes 2^n nanoseconds, and algorithm B takes n 2^n nanoseconds, what is the biggest n where I can find a solution in a second / minute / hour / day / month / year?" And the answers are n = 29/35/41/46/51/54 vs. 25/30/36/40/45/49. Not much difference in practice.
The size of the biggest problem that can be solved in time T is O(ln T) in both cases.
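Here is a small sketch that reproduces those numbers, assuming one operation per nanosecond, a 30-day month and a 365-day year:

# Largest n whose running time still fits in the budget, for 2^n ns vs n*2^n ns.
def biggest_n(budget_ns, cost):
    n = 1
    while cost(n + 1) <= budget_ns:
        n += 1
    return n

second = 10 ** 9  # nanoseconds
budgets = [("second", second), ("minute", 60 * second), ("hour", 3600 * second),
           ("day", 86400 * second), ("month", 30 * 86400 * second),
           ("year", 365 * 86400 * second)]

for name, ns in budgets:
    print(f"{name:>6}: 2^n -> n={biggest_n(ns, lambda m: 2 ** m)}, "
          f"n*2^n -> n={biggest_n(ns, lambda m: m * 2 ** m)}")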
The very simple answer is 'NO'.
Compare 2^n and n*2^n:
n*2^n > 2^n for any n > 1.
Or you can apply log to both; then you get
n*log(2) < n*log(2) + log(n)
Hence, by both types of analysis, that is by
substituting a number
taking logs
we see that n*2^n is greater than 2^n.
So if you get an expression like O(2^n + n*2^n), it can be replaced by O(n*2^n).

Asymptotic Complexity of Logarithms and Powers

So, clearly, log(n) is O(n). But what about (log(n))^2? What about sqrt(n) versus log(n): which bounds which?
There's a family of comparisons like this:
n^a (vs.) (log(n))^b
I run into these comparisons a lot, and I've never come up with a good way to solve them. Hints for tactics for solving the general case?
[EDIT: I'm not talking about the computational complexity of calculating the values of these functions. I'm talking about the functions themselves. E.g., f(n) = n is an upper bound on g(n) = log(n) because g(n) ≤ c·f(n) for c = 1 and all n ≥ n₀ = 1.]
log(n)^a is always O(n^b), for any positive constants a, b.
Are you looking for a proof? All such problems can be reduced to seeing that log(n) is O(n), by the following trick:
log(n)^a = O(n^b) is equivalent to:
log(n) = O(n^{b/a}), since raising to the 1/a power is an increasing function.
This is equivalent to
log(m^{a/b}) = O(m), by setting m = n^{b/a}.
This is equivalent to log(m) = O(m), since log(m^{a/b}) = (a/b)*log(m).
You can prove that log(n) = O(n) by induction, focusing on the case where n is a power of 2.
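If a numeric sanity check helps alongside the proof, here is one with sample constants a = 3 and b = 0.5 (chosen arbitrarily): the ratio grows at first, then collapses toward 0.

import math

# (log n)^a / n^b -> 0 for any positive constants a and b, even though the
# logarithmic power can dominate for small n.
a, b = 3, 0.5
for e in (1, 3, 6, 12, 24, 48):
    n = 10.0 ** e
    print(f"n=1e{e}: (log n)^3 / n^0.5 = {math.log(n) ** a / n ** b:.3e}")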
log n -- O(log n)
sqrt n -- O(sqrt n)
n^2 -- O(n^2)
(log n)^2 -- O((log n)^2)
n^a versus (log(n))^b
You need either the bases or the exponents to be the same, so use your math to rewrite one side in terms of the other (change n^a to a power of log(n), or (log(n))^b to a power of n, adjusting the exponent accordingly). There is no general case.
I run into these comparisons a lot (...)
Hints for tactics for solving the general case?
Since you ask about the general case and say that you run into such questions a lot, here is what I recommend:
Use the limit definition of big-O notation; once you know that
f(n) = O(g(n)) iff lim_{n->infinity} f(n)/g(n) exists and is not +infinity,
you can use a computer algebra system, for example the open-source Maxima; see the Maxima documentation about limits.
For more detailed info and an example, check out THIS answer.
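If you would rather stay in Python than install Maxima, SymPy's limit() can do the same kind of check; a quick sketch using a couple of the comparisons from this page:

from sympy import Symbol, log, sqrt, limit, oo

n = Symbol('n', positive=True)
print(limit(log(n) ** 2 / n, n, oo))        # 0  -> (log n)^2 is O(n)
print(limit(sqrt(n) / n, n, oo))            # 0  -> sqrt(n) is sublinear
print(limit((n * 2 ** n) / 2 ** n, n, oo))  # oo -> n*2^n is not O(2^n)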
