Is log(n-f(n)) big theta of log(n) - algorithm

The problem is that I need to know if log(n-f(n)) is big theta of log(n), where f(n) is a lower order function than n, e.g., log(n) or sqrt(n).
I tried using some log rules, and plotting seems to confirm the bound, but I can't prove it exactly.

Since f(n) is of lower order than n, f(n) = o(n). Hence n - f(n) < 2n, so n - f(n) = O(n). Also, for sufficiently large n we have f(n) < 0.01n, i.e. n - f(n) > n - 0.01n = 0.99n (the constant 0.01 is arbitrary; any positive constant works with o(n)). Therefore n - f(n) = Omega(n), and so n - f(n) = Theta(n).
Since n - f(n) = Theta(n) and log is an increasing function with log(c*n) = log(c) + log(n) = Theta(log n) for any constant c > 0, it follows that log(n - f(n)) = Theta(log(n)).
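A quick numerical check makes the bound concrete (a sketch in Python, not part of the original answer; f(n) = sqrt(n) is just an example of a lower-order term): the ratio log(n - f(n)) / log(n) approaches 1.

import math

# Sketch: check that log(n - f(n)) / log(n) -> 1 for a lower-order f(n),
# here f(n) = sqrt(n) as an example.
for n in [10**2, 10**4, 10**8, 10**12]:
    f = math.sqrt(n)
    print(n, math.log(n - f) / math.log(n))

The printed ratios climb toward 1 as n grows, consistent with log(n - f(n)) = Theta(log n).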

Related

How to prove or disprove a statement - Time Complexity

For all functions f, log_2(f(n)) + O(n) = O(n).
I have tried disproving it by taking the limit, but got infinity as a result. Is that right?
The statement is not true. As a counterexample, take f(n) = n^n. Then log(f(n)) = n log(n), and n log(n) + O(n) is not in O(n).
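A tiny sanity check of the counterexample (a sketch, not from the original answer): for f(n) = n^n, log2(f(n)) / n = log2(n), which grows without bound, so no constant K can satisfy log2(f(n)) <= K*n for all large n.

import math

# Counterexample check: with f(n) = n^n, log2(f(n)) = n * log2(n),
# so the ratio log2(f(n)) / n = log2(n) is unbounded.
for n in [2**4, 2**10, 2**20, 2**40]:
    print(n, (n * math.log2(n)) / n)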

Complexity of f(k) when f(n) = O(n!) and k=n*(n-1)

I have the following problem. Let's suppose we have function f(n). Complexity of f(n) is O(n!). However, there is also parameter k=n*(n-1). My question is - what is the complexity of f(k)? Is it f(k)=O(k!/k^2) or something like that, taking into consideration that there is a quadratic relation between k and n?
Computational complexity is interpreted based on the size of the input. Hence, if f(n) = O(n!) when the input is n, then f(k) = O(k!) when the input is k.
Therefore, you don't need to recompute the complexity for each particular value of the input. For example, f(2) = O(2!); you don't need to express the complexity of f(10) as O((5*2)!) just because 10 = 5*2 and then try to simplify it in terms of 2!. We simply say f(10) = O(10!).
Anyhow, if you want to express it in terms of (n^2)!: (n*(n-1))! = (n^2 - n)! = (n^2)! / [(n^2 - n + 1)(n^2 - n + 2)...(n^2)]. The denominator is a product of n factors, each lying between n^2 - n and n^2, so it lies between (n^2 - n)^n and (n^2)^n and is therefore Theta(n^(2n)) (the ratio (1 - 1/n)^n is bounded below by a positive constant). Hence (n^2 - n)! = (n^2)! / Theta(n^(2n)).
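As a rough numerical check of that identity (a sketch using log-factorials via math.lgamma, not part of the original answer): the ratio of the n-factor product D(n) = (n^2)! / (n^2 - n)! to n^(2n) should stay between fixed positive constants.

import math

# D(n) = (n^2)! / (n^2 - n)! is the product of the n factors (n^2 - n + 1) ... (n^2).
# If D(n) = Theta(n^(2n)), then D(n) / n^(2n) stays bounded between positive constants.
for n in [10, 100, 1000]:
    log_D = math.lgamma(n * n + 1) - math.lgamma(n * n - n + 1)
    print(n, math.exp(log_D - 2 * n * math.log(n)))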
Did you consider that there is an m such that the n you used in your f(n) is equal to m * (m - 1)?
Does that change the complexity?
The n in f(n) = O(n!) represents all the valid inputs.
You are trying to pass a variable k whose actual value in terms of another variable is n * (n - 1). That does not change the complexity. It will be O(k!) only.

Asymptotic Complexity comparison

Can anybody explain which one of these has the highest asymptotic complexity and why:
10000000n vs 1.000001^n vs n^2
You can use standard domination rules from asymptotic analysis.
Domination rules tell you that when n -> +Inf, n = o(n^2). (Note the difference between the notations O(.) and o(.), the latter meaning f(n) = o(g(n)) iff there exists a sequence e(n) which converges to 0 as n -> +Inf such that f(n) = e(n)g(n). With f(n) = n, g(n) = n^2, you can see that f(n)/g(n) = 1/n -> 0 as n -> +Inf.)
Furthermore, you know that for any integer k and real x > 1, we have n^k/x^n -> 0 as n -> +Inf. x^n (exponential) complexity dominates n^k (polynomial) complexity.
Therefore, in order of increasing complexity, you have:
n << n^2 << 1.000001^n
Note: 10000000n could be written O(n) with the loose writing conventions used for asymptotic analysis in computer science. Recall that the complexity C(n) of an algorithm is O(n) (C(n) = O(n)) if and only if there exist an integer p >= 0 and a constant K >= 0 such that for all n >= p the relation |C(n)| <= K*n holds.
When estimating asymptotic time complexity, you can ignore all constant coefficients and compare only the growth of the remaining terms.
Dropping the coefficients leaves n, n^2 and 1.000001^n.
Among the polynomial terms, the higher exponent wins, so n^2 grows faster than n. However, an exponential term with any base strictly greater than 1 eventually dominates every polynomial, even with a base as close to 1 as 1.000001. So 1.000001^n has the highest asymptotic complexity, followed by n^2, then 10000000n.
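A numerical illustration of that ordering (a sketch; the sample values of n are arbitrary): 1.000001^n starts far below the other two functions but overtakes both once n is large enough, around a few times 10^7 for these particular constants.

# Despite its tiny base, 1.000001^n eventually dominates both 10000000*n and n^2.
for n in [10**3, 10**6, 10**7, 5 * 10**7, 10**8]:
    linear = 10000000 * n
    quadratic = n ** 2
    exponential = 1.000001 ** n
    print(n, linear, quadratic, exponential)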

If g(n) = sqrt(n)^sqrt(n), does the complexity of g(n) = O(2^n)?

If g(n) = sqrt(n)^sqrt(n), does the complexity of g(n) = O(2^n)?
Any help is appreciated.
A useful technique when comparing two exponential functions is to get them to have the same base:
√n^√n = (2^(lg √n))^√n = 2^(√n lg √n)
Now you're comparing 2^(√n lg √n) against 2^n, and hopefully from that it's easy to see that the former function does not grow as rapidly as the latter, so √n^√n = O(2^n) is indeed true.
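A quick numerical check of that comparison (a sketch, not part of the original answer): the exponent sqrt(n) * lg(sqrt(n)) divided by n tends to 0, so 2^(sqrt(n) lg sqrt(n)) is eventually far smaller than 2^n.

import math

# Compare the exponents sqrt(n) * lg(sqrt(n)) and n: their ratio tends to 0.
for n in [10**2, 10**4, 10**6, 10**10]:
    exponent = math.sqrt(n) * math.log2(math.sqrt(n))
    print(n, exponent / n)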
The other proofs are short and nice, but here is a more detailed proof that goes back to the definition of big-O notation and computes the needed limits.
A function g(n) is upper-bounded by another function f(n) in big-O notation (g(n) = O(f(n))) if it holds that
lim sup (n -> inf) g(n) / f(n) < inf.
Plugging in our functions, we must compute
lim (n -> inf) sqrt(n)^sqrt(n) / 2^n.
First, some algebraic massage on the g(n) term. By the root identities, sqrt(n) = n^(1/2), and (x^a)^b = x^(a*b), so sqrt(n)^sqrt(n) = n^(1/2 * sqrt(n)).
Furthermore, 2^n = exp(log(2^n)) = exp(n * log(2)) by the identity log(a^b) = b * log(a). The same applies to n^(1/2 * sqrt(n)): it becomes exp(log(n^(1/2 * sqrt(n)))) = exp(1/2 * sqrt(n) * log(n)). So now we have
lim (n -> inf) exp(1/2 * sqrt(n) * log(n)) / exp(n * log(2)).
At this point we can compare the growth of the exponents, i.e. compute
lim (n -> inf) (1/2 * sqrt(n) * log(n)) / (n * log(2)).
That limit is 0 because a constant times n grows faster than sqrt(n) * log(n). This can in turn be shown by calculating the limit explicitly. Put the 1/2 and the log(2) into the denominator; since n = sqrt(n) * sqrt(n), the expression simplifies to
lim (n -> inf) log(n) / (2 * log(2) * sqrt(n)).
This limit is indeed zero, because the square root grows faster than the logarithm (see the orders of common functions). Thus the exponent of g(n) grows more slowly than the exponent of 2^n, and g(n) = O(2^n) follows rigorously from the definition above.
One can assume O(log n) < O(sqrt(n)) (Order of common functions - wikipedia)
The transformation works as follows:
sqrt(n)^sqrt(n) < 2^n                          # a^b = e^(ln(a) * b)
e^(ln(sqrt(n)) * sqrt(n)) < e^(ln(2) * n)      # if e^a < e^b, then a < b
ln(sqrt(n)) * sqrt(n) < ln(2) * n              # / sqrt(n)
ln(sqrt(n)) < ln(2) * sqrt(n)                  # ln(a^b) = b * ln(a)
0.5 * ln(n) < ln(2) * sqrt(n)                  # ln(a) / ln(b) = log(a base b)
0.5 * log(n base 2) < sqrt(n)                  # base and constant factor don't matter
log(n) < sqrt(n)
I've omitted complexity-classes for simplicity. The above should be read bottom to top for a proper proof.

Is the big-O complexity of these functions correct?

I am learning about algorithm complexity, and I just want to verify my understanding is correct.
1) T(n) = 2n + 1 = O(n)
This is because we drop the constants 2 and 1, and we are left with n. Therefore, we have O(n).
2) T(n) = n * n - 100 = O(n^2)
This is because we drop the constant -100, and are left with n * n, which is n^2. Therefore, we have O(n^2)
Am I correct?
Basically you have these different levels, determined by the "dominant" part of your function, starting from the lowest complexity (see the sketch after this list):
O(1) if your function only contains constants
O(log(n)) if the dominant part is in log, ln...
O(n^p) if the dominant part is polynomial and the highest power is p (e.g. O(n^3) for T(n) = n*(3n^2 + 1) -3 )
O(p^n) if the dominant part is a fixed number to n-th power (e.g. O(3^n) for T(n) = 3 + n^99 + 2*3^n)
O(n!) if the dominant part is factorial
and so on...
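As a rough illustration of the "dominant part" idea (a sketch reusing the two example functions from the list above; the sample values of n are arbitrary), dividing T(n) by its dominant term gives a ratio that settles near a constant.

# T(n) divided by its dominant term settles near a constant.
# Note: in the exponential example, 3^n only overtakes n^99 once n is fairly large.
for n in [1000, 2000, 4000]:
    t_poly = n * (3 * n**2 + 1) - 3    # dominant term n^3  -> O(n^3)
    t_exp = 3 + n**99 + 2 * 3**n       # dominant term 3^n  -> O(3^n)
    print(n, t_poly / n**3, t_exp / 3**n)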

Resources