How can we expand (log n)! and (log n!)! to determine the complexities?

I know that log(n!) is O(n log n), but how do we expand the two expressions above? The second one may be simplified to (n log n)!. Please clarify this.

You could estimate upper and lower bounds for (log(n))! using the identity m! = 1 · 2 · … · m together with product estimations.
For an upper bound:
For a lower bound:
Combined you will get:
So at least:
Obviously, the (in)equations are somewhat 'odd' due to the non-integer index boundaries of the products.
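One concrete pair of product estimates (a sketch; it takes log to be the natural logarithm and ignores the rounding of the index boundaries) is
$$\left(\tfrac{m}{2}\right)^{m/2} \;\le\; m! \;\le\; m^{m},$$
so with $m = \log n$,
$$n^{\frac{1}{2}(\log\log n - \log 2)} \;\le\; (\log n)! \;\le\; n^{\log\log n},$$
which already shows that $(\log n)!$ grows faster than any fixed power of $n$.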
Update:
The bound given by hwlau using the Stirling approximation is lower (by sqrt(log(n))/n) and should be tight.

Update: No, you cannot use (N ln N)! in your second formula. The reason is explained below using the first case.
With the log version of the Stirling approximation, we have
ln(z!) = (z+1/2) ln z - z + O(1)...
Note that the extra z is kept here; the reason will become obvious soon. Now if we let x = ln N,
(ln N)! = x! = exp(ln x!)
~ exp((x+1/2) ln x - x) = x^(x+1/2) exp(-x)
= (ln N)^((ln N)+1/2) / N
The extra term we kept turns out to contribute a factor of 1/N, and it definitely affects the complexity, since we cannot simply throw away the exponential of something. If we denote the approximation above by g(N) and let f(N) = (ln N)!, then lim f(N)/g(N) = sqrt(2 pi) < inf, so f = O(g).
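A quick numeric check of that limit (a sketch; it interprets the factorial of the non-integer ln N through the gamma function, via Python's math.lgamma):
import math
# f(N) = (ln N)!  (via the gamma function), g(N) = (ln N)^(ln N + 1/2) / N.
# The ratio f(N)/g(N) should approach sqrt(2*pi) as N grows.
for N in (1e3, 1e6, 1e12, 1e24, 1e48):
    x = math.log(N)
    log_f = math.lgamma(x + 1)            # ln((ln N)!)
    log_g = (x + 0.5) * math.log(x) - x   # ln(g(N)); the -x term is -ln N
    print(N, math.exp(log_f - log_g))
print(math.sqrt(2 * math.pi))             # the claimed limit, about 2.5066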
For (ln N!)!, it is a bit more complicated. I used Mathematica to check the limit, and it suggests that the expansion
ln(z!) ~ (z+1/2) ln z - z + ln(sqrt(2pi))
is enough. I don't have a general rule for when we can stop, and in general it may not be possible to use only finitely many terms. But in this case, we can.
In case you only need a loose bound, for the first formula you can actually throw away the -z term, because (z+1/2) ln z > (z+1/2) ln z - z.

Related

Prove that $n! = \Omega(n^{100})$

I just started studying sorting algorithms, so I need help solving problems on big Omega ($\Omega$).
How can I prove that $n! = \Omega(n^{100})$?
I know that we write $f(x) = \Omega(g(x))$ if $g(x) = O(f(x))$. This means that there is a constant $c>0$ and a value $x_0$ such that $|f(x)| \ge c\,g(x)$ whenever $x>x_0$.
Hence from the definition above, I can write
$$n^{100} = O(n!)$$
We can find a constant $c$ and a value $n_0$ such that $n^{100} \le c\,n!$ for all $n>n_0$.
We could take $c=1$ and $n_0=1$.
I don't know if I am correct. How should I continue and complete the proof?
The meaning of n! being Ω(n^100) is that there is some c and some n₀ such that n! ≥ c·n^100 for all n ≥ n₀. Your choice of c = n₀ = 1 says that 3! is bigger than 3^100, which it clearly isn't.
Think about how fast n! grows: (n + 1)! is n + 1 times bigger than n!.
Think about how fast n^100 grows: (n + 1)^100 is ((n + 1)/n)^100 times bigger than n^100. For large n, that ratio gets closer and closer to 1.
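To get a feel for where such a threshold lies, here is a quick sketch (an illustration only, not part of the proof) that takes c = 1 and searches for the first n at which n! overtakes n^100, comparing logarithms to avoid enormous numbers:
import math
n = 2
while math.lgamma(n + 1) < 100 * math.log(n):   # lgamma(n + 1) equals ln(n!)
    n += 1
print(n)   # the first n with n! >= n^100; any n0 at least this large works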

How to simplify O(2^(logN)) to O(N)

In Cracking the Coding Interview there's an example where the runtime for a recursive algorithm that counts the nodes in a binary search tree is O(2^(logN)). The book explains how we simplify to get O(N) like so...
2^P = Q
log Q = P
Let P = 2^(logN).
but I am lost at the step when they say Let P = 2^(logN). I don't understand how we know to set those two equal to one another, and I also don't understand this next step... (Although they tell me they do it by the definition of log base 2)
logP = logN
P = N
2^(logN) = N
Therefore the runtime of the code is O(N)
Assuming logN is log2N
This line:
Let P = 2^(logN).
just assumes that P equals 2^(logN). You do not know N yet; you are only defining how P and N relate to each other.
Later, you can apply the log function to both sides of the equation. Since log(2^(logN)) is logN, the next step is:
logP = logN
And, obviously, when logP = logN, then:
P = N
And previously you assumed that P = 2^(logN), then:
2^(logN) = N
Moreover, all of this could be simplified to 2^logN = N by definition of the log function.
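A quick numeric check of that identity (assuming the logarithm is base 2; results match up to floating-point rounding):
import math
for N in (1, 2, 10, 1000, 10**6):
    print(N, 2 ** math.log2(N))   # 2**(log2 N) recovers N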
The short answer is that the original question probably implicitly assumed that the logarithm was supposed to be in base 2, so that 2^(log_2(N)) is just N, by definition of log_2(x) as the inverse function of 2^y.
However, it's interesting to examine this a bit more carefully if the logarithm is to a different base. Standard results allow us to write the logarithm to base b as follows:
log_b(x) = ln(x) / ln(b)
where ln(x) is the natural logarithm (using base e). Similarly, one can rewrite 2^x as follows:
2^x = e^(x ln(2))
We can then rewrite the original order-expression as follows:
2^(log_b(N)) = e^(ln(2) ln(N) / ln(b))
which can be reduced to:
N^(ln(2)/ln(b)) = N^(log_b(2))
So, if the base b of our logarithm is 2, then this is clearly just N. However, if the base is different, then we get N raised to a power. For example, if b=10 we get N raised to the power log_10(2) ≈ 0.301, which is definitely a more slowly increasing function than O(N).
We can check this directly with the following Python script:
import numpy
import matplotlib.pyplot as plt
N = numpy.arange(1, 100)
plt.figure()
plt.plot(N, 2**(numpy.log2(N)))
plt.xlabel('N')
plt.ylabel(r'$2^{\log_2 N}$')
plt.figure()
plt.plot(N, 2**(numpy.log10(N)))
plt.xlabel('N')
plt.ylabel(r'$2^{\log_{10} N}$')
plt.show()
The graph this produces when we assume that the logarithm is to base two is very different from the graph when the logarithm is taken to base ten.
The definition of a logarithm is “to what power does the base need to be raised to get this value,” so if the base of the logarithm is 2, then raising 2 to that power brings us back to the original value.
Example: N is 256. If we take the base 2 log of it we get 8. If we raise 2 to the power of 8 we get 256. So it is linear, and we can make it just N.
If the log were in a different base, for example 10, the conversion would just require dividing the exponent by a constant, making the more accurate form N = 2^(log N / log 2), which can be changed into N / 2^(1 / log 2) = 2^log N. Here the divider for N on the left is a constant, so we can forget it when discussing complexity and again come to N = 2^log N.
You can also test it by hand. Log2 of 256 is 8. Log2 of 128 is 7. 8/7 is about 1.14. Log10 of 256 is about 2.4. Log10 of 128 is about 2.1. 2.4/2.1 is about 1.14. So the base doesn't matter: the value you get out isn't the same, but it is linear. So mathematically N doesn't equal 2^(log10 N), but in complexity terms it does.

Which f(x) minimizes the order of g(f(x)) as x goes to infinity

Assume f(x) goes to infinity as x tends to infinity and a,b>0. Find the f(x) that yields the lowest order for
as x tends to infinity.
By order I mean Big O and Little o notation.
I can only solve it roughly:
My solution: We can say ln(1+f(x)) is approximately equal to ln(f(x)) as x goes to infinity. Then, I have to minimize the order of
Since for any c > 0, y + c/y is minimized when y = sqrt(c), b + ln f(x) = sqrt(ax) is the answer. Equivalently, f(x) = e^(sqrt(ax) - b), and the lowest order for g(x) is 2 sqrt(ax).
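For the record, the y + c/y step is elementary single-variable calculus:
$$\frac{d}{dy}\left(y + \frac{c}{y}\right) = 1 - \frac{c}{y^2} = 0 \;\Rightarrow\; y = \sqrt{c}, \qquad \sqrt{c} + \frac{c}{\sqrt{c}} = 2\sqrt{c}.$$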
Can you help me obtain a rigorous answer?
The rigorous way to minimize (I should say extremize) a function of another function is to use the Euler-Lagrange relation:
Thus:
Taylor expansion:
If we only consider up to "constant" terms:
Which is of course the result you obtained.
Next, linear terms:
We can't solve this equation analytically, but we can explore the effect of a perturbation in the function f(x) (i.e. a small change in parameter to the previous solution). We can obviously ignore any linear changes to f, but we can add a positive multiplicative factor A:
sqrt(ax) and Af are obviously both positive, so the RHS has a negative sign. This means that ln(A) < 0, and thus A < 1, i.e. the new perturbed function gives a (slightly) tighter bound. Since the RHS must be vanishingly small (1/f), A must not be very much smaller than 1.
Going further, we can add another perturbation B to the exponent of f:
Since ln(A) and the RHS are both vanishingly small, the B-term on the LHS must be even smaller for the sign to be consistent.
So we can conclude that (1) A is very close to 1, (2) B is much smaller than 1, i.e. the result you obtained is in fact a very good upper bound.
The above also leads to the possibility of even tighter bounds for higher powers of f.

How do you calculate big O on a function with a hard limit?

As part of a programming assignment I saw recently, students were asked to find the big O value of their function for solving a puzzle. I was bored, and decided to write the program myself. However, my solution uses a pattern I saw in the problem to skip large portions of the calculations.
Big O shows how the time increases as n scales, but once n reaches the point where the pattern resets, the time it takes drops back to low values as well. My thought was that it was O(n log n % k), where k + 1 is the point at which it resets. Another thought is that since it has a hard limit, the value is O(1), since that is the big O of any constant. Is one of those right, and if not, how should the limit be represented?
As an example of the reset, the k value is 31336.
At n=31336, it takes 31336 steps but at n=31337, it takes 1.
The code is:
def Entry(a1, q):
    F = [a1]
    lastnum = a1
    q1 = q % 31336
    rows = (q / 31336)
    for i in range(1, q1):
        lastnum = (lastnum * 31334) % 31337
        F.append(lastnum)
    F = MergeSort(F)
    print lastnum * rows + F.index(lastnum) + 1
MergeSort is a standard merge sort with O(nlogn) complexity.
It's O(1), and you can derive this from big O's definition. If f(x) is the complexity of your solution, then:
|f(x)| <= M * g(x)
with
g(x) = 1
and with any M > 470040 (it's n*log(n) for n = 31336) and x > 0. And this implies from the definition that:
f(x) = O(g(x)) = O(1)
Well, an easy way that I use to think about big-O problems is to think of n as so big it may as well be infinity. If you don't get particular about byte-level operations on very big numbers (because computing q % 31336 gets more expensive as q goes to infinity and is not actually constant-time), then your intuition is right about it being O(1).
Imagining q as close to infinity, you can see that q % 31336 is obviously between 0 and 31335, as you noted. This fact limits the number of array elements, which limits the sort time to be some constant amount (n * log(n) ==> 31335 * log(31335) * C, for some constant C). So it is constant time for the whole algorithm.
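A tiny sketch of that point, reusing q1 = q % 31336 from the question's code (the list being sorted never grows past 31335 entries, no matter how large q is):
for q in (10, 31335, 31336, 31337, 10**9):
    q1 = q % 31336
    print(q, q1)   # len(F) in Entry() is max(1, q1), so it is at most 31335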
But, in the real world, multiplication, division, and modulus all scale based on input size. You can look up the Karatsuba algorithm if you are interested in figuring that out. I'll leave it as an exercise.
If there are a few different instances of this problem, each with its own k value, then the complexity of the method is not O(1), but instead O(k·ln k).

Algorithm bound analysis and using integrals

I am supposed to get a lower and an upper bound of an algorithm using integrals, but I don't understand how to do that. I know basic integration principles, but I don't know how to figure out the integral from the algorithm.
Problem:
My first for loop starts at i = 5n and goes to 6n^3,
the next one inside starts at j = 5 and goes to i,
and the final inner for loop starts at k = j and goes to i.
Naturally, my first step was to turn this into 3 summations. So I have my 3 summations set up, and what I want to do is simplify them to just one summation if possible. That way, with everything to the right of a single summation, I can take the integral.
In terms of the bounds I'm using for my integral: from Introduction to Algorithms by Cormen, Leiserson, et al., you can do approximation by integrals.
Nature of the integrals:
For the upper bound, the bounds of the integral may be: upper bound n+1, lower bound m.
For the lower bound, the bounds of the integral may be: upper bound n, lower bound m-1.
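In symbols, for a monotonically increasing f, those bounds read:
$$\int_{m-1}^{n} f(x)\,dx \;\le\; \sum_{k=m}^{n} f(k) \;\le\; \int_{m}^{n+1} f(x)\,dx$$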
I want to know how to simplify my three summations into one if possible. If things are down to one summation, I can start to take the integral and go from there myself.
This is very rough pseudo code, but I tried my best to make it look similar to actual code.
for(i = 5n; i < 6n^3; i++)
{
    for(j = 5; j < i; j++)
    {
        for(k = j; k < i; k++)
        {
            i - j + k;
        }
    }
}
Let any of int(i,j,f) or int(x=i,j,f(x)) or ∫(x=i,j,f(x)) denote the definite integral of f(x) as x ranges from i to j. If f(x) is the amount of work done (by a program) when x has a particular value, and if f is a monotonically increasing function, then as you point out in the question, int(m,n+1,f) is an upper bound, and int(m-1,n,f) a lower bound, on the work done as x takes the values m...n. In the following, I will say that int(m,n,f) approximates the work, and you can adjust the limits by ±1 where appropriate to get upper and lower bounds. Note, 6n^3-1 stands for 6*(n^3)-1, 5n for 5*n, etc.
Approximate work is:
int(i=5n, 6n^3-1, u(i))
where u(i) is
int(j=5, i-1, v(i,j))
where v(i,j) is
int(k=j, i-1, w(k))
where w(k) is 1. In the following, we use functions p, q, r to stand for indefinite integrals, and C for constants of integration that cancel out in definite integrals.
Let r(x) = ∫1dx = x + C.
Now v(i,j) = ∫(k=j, i-1, 1) = r(i-1)-r(j) = i-1-j.
Let q(x,i) = ∫(i-1-x)dx = x*(i-1)-x*x/2 + C.
Now u(i) = ∫(j=5, i-1, i-1-j) = q(i-1,i)-q(5,i)
which is some quadratic in i. You will need to work out the details for the upper and lower bound cases.
Let p(x) = ∫u(x)dx = ∫(q(x-1,x)-q(5,x))dx,
which is some cubic in x. The overall result is
p(6n^3-1)-p(5n)
and again you will need to work out the details. But note that when 6n^3-1 is substituted for x in p(x), the order is going to be (6n^3-1)^3, that is, O(n^9), so you should expect upper and lower bound expressions that are O(n^9). Note, you can also see the O(n^9) order by inspecting your loops: In for(i=5n; i<6n^3; i++), i will average about 3n^3. In for(j =5; j<i; j++), j will average about i/2, or some small multiple of n^3. In for(k=j; k < i; k++), k-j also will average a small multiple of n^3. Thus, expression i-j+k will be evaluated some small multiple of n^3*n^3*n^3, or n^9, times.
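If it helps to sanity-check the O(n^9) claim, here is a small Python sketch (an illustration, not part of the derivation) that counts how many times the innermost statement runs and compares that count with n^9. The inner k loop contributes i - j iterations, so that amount is added directly instead of being looped over:
for n in (2, 3, 4, 5):
    count = 0
    for i in range(5 * n, 6 * n**3):
        for j in range(5, i):
            count += i - j              # iterations of the innermost loop
    print(n, count, count / n**9)       # the ratio tends to a constant (36), i.e. Theta(n^9)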
