Calculating a lower bound with Stirling's approximation

We have this exercise in school, where we are to calculate the lower bound of an algorithm.
We know that the lower bound is log_6((3*n)! / n!^3) and we are to use Stirling's approximation to approximate n!. When applying Stirling's approximation we get:
log_6((sqrt(2*pi*3*n)*((3*n)/e)^(3*n) * e^alpha)/(sqrt(2*pi*n)*(n/e)^n * e^alpha)^3)
Now our problem is that every time we try expanding this formula with simple logarithm properties, such as log(a/b) = log(a) - log(b), log(a*b) = log(a) + log(b), log(a^b) = b*log(a), and for square roots log(sqrt(a)) = log(a^(1/2)) = 1/2 * log(a), we get a result whose dominating term is something like n*log(n) * constant. We know from our teacher that we have to find a linear lower bound, so this is wrong.
We have spent two days on this and are about to give up. Can anybody help us?
Thanks in advance!
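For reference, here is one way the expansion can be grouped (a sketch of the working, not an official solution; the e^alpha correction factors are bounded and only contribute a constant, so they are dropped). Collecting the powers of n before applying the log rules:

(3*n)! / (n!)^3 ≈ (sqrt(2*pi*3*n) * ((3*n)/e)^(3*n)) / (sqrt(2*pi*n) * (n/e)^n)^3
               = (sqrt(6*pi*n) / (2*pi*n)^(3/2)) * ((3*n)^(3*n) / n^(3*n))
               = (sqrt(6*pi*n) / (2*pi*n)^(3/2)) * 3^(3*n)

so that

log_6((3*n)! / (n!)^3) ≈ 3*n*log_6(3) + 1/2*log_6(6*pi*n) - 3/2*log_6(2*pi*n)

The dominant term 3*n*log_6(3) ≈ 1.84*n is linear; the n*log(n) contributions from the numerator and the denominator cancel once the (3*n)^(3*n) and n^(3*n) powers are combined.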

Related

base case and time complexity in recursive algorithms

I would like some clarification regarding O(N) functions. I am using SICP.
Consider the factorial function in the book that generates a recursive process in pseudocode:
function factorial1(n) {
    if (n == 1) {
        return 1;
    }
    return n * factorial1(n-1);
}
I have no idea how to measure the number of steps. That is, I don't know how "step" is defined, so I used the statement from the book to define a step:
Thus, we can compute n ! by computing (n-1)! and multiplying the
result by n.
I thought that is what they mean by a step. For a concrete example, if we trace (factorial 5),
factorial(1) = 1 = 1 step (base case - constant time)
factorial(2) = 2*factorial(1) = 2 steps
factorial(3) = 3*factorial(2) = 3 steps
factorial(4) = 4*factorial(3) = 4 steps
factorial(5) = 5*factorial(4) = 5 steps
I think this is indeed linear (number of steps is proportional to n).
On the other hand, here is another factorial function I keep seeing which has a slightly different base case.
function factorial2(n) {
    if (n == 0) {
        return 1;
    }
    return n * factorial2(n-1);
}
This is exactly the same as the first one, except another computation (step) is added:
factorial(0) = 1 = 1 step (base case - constant time)
factorial(1) = 1*factorial(0) = 2 steps
...
Now I believe this is still O(N), but am I correct if I say factorial2 is more like O(n+1) (where 1 is the base case) as opposed to factorial1 which is exactly O(N) (including the base case)?
One thing to note is that factorial1 is incorrect for n = 0: the recursion never reaches the base case (n just keeps decreasing past 1), which in typical implementations ends in a stack overflow. factorial2 is correct for n = 0.
Setting that aside, your intuition is correct: factorial1 is O(n) and factorial2 is O(n + 1). However, since the effect of n dominates the constant + 1 term, it's typical to simplify by just saying it's O(n). The Wikipedia article on Big O notation describes this:
...the function g(x) appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
From another perspective though, it's more accurate to say that these functions execute in pseudo-polynomial time. This means that it is polynomial with respect to the numeric value of n, but exponential with respect to the number of bits required to represent the value of n. There is an excellent prior answer that describes the distinction.
What is pseudopolynomial time? How does it differ from polynomial time?
Your pseudocode is still pretty vague as to the exact details of its execution. A more explicit one could be
function factorial1(n) {
    r1 = (n == 1);            // one step
    if r1: { return 1; }      // second step ... will stop only if n == 1
    r2 = factorial1(n-1);     // third step ... in addition to however many steps
                              // it takes to compute factorial1(n-1)
    r3 = n * r2;              // fourth step
    return r3;
}
Thus we see that computing factorial1(n) takes four more steps than computing factorial1(n-1), and computing factorial1(1) takes two steps:
T(1) = 2
T(n) = 4 + T(n-1)
This translates roughly to 4n operations overall, which is in O(n). One step more or less, or any constant number of steps (i.e. independent of n), does not change anything.
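If it helps to see the recurrence play out, here is a minimal sketch in Python (hypothetical code, not from the original post) that counts those four steps per call and checks the closed form 4n - 2:

def factorial1_steps(n):
    steps = 2                       # step 1: r1 = (n == 1); step 2: the conditional return
    if n == 1:
        return 1, steps             # T(1) = 2
    sub_value, sub_steps = factorial1_steps(n - 1)
    steps += 1 + sub_steps          # step 3: the recursive call, plus its own cost
    steps += 1                      # step 4: r3 = n * r2
    return n * sub_value, steps     # T(n) = 4 + T(n-1)

for n in (1, 2, 5, 10):
    value, steps = factorial1_steps(n)
    print(n, value, steps, 4 * n - 2)   # the last two columns always agree, so T(n) is O(n)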
I would argue that no, you would not be correct in saying that.
If something is O(N) then it is by definition O(N+1), as well as O(2N+3), O(6N - e), or O(0.67777N - e^67). We use the simplest form, O(N), out of notational convenience; however, we have to be aware that it would be true to say that the first function is also O(N+1), and likewise the second is as much O(N) as it is O(N+1).
I'll prove it. If you spend some time with the definition of big-O, it isn't too hard to prove that:
g(n) = O(f(n)), f(n) = O(k(n)) --implies--> g(n) = O(k(n))
(Don't believe me? Just google the transitive property of big-O notation.) It is then easy to see that the implication below follows from the above:
n = O(n+1), factorial1 = O(n) --implies--> factorial1 = O(n+1)
So there is absolutely no difference between saying a function is O(N) or O(N+1). You just said the same thing twice. It is an isometry, a congruency, an equivalence. Pick your fancy word for it. They are different names for the same thing.
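Concretely, here is a sketch using the definition directly: for every n >= 1 we have n <= n + 1 <= 2n, so n = O(n+1) (take the constant c = 1) and n + 1 = O(n) (take c = 2). Chaining either inequality with factorial1 = O(n) through the transitivity above gives factorial1 = O(n+1), and back again.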
If you look at the Θ function you can think of them as a bunch of mathematical sets full of functions where all function in that set have the same growth rate. Some common sets are:
Θ(1) # Constant
Θ(log(n)) # Logarithmic
Θ(n) # Linear
Θ(n^2) # Quadratic
Θ(n^3) # Cubic
Θ(2^n) # Exponential (Base 2)
Θ(n!) # Factorial
A function will fall into one and exactly one Θ set. If a function fell into two sets, then by definition all functions in both sets could be proven to fall into both sets, and you would really just have one set. At the end of the day, Θ gives us a perfect segmentation of all possible functions into a countably infinite collection of unique sets.
A function being in a big-O set means that it exists in some Θ set which has a growth rate no larger than the big-O function.
And that's why I would say you were wrong, or at least misguided, to say it is "more O(N+1)". O(N) is really just a way of notating "the set of all functions that have growth rate equal to or less than linear growth". And so to say that:
a function is more O(N+1) and less O(N)
would be equivalent to saying
a function is more "a member of the set of all functions that have linear
growth rate or less growth rate" and less "a member of the set of all
functions that have linear or less growth rate"
Which is pretty absurd, and not a correct thing to say.

differential equations vs algorithm complexity

I don't know if this is the right place to ask, because my question is about how to calculate the complexity of a computer science algorithm using the differential-equation growth and decay method.
The algorithm whose complexity I would like to prove is binary search on a sorted array, which has a complexity of O(log2(n)).
The algorithm says: if the target value we are searching for is equal to the mid element, return its index. If it's less, search the left sub-array; if it's greater, search the right sub-array.
As you can see, each time N(t) [the number of elements at time t] is divided in half. Therefore, we can say that it takes O(log2(n)) to find an element.
Now using differential equation growth and decay method.
dN(t)/dt = N(t)/2
dN(t): How fast the number of elements is increasing or decreasing
dt: With respect to time
N(t): Number of elements at time t
The above equation says that the number of elements is being halved over time.
Solving the above equations gives us:
dN(t)/N(t) = dt/2
ln(N(t)) = t/2 + c
t = ln(N(t))*2 + d
Even though we got t = 2*ln(N(t)) + d and not log2(N(t)), we can still say that it's logarithmic.
Unfortunately, even though the above method seems to make sense when applied to the binary search complexity, it turns out that it does not work for all algorithms. Here's a counterexample:
Searching an array linearly: O(n)
dN(t)/dt = N(t)
dN(t)/N(t) = dt
t = ln(N(t)) + d
So according to this method, searching linearly takes O(ln(n)), which of course is NOT true.
This differential equation method is called growth and decay and it's very popular. So I would like to know if this method can be applied to computer science algorithms like the ones I picked, and if so, what did I do wrong to get an incorrect result for the linear search? Thank you.
The time an algorithm takes to execute is proportional to the number of steps covered (reduced here).
In your linear search of the array, you have assumed that dN(t)/dt = N(t).
Incorrect Assumption :-
dN(t)/dt = N(t)
dN(t)/N(t) = dt
t = ln(N(t)) + d
Going by your previous assumption, binary search reduces the terms by a factor of 1/2: half of the remaining terms are discarded in each pass of the array traversal, so the number of search terms is cut in half each time. So your equation dN(t)/dt = N(t)/2 was fine. But when you search an array linearly, you access one element per pass, so the number of remaining search terms decreases by one item in each pass. So how can your assumption be true?
Correct Assumption :-
dN(t)/dt = 1
dN(t)/1 = dt
t = N(t) + d
I hope you got my point. The array elements are accessed sequentially, one per pass (iteration), so the number of remaining elements does not change at a rate of order N(t), but at the rate of a constant 1. Hence the order-N(t) result!
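To make the difference concrete, here is a small sketch (hypothetical code, not from either post) that counts the passes directly: binary search halves the remaining elements on each pass, while linear search removes exactly one element per pass:

import math

def binary_passes(n):
    passes = 0
    while n > 1:
        n //= 2        # half of the remaining elements are discarded each pass
        passes += 1
    return passes

def linear_passes(n):
    passes = 0
    while n > 0:
        n -= 1         # exactly one element is examined each pass
        passes += 1
    return passes

for n in (16, 1024, 10**6):
    print(n, binary_passes(n), round(math.log(n, 2)), linear_passes(n))
    # binary_passes tracks log2(n); linear_passes is simply n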

How do you calculate big O on a function with a hard limit?

As part of a programming assignment I saw recently, students were asked to find the big O value of their function for solving a puzzle. I was bored, and decided to write the program myself. However, my solution uses a pattern I saw in the problem to skip large portions of the calculations.
Big O shows how the time increases as n scales, but as n grows, once it reaches the point where the pattern resets, the time it takes drops back to low values as well. My thought was that it was O(nlogn % k), where k+1 is the point at which it resets. Another thought is that since it has a hard limit, the value is O(1), since that is the big O of any constant. Is one of those right, and if not, how should the limit be represented?
As an example of the reset, the k value is 31336.
At n=31336, it takes 31336 steps but at n=31337, it takes 1.
The code is:
def Entry(a1, q):
    F = [a1]
    lastnum = a1
    q1 = q % 31336
    rows = (q / 31336)
    for i in range(1, q1):
        lastnum = (lastnum * 31334) % 31337
        F.append(lastnum)
    F = MergeSort(F)
    print lastnum * rows + F.index(lastnum) + 1
MergeSort is a standard merge sort with O(nlogn) complexity.
It's O(1) and you can derive this from big O's definition. If f(x) is the complexity of your solution, then

|f(x)| <= M * 1

with any M > 470040 (it's nlogn for n = 31336) and x > 0. And this implies from the definition that

f(x) = O(1)
Well, an easy way that I use to think about big-O problems is to think of n as so big it may as well be infinity. If you don't get particular about byte-level operations on very big numbers (because the cost of computing q % 31336 would scale up as q goes to infinity and is not actually constant), then your intuition is right about it being O(1).
Imagining q as close to infinity, you can see that q % 31336 is obviously between 0 and 31335, as you noted. This fact limits the number of array elements, which limits the sort time to be some constant amount (n * log(n) ==> 31335 * log(31335) * C, for some constant C). So it is constant time for the whole algorithm.
But, in the real world, multiplication, division, and modulus all do scale based on input size. You can look up Karatsuba algorithm if you are interested in figuring that out. I'll leave it as an exercise.
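As a quick sanity check on the "n may as well be infinity" view, here is a tiny sketch (entry_iterations is a hypothetical helper mirroring only the loop bound in Entry, not the whole program) showing that the iteration count never exceeds the modulus, no matter how large q gets:

K = 31336

def entry_iterations(q):
    # mirrors the loop bound range(1, q % 31336) in Entry
    return max(q % K - 1, 0)

for q in (10, 31335, 31337, 10**9, 10**18):
    print(q, entry_iterations(q))   # always below K, so the input to the sort stays bounded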
If there are a few different instances of this problem, each with its own k value, then the complexity of the method is not O(1), but instead O(k·ln k).

Why does this expression related to algorithm cost have this result?

Hi and sorry for my bad English.
I'm studying computer science and I didn't understand why this expression (in the image) has this result.
Tmedio is the average cost of a linear search algorithm. According to my understanding and to the definition of the summation, if for example n = 4, the result should be something like (1/4)*(1+2+3+4)... What am I doing wrong?
The sum of the first n numbers is n*(n+1)/2. Hence you get (1/n) * n*(n+1)/2 = (n+1)/2.
See the wiki page related to this identity here: http://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF
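If you want to check the identity numerically, here is a tiny sketch (hypothetical code, not from the course material) that computes the average cost directly and compares it with (n+1)/2:

def average_search_cost(n):
    # cost i to find the element at position i, averaged over all n positions
    return sum(range(1, n + 1)) / float(n)

for n in (4, 10, 100):
    print(n, average_search_cost(n), (n + 1) / 2.0)   # the two values agree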

understanding the mathematical algorithm behind the TPRODUCT problem

I've been trying to solve the codechef problem: http://www.codechef.com/MAY11/problems/TPRODUCT/
They have given the post-contest analysis here: http://www.codechef.com/wiki/may-2011-contest-problem-editorials
I need some help in understanding the logic discussed there:
They are talking about using logarithm in place of the function
Pi=max(Vi*PL, Vi*PR)
Math is not my strong area. [I've been trying to improve by participating in contests like this.] If someone can give a very dumbed-down explanation of this problem, it would be helpful for mortals like me. Thanks.
One large problem with multiplication is that numbers get very large very fast, and there are issues with reaching the upper bounds of an int or long, and spilling over to the negatives. The logarithm allows us to keep the computations small, and then get the answer back modulo n.
In retracing the result found via dynamic programming, the naive solution is to multiply all the values together and then mod:
(x0 * x1 * x2 * ... * xk) (mod n)
this is replaced with a series of smaller computations, which avoid bound overflow:
z1 = e^(log(x0) + log(x1)) modulo n
z2 = e^(log(x2) + log(z1)) modulo n
...
zk = e^(log(xk) + log(z{k-1})) modulo n
and then zk contains the result.
Presumably, they are relying on the simple mathematical observation that if:
z = y * x
then:
log(z) = log(y) + log(x)
Thus turning multiplications into additions.
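Here is a small sketch of that observation in isolation (hypothetical code, not the editorial's solution): two candidate products are compared through the sums of their logs, so the intermediate values never grow large:

import math

def bigger_product(a, b):
    # compare product(a) with product(b) without ever forming the huge products
    log_a = sum(math.log(x) for x in a)
    log_b = sum(math.log(x) for x in b)
    return "first" if log_a > log_b else "second"

print(bigger_product([10**9, 10**9, 7], [10**6, 10**6, 10**6]))   # prints "first"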
