Running time complexity - algorithm

For the functions f(n) = n!, n^2, and n:
if a problem can be solved in 1 second, given that the algorithm to
solve the problem takes f(n) microseconds, what is the largest n for each f?
I know for a fact that the answer for n! is n = 9, but I don't know how this was calculated. Can someone explain how these values are derived?

From what I understand, you are being asked "when should I use which": should you use an algorithm that takes a constant time of 1 second, or one that takes f(n) microseconds?
Note that 1 second = 10^6 microseconds, so you are solving:
f(n) <= 1,000,000
where n is natural.
By evaluating f(10), you can see that f(10) = 3,628,800 > 10^6,
but f(9) = 362,880 < 10^6.
So, for f(n) = n!, the largest n for which you would want to use the f(n) algorithm is n = 9.
Do similarly for other candidates and you will get your answer for them as well.
(Hint: solve the equation f(n) = 10^6, and look at what happens in the neighborhood of the n you find.)
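For concreteness, here is a small brute-force sketch (my own illustration, not part of the original answer) that finds the cutoff for each of the three candidates:

import math

# Brute force: find the largest natural n with f(n) <= budget,
# assuming f is monotonically increasing.
def largest_n(f, budget=10**6):
    n = 1
    while f(n + 1) <= budget:
        n += 1
    return n

for name, f in [("n!", math.factorial),
                ("n^2", lambda n: n * n),
                ("n", lambda n: n)]:
    print(name, "->", largest_n(f))
# n!  -> 9        (9! = 362,880 <= 10^6 < 10! = 3,628,800)
# n^2 -> 1000     (1000^2 = 10^6 exactly)
# n   -> 1000000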

Related

Calculate n for n·log(n) and n! when the time is 1 second (the algorithm takes f(n) microseconds)

Given the following problem from the CLRS algorithms book:
For each function f (n) and time t in the following table, determine
the largest size n of a problem that can be solved in time t, assuming
that the algorithm to solve the problem takes f(n) microseconds.
How can one calculate n for f(n) = n·log(n) when the time is 1 second?
How can one calculate n for f(n) = n! when the time is 1 second?
It is mentioned that the algorithm takes f(n) microseconds. Then, one may consider that algorithm to consist of f(n) steps each of which takes 1 microsecond.
The questions state that the relevant f(n) values are bounded by 1 second (i.e. 10^6 microseconds). Then, since you are looking for the largest n possible that fulfills those conditions, your questions boil down to the inequalities given below:
1) f(n) = n·log(n) <= 10^6
2) f(n) = n! <= 10^6
The rest, I believe, is mainly juggling with algebra and logarithmic equations to find the relevant values.
In the first case, you can refer to Newton's method for approximating roots (see, for example, using Newton's method to calculate a cube root) or to the Lambert W function; either can help to calculate the value of n. As far as I can tell, no other analytical approach helps.
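Alternatively, a numeric search avoids the analytical machinery entirely. A minimal sketch (my own; it assumes log means the base-2 logarithm, as CLRS's lg does) that binary-searches the largest n with n·lg(n) <= 10^6:

import math

def largest_n_nlogn(budget=10**6):
    lo, hi = 1, budget  # n * lg(n) already exceeds the budget at n = budget
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * math.log2(mid) <= budget:
            lo = mid        # mid still fits, search higher
        else:
            hi = mid - 1    # mid is too big, search lower
    return lo

print(largest_n_nlogn())  # 62746, since 62746 * lg(62746) is about 999,997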
In the second case, a short Python script can find n by direct search:
def calFact(n):
    if n == 0 or n == 1:
        return 1  # 0! = 1! = 1 (returning n here would wrongly give 0! = 0)
    return n * calFact(n - 1)

nVal = 1
while calFact(nVal) <= 1000000:  # 1 second = 10^6 microseconds
    nVal = nVal + 1
print(nVal - 1)  # the largest n with n! <= 10^6, i.e. 9
So in this case we are trying to find the largest n such that n! is less than or equal to 10^6.

Which Big-O grows faster asymptotically

I have gotten into an argument/debate recently and I am trying to get a clear verdict of the correct solution.
It is well known that n! grows very quickly, but exactly how quickly, enough to "hide" all additional constants that might be added to it?
Let's assume I have this silly & simple program (no particular language):
for i from 0 to n! do:
    ; // nothing
Given that the input is n, the complexity of this is obviously O(n!) (or even ϴ(n!), but that isn't relevant here).
Now let's assume this program:
for i from 0 to n do:
    for j from 0 to n do:
        for k from 0 to n! do:
            ; // nothing
Bob claims: "This program's complexity is obviously O(n)·O(n)·O(n!) = O(n!·n^2) = O((n+2)!)."
Alice responds: "I agree with you, Bob, but actually it would be sufficient to say that the complexity is O(n!), since O(n!·n^k) = O(n!) for any constant k >= 1."
Is Alice right in her note of Bob's analysis?
Alice is wrong, and Bob is right.
Recall an equivalent definition of big-O notation using limits:
f(n) is in O(g(n)) iff
lim_{n->infinity} f(n)/g(n) < infinity
For any k > 0:
lim_{n->infinity} (n!·n^k) / n! = lim_{n->infinity} n^k = infinity
and thus n!·n^k is NOT in O(n!).
Amit's solution is perfect; I would only add a more "human" explanation, because understanding the definition can be difficult for beginners.
The definition basically says: if, as you increase n, the functions f(n) and g(n) differ "only" k-times, where k is a constant that does not change (for example, g(n) is always ~100 times higher, no matter whether n = 10,000 or n = 1,000,000), then the functions have the same complexity.
If g(n) is 100 times higher for n = 10,000 but only 80 times higher for n = 1,000,000, then f(n) has the higher complexity! As n grows and grows, f(n) will eventually catch up with g(n) and then grow more and more relative to it. In complexity theory, you are interested in how things end up "at infinity" (or, more concretely, at extremely high values of n).
If you compare n! and n!·n^2, you can see that for n = 10 the second function has a 10^2 = 100 times higher value. For n = 1,000 it is 1,000^2 = 1,000,000 times higher. And as you can imagine, the gap keeps growing.
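A quick numeric check (my own addition, not from the answers above) makes the same point: the ratio (n!·n^2) / n! is exactly n^2, so no single constant can ever bound it:

import math

# The ratio of n! * n^2 to n! is n^2, which is unbounded as n grows.
for n in [10, 100, 1000]:
    ratio = (math.factorial(n) * n**2) // math.factorial(n)
    print(n, ratio)  # prints 100, then 10000, then 1000000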

Intro to Algorithms (chapter 1-1)

Just reading this book for fun, this isn't homework.
However, I am already confused by the first main exercise:
1-1 Comparison of running times
For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds.
What does this even mean?
The next table shows a bunch of times along one axis (1 second, 1 minute, one hour, etc), and the other axis shows different f(n) such as lg n, sqrt(n), n, etc.
I am not sure how to fill in the matrix because I can't understand the question. So if f(n) = lg n, it's asking the largest n that can be solved in, for example, 1 second, but the problem takes f(n) = lg n microseconds to solve? What does that last part even mean? I don't even know how to set up the equations / ratios to solve this problem because I literally can't even put together the meaning of the question.
My hangup is over the sentence "assuming that the algorithm to solve the problem takes f(n) microseconds" because I don't know what this refers to. The time for what algorithm to solve what problem takes f(n) microseconds? So if I call f(100) it'll take lg 100 microseconds? So I need to find some n where f(n) = lg n microseconds = 1 second?
Does this mean lg n microseconds = 1 second when lg n microseconds = 10^6 microseconds, so n = 2^(10^6)?
For each time T and each function f(n), you are required to find the maximal integer n such that f(n) <= T.
For example, take f(n) = n^2 and T = 1 second = 10^6 microseconds:
n^2 <= 10^6
n <= sqrt(10^6)
n <= 1000
Had the bound not been an integer (say n <= 31.63), you would round down to the nearest integer, n <= 31.
Given any function f(n), and some time T, you are required to similarly find the maximal value of n, and fill in the table.
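A small sketch of how one might fill in such a table programmatically (the function set and time grid here are my own illustrative choices, not the full CLRS table):

import math

# Largest integer n with f(n) <= budget, for a monotonically increasing f:
# grow an upper bound by doubling, then binary-search between hi/2 and hi.
def max_input_size(f, budget):
    hi = 1
    while f(hi) <= budget:
        hi *= 2
    lo = hi // 2          # invariant: f(lo) <= budget < f(hi)
    while lo < hi - 1:
        mid = (lo + hi) // 2
        if f(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

budgets = {"1 second": 10**6, "1 minute": 6 * 10**7}  # in microseconds
functions = {"sqrt(n)": math.sqrt, "n": lambda n: n, "n^2": lambda n: n * n}

for fname, f in functions.items():
    print(fname, {t: max_input_size(f, T) for t, T in budgets.items()})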
I will do the first two as an example to help you do the rest. A second is 10^6 microseconds, so by solving an equation relating f(n) to the time we are allotting for f(n) to run, we can solve for the largest input n that f can run on within the time limit (log below denotes the natural logarithm, hence the e's).
1 second:
log(n^2) = 1,000,000 ⟹ n^2 = e^(1,000,000) ⟹ n = e^(500,000)
1 minute:
log(n^2) = 60,000,000 ⟹ n^2 = e^(60,000,000) ⟹ n = e^(30,000,000)
The rest can be done similarly.
P.S. Make sure to floor the values of n you get from these equations, because n is an integer input size.

Are 2^n and n*2^n in the same time complexity?

Resources I've found on time complexity are unclear about when it is okay to ignore terms in a time complexity equation, specifically with non-polynomial examples.
It's clear to me that given something of the form n^2 + n + 1, the last two terms are insignificant.
Specifically, given the two categorizations 2^n and n·(2^n), is the second in the same order as the first? Does the additional n multiplication there matter? Usually resources just say x^n is exponential and grows much faster... then move on.
I can understand why it wouldn't, since 2^n will greatly outpace n, but because they're not being added together, it would matter greatly when comparing the two functions; in fact, the ratio between them will always be a factor of n, which seems important to say the least.
You will have to go to the formal definition of big O in order to answer this question.
The definition is that f(x) belongs to O(g(x)) if and only if limsup_{x→∞} f(x)/g(x) is finite, i.e. is not infinity. In short, this means that there exists a constant M such that the value of f(x)/g(x) is never greater than M (for sufficiently large x).
In the case of your question, let f(n) = n⋅2ⁿ and let g(n) = 2ⁿ. Then f(n)/g(n) = n, which grows without bound. Therefore f(n) does not belong to O(g(n)).
A quick way to see that n⋅2ⁿ is bigger is to make a change of variable. Let m = 2ⁿ. Then n⋅2ⁿ = ( log₂m )⋅m (taking the base-2 logarithm on both sides of m = 2ⁿ gives n = log₂m ), and you can easily show that m log₂m grows faster than m.
I agree that n⋅2ⁿ is not in O(2ⁿ), but I thought it should be made more explicit, since the limit superior characterization doesn't always apply.
By the formal definition of big-O: f(n) is in O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that for all n ≥ n₀ we have f(n) ≤ c⋅g(n). It is easy to see that no such constants exist for f(n) = n⋅2ⁿ and g(n) = 2ⁿ: for any candidate c, every n > c gives n⋅2ⁿ > c⋅2ⁿ. However, it can be shown that g(n) is in O(f(n)).
In other words, n⋅2ⁿ is lower bounded by 2ⁿ. This is intuitive. Although they are both exponential and thus are equally unlikely to be used in most practical circumstances, we cannot say they are of the same order because 2ⁿ necessarily grows slower than n⋅2ⁿ.
I do not argue with the other answers saying that n⋅2ⁿ grows faster than 2ⁿ. But the growth of n⋅2ⁿ is still only exponential.
When we talk about algorithms, we often say that the time complexity grows exponentially.
So we consider 2ⁿ, 3ⁿ, eⁿ, 2.000001ⁿ, and our n⋅2ⁿ to belong to the same group of complexities with exponential growth.
To give this a bit of mathematical sense, we consider a function f(x) to grow (no faster than) exponentially if there exists a constant c > 1 such that f(x) = O(cˣ).
For n⋅2ⁿ the constant c can be any number greater than 2; let's take 3. Then:
n⋅2ⁿ / 3ⁿ = n ⋅ (2/3)ⁿ, and this is bounded (in fact it is less than 1 for every n ≥ 1).
So 2ⁿ grows slower than n⋅2ⁿ, which in turn grows slower than 2.000001ⁿ. But all three of them grow exponentially.
You asked "is the second in the same order as the first? Does the additional n multiplication there matter?" These are two different questions with two different answers.
n·2^n grows asymptotically faster than 2^n. That's the first question answered.
But you could ask: "If algorithm A takes 2^n nanoseconds and algorithm B takes n·2^n nanoseconds, what is the biggest n for which I can find a solution in a second / minute / hour / day / month / year?" The answers are n = 29/35/41/46/51/54 vs. 25/30/36/40/45/49. Not much difference in practice.
The size of the biggest problem that can be solved in time T is O(ln T) in both cases.
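Those figures are easy to reproduce. A throwaway sketch (my own; it assumes a 30-day month and a 365-day year, which matches the numbers quoted above):

def biggest_n(cost, budget_ns):
    # Largest n whose cost in nanoseconds still fits in the budget.
    n = 1
    while cost(n + 1) <= budget_ns:
        n += 1
    return n

second = 10**9  # nanoseconds
grid = [("second", second), ("minute", 60 * second), ("hour", 3600 * second),
        ("day", 86400 * second), ("month", 30 * 86400 * second),
        ("year", 365 * 86400 * second)]

for name, T in grid:
    a = biggest_n(lambda n: 2**n, T)       # algorithm A: 2^n ns
    b = biggest_n(lambda n: n * 2**n, T)   # algorithm B: n * 2^n ns
    print(name, a, b)
# prints 29/25, 35/30, 41/36, 46/40, 51/45, 54/49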
The very simple answer is 'no'.
Compare 2^n and n·2^n:
as you can see, n·2^n > 2^n for any n > 1.
You can also see it by taking the log of both sides, which gives
n·log(2) < n·log(2) + log(n)
Hence, by both types of analysis, that is, by
substituting a number
taking logs
we see that n·2^n is greater than 2^n.
So if you get an expression like
O(2^n + n·2^n), it can be replaced by O(n·2^n).

Prove 3-Way Quicksort Big-O Bound

For 3-way Quicksort (dual-pivot quicksort), how would I go about finding the Big-O bound? Could anyone show me how to derive it?
There's a subtle difference between finding the complexity of an algorithm and proving it.
To find the complexity of this algorithm, you can do as amit said in the other answer: you know that on average you split your problem of size n into three smaller problems of size n/3, so on average you reach problems of size 1 in log_3(n) steps. With experience, you will start getting a feel for this approach and will be able to deduce the complexity of algorithms just by thinking about them in terms of the subproblems involved.
To prove that this algorithm runs in O(n·log n) in the average case, you can use the Master Theorem. To use it, you have to write the recurrence giving the time spent sorting the array. As we said, sorting an array of size n can be decomposed into sorting three arrays of size n/3, plus the time spent building them. This can be written as follows:
T(n) = 3T(n/3) + f(n)
Where T(n) is a function giving the resolution "time" for an input of size n (actually the number of elementary operations needed), and f(n) gives the "time" needed to split the problem into subproblems.
For 3-Way quicksort, f(n) = c*n because you go through the array, check where to place each item and eventually make a swap. This places us in Case 2 of the Master Theorem, which states that if f(n) = O(n^(log_b(a)) log^k(n)) for some k >= 0 (in our case k = 0) then
T(n) = O(n^(log_b(a)) log^(k+1)(n))
As a = 3 and b = 3 (we get these from the recurrence relation, T(n) = aT(n/b)), this simplifies to
T(n) = O(n log n)
And that's a proof.
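To make the recurrence concrete, here is a compact sketch of a dual-pivot (3-way) quicksort (my own illustration, not code from the question); the three recursive calls are what produce the 3T(n/3) term, and the partitioning pass is the c*n term:

import random

def quicksort3(a):
    if len(a) <= 1:
        return a
    # Pick two random pivots and order them so that p <= q.
    p, q = sorted(random.sample(a, 2))
    lo  = [x for x in a if x < p]        # below both pivots
    eqp = [x for x in a if x == p]       # equal to the low pivot
    mid = [x for x in a if p < x < q]    # strictly between the pivots
    eqq = [x for x in a if x == q] if q != p else []
    hi  = [x for x in a if x > q]        # above both pivots
    # One partitioning pass costs c*n; three subproblems of expected
    # size ~n/3 give T(n) = 3T(n/3) + c*n on average.
    return quicksort3(lo) + eqp + quicksort3(mid) + eqq + quicksort3(hi)

print(quicksort3([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]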
Well, the same proof actually holds.
Each iteration splits the array into 3 sublists; on average, the size of these sublists is n/3 each.
Thus the number of iterations needed is log_3(n), because you need the number of times you can do (((n/3)/3)/3)... until you get to one. This gives you the formula:
n/(3^i) = 1
Which is satisfied for i = log_3(n).
Each iteration still goes over all the input (split across the different sublists), the same as in quicksort, which gives you O(n·log_3(n)).
Since log_3(n) = log(n)/log(3) = log(n) · CONSTANT, you get that the run time is O(n·log n) on average.
Note that even if you take a more pessimistic approach to estimating the sizes of the sublists, using the minimum and maximum of the uniform distribution, you still get a first sublist of size n/4, a second sublist of size n/2, and a last sublist of size n/4, which again decays in log_k(n) iterations (with a different k, here k = 2, since the largest sublist is halved each time), and this again yields O(n·log n) overall.
Formally, the proof will be something like:
Each iteration takes at most c_1·n ops to run, for each n > N_1, for some constants c_1, N_1. (This follows from the definition of big-O notation and the claim that each iteration is O(n) excluding recursion; convince yourself why this is true. Note that here "iteration" means all the work done by the algorithm at a certain "level" of the recursion, not a single recursive invocation.)
As seen above, you have log_3(n) = log(n)/log(3) iterations in the average case (taking the optimistic version here; the same principles apply to the pessimistic one).
Now, we get that the running time T(n) of the algorithm is:
for each n > N_1:
T(n) <= c_1 * n * log(n)/log(3)
T(n) <= (c_1/log(3)) * n * log(n)
By the definition of big-O notation, this means T(n) is in O(n·log n) with M = c_1/log(3) and N = N_1.
QED
