Say I have k for loops nested in the following fashion:
for a = 1 to n:
    for b = 1 to n-a:
        for c = 1 to n-a-b:
            for d = 1 to n-a-b-c:
                O(1)
For an arbitrary k, where all k of these loops "share" the limit of n iterations with each other, is the big-O complexity still O(n^k)? Or is it some order below that?
Edit: "What is the Big-O of a nested loop, where the number of iterations in the inner loop is determined by the current iteration of the outer loop?" is indeed asking something similar, but it doesn't ask (nor does its answer address) whether additional levels of nesting change anything.
Dmitry's answer explains it very well for me.
OK, let's sum them all up: using induction you can find that the numbers of iterations (for large n > k) are:
1. n
2. n * (n - 1) / 2
3. n * (n - 1) * (n - 2) / 6
...
k. n * (n - 1) * ... * (n - k + 1) / k!
...
As you can see, the complexity is O(n^k), as you conjectured, for any k, provided that n is large enough (n > k).
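As a quick sanity check, here is a minimal sketch (for k = 3, assuming the loop structure from the question) showing that the iteration count matches n(n-1)(n-2)/6 = C(n, 3) exactly:

from math import comb

# Count the executions of the O(1) body for the k = 3 loop nest
def count(n):
    total = 0
    for a in range(1, n + 1):
        for b in range(1, n - a + 1):
            for c in range(1, n - a - b + 1):
                total += 1  # the O(1) body
    return total

for n in (5, 10, 50):
    print(count(n), comb(n, 3))  # the two columns match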
Related
I'm trying to figure out this time complexity:
for(i=0; i<=(n/2)-1; i++){
    for (j=i+1; j<=(n/2)-1; j++){
        --some O(1) thing--
    }
}
The outer loop I understand to be on its own O(n/2). However, with the inner loop as well, I can't wrap my brain around how to break down how many times O(1) executes.
If the inner one started at j=0, I could do n/2 (inner) * n/2 (outer) = O(n^2) time complexity, right? However, since j depends on i, I'm thinking some type of summation is involved from i+1 to n/2, but I can't figure out how to set it up...
Basically I need help visualizing how many times it loops, and how to set the summation up. Thank you! :)
Assuming that m = n/2: you will see that the inner loop runs m-1, m-2, m-3, ..., 1, 0 times as i increases. Summing all of that gives 1 + 2 + ... + (m-1) = (m-1)*m/2 = O(m^2) = O(n^2).
Premise
For simplicity, let us call m = n / 2. The outer loop runs i from 0 to m - 1. The inner loop runs j from i + 1 to m - 1.
Iteration counting
We need to count how often the inner statement which you labeled O(1) is executed. That is, how often the inner loop runs in total, as executed by the outer loop. So let us take a look.
The first iteration of the outer loop generates m - 1 iterations of the inner loop. The second generates m - 2, then m - 3, m - 4, ..., 2, 1, 0.
That means that the O(1) statement is, in total, executed:
(m - 1) + (m - 2) + (m - 3) + ... + 2 + 1 + 0
That is the sum from 0 up to m - 1
sum_{i = 0}^{m - 1} i
which can be simplified to
(m^2 - m) / 2
Substitute back
Let us now substitute back m = n / 2; we get
((n / 2)^2 - n / 2) / 2
After simplifying, this is
n^2/8 - n/4
Big-O
For Big-O, we observe that it is smaller than
n^2 + 0
= n^2
which, by definition, is O(n^2).
As you see, this bound is also tight, since the leading n^2/8 term dominates. So we also receive Omega(n^2), which concludes Theta(n^2).
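A minimal sketch to double-check the exact count (assuming n is even), comparing the loop nest against n(n - 2)/8 = n^2/8 - n/4:

def count(n):
    ops = 0
    for i in range(0, n // 2):          # i = 0 .. n/2 - 1
        for j in range(i + 1, n // 2):  # j = i+1 .. n/2 - 1
            ops += 1                    # the O(1) body
    return ops

for n in (10, 100, 1000):
    print(count(n), n * (n - 2) // 8)   # the two columns match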
I've been trying to set up a clear picture of iterating through i and then j, but I get stuck when trying to make sense of the while loop.
Can someone please give me some information on how to solve something like this?
Disclaimer: this answer is long and overly verbose because I wanted to provide the OP with a "baby-steps" method rather than a result. I hope they can find some help in it, should it be needed.
If you get stuck trying to derive the complexity in one go, you can break the problem down into smaller pieces that are easier to reason about.
Introducing notations can help in this context to structure your thought process.
Let's introduce a notation for the inner while loop. We can see that the starting index j depends on n - i, so the number of operations performed by this loop will be a function of n and i. Let's represent this number of operations by G(n, i).
The outer loop depends only on n. Let's represent the number of operations by T(n).
Now, let's write down the dependency between T(n) and G(n, i), reasoning about the outer loop only - we do not care about the inner while loop for this step because we have already abstracted its complexity in the function G. So we go with:
T(n) = G(n, n - 1) + G(n, n - 2) + G(n, n - 3) + ... + G(n, 0)
= sum(k = 0, k < n, G(n, k))
Now, let's focus on the function G.
Let's introduce an additional notation and write j(t) for the value of the index j at the t-th iteration performed by the while loop.
Let's call k the value of t for which the invariant of the while loop is breached, i.e. the last time the condition is evaluated.
Let's consider an arbitrary i. You could try with a couple of specific values of i (e.g. 1, 2, n) if this helps.
We can write:
j(0) = n - i
j(1) = n - i - 3
j(2) = n - i - 6
...
j(k - 1) = n - i - 3(k - 1) such that j(k-1) >= 0
j(k) = n - i - 3k such that j(k) < 0
Finding k involves solving the inequality n - i - 3k < 0. To make it easier, let's "ignore" the fact that k is an integer and that we would need to take the integer part of the result below.
n - i - 3k < 0 <=> k = (n - i) / 3
So there are (n - i) / 3 "steps" to consider. By steps we refer here to the number of evaluations of the loop condition. The number of times the operation j <- j - 3 is performed is the latter minus one.
So we found an expression for G(n, i):
G(n, i) = (n - i) / 3
Now let's get back to the expression of T(n) we found above:
T(n) = sum(k = 0, k < n, (n - k) / 3)
Since when k varies from 0 to n - 1, n - k varies from n down to 1, we can equivalently write T(n) as:
T(n) = sum(k = 1, k <= n, k / 3)
= (1/3).sum(k = 1, k <= n, k)
= (1/6).n.(n+1)
And you can therefore conclude with
T(n) = Theta(n^2)
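The original loop isn't shown above, but here is a minimal sketch assuming the shape inferred in this answer (i running from 0 to n - 1, j starting at n - i and decreasing by 3 while j >= 0), which lets you check T(n) against n(n + 1)/6:

def count_steps(n):
    steps = 0
    for i in range(n):       # outer loop: i = 0 .. n-1
        j = n - i            # inner while loop starts at n - i
        while j >= 0:
            steps += 1       # one iteration of the while loop
            j -= 3
    return steps

for n in (100, 1000, 10000):
    print(count_steps(n) / (n * (n + 1) / 6))  # ratio tends to 1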
This resolution exhibited some patterns from which you can create your own recipe to solve similar exercises:
Consider the number of operations at individual levels (one loop at a time) and introduce notations for the functions representing them;
Find relationships between these functions;
Find an algebraic expression for the innermost functions, which don't depend on the other functions you introduced and for which the number of steps can be calculated from a loop invariant;
Using the relationships between these functions, find expressions for the functions higher in the stack.
In order to calculate the overall time complexity of the code, replace each loop with a summation. Moreover, note that the second loop runs (n - i)/3 times, since j decreases with a step size of 3. So we have:
T(n) = sum(i = 0, i < n, (n - i)/3) = (1/3)(n + (n - 1) + ... + 1) = n(n + 1)/6 = Theta(n^2)
Interview questions where I start with "this might be solved by generating all possible combinations for the array elements" are usually meant to let me find something better.
Anyway I would like to add "I would definitely prefer another solution since this is O(X)".. the question is: what is the O(X) complexity of generating all combinations for a given set?
I know that there are n! / ((n-k)!k!) combinations (binomial coefficients), but how do I get the big-O notation from that?
First, there is nothing wrong with using O(n! / ((n-k)!k!)), or any other function f(n) as O(f(n)), but I believe you are looking for a simpler expression that describes the same growth.
If you are willing to look at the size of the subset k as constant, then for k <= n - k:
n! / ((n - k)! k!) = ((n - k + 1) (n - k + 2) (n - k + 3) ... n ) / k!
But the above is actually (n ^ k + O(n ^ (k - 1))) / k!, which is in O(n ^ k)
Similarly, if n - k < k, you get O(n ^ (n - k))
Which gives us O(n ^ min{k, n - k})
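A small numerical sketch of this using Python's math.comb: for fixed k, C(n, k) * k! / n^k approaches 1, i.e. C(n, k) grows like n^k / k!:

import math

k = 3
for n in (10, 100, 1000, 10000):
    # ratio of C(n, k) to n^k / k!; it tends to 1 as n grows
    print(n, math.comb(n, k) * math.factorial(k) / n ** k)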
I know this is an old question, but it comes up as a top hit on google, and IMHO has an incorrectly marked accepted answer.
C(n,k) = n Choose k = n! / ( (n-k)! * k!)
The above function represents the number of k-element sets that can be made from an n-element set. Purely from a logical reasoning perspective, C(n, k) has to be smaller than
∑ C(n,k) ∀ k ∊ (0..n),
as this expression counts the power set. In English, the above expression says: add C(n,k) for all k from 0 to n. We know the power set to have 2 ^ n elements.
So, C(n, k) has an upper bound of 2 ^ n, which is smaller than n ^ k whenever k ≥ n / log2(n) (for example, for k near n / 2 and large n).
So to answer your question, C(n, k) has an upper bound of 2 ^ n for sure, but I don't know if there is a tighter upper bound that describes it better.
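A quick numerical illustration (a sketch) that no single C(n, k) exceeds 2^n:

import math

n = 30
# even the largest coefficient, C(30, 15), stays below 2^30
print(max(math.comb(n, k) for k in range(n + 1)), 2 ** n)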
As a follow-up to @amit's answer: an upper bound on min{k, n - k} is n / 2.
Therefore, an upper bound for the "n choose k" complexity is O(n ^ (n / 2)).
Case 1: if n - k < k
Let's suppose n = 11 and k = 8, so n - k = 3. Then
n!/((n-k)!k!) = 11!/(3!8!) = (11x10x9)/3!
Bounding each factor by 11, this is at most (11x11x11)/6 = O(11^3); since 11 equals n, that is O(n^3), and since n - k = 3, it becomes O(n^(n-k)).
Case 2: if k < n - k
Let's suppose n = 11 and k = 3, so n - k = 8. Then
n!/((n-k)!k!) = 11!/(8!3!) = (11x10x9)/3!
Again this is at most (11x11x11)/6 = O(11^3); since 11 equals n, that is O(n^3), and since k = 3, it becomes O(n^k).
Which gives us O(n^min{k,n-k})
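Checking the worked example above in code (a small sketch):

from math import comb

# C(11, 8) = C(11, 3) = 11*10*9/6 = 165, below the 11^3/6 bound used above
print(comb(11, 8), comb(11, 3), 11 * 10 * 9 // 6, 11 ** 3 / 6)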
Pseudo code:
s <- 0
for i = 1 to n do
    if A[i] = 1 then
        for j = 1 to n do
            {constant number of elementary operations}
        endfor
    else
        s <- s + A[i]
    endif
endfor
where A is an array of n integers, each of which is a random value between 1 and 6.
I'm at a loss here... picking from my notes and some other sources online, I get
T(n) = C1(N) + C2 + C3(N) + C4(N) + C5
where C1(N) and C3(N) are the for loops, and C4(N) is the constant number of elementary operations. Though I have a strong feeling that I'm wrong.
You are looping from 1..n, and each loop ALSO loops from 1..n (in the worst case). O(n^2) right?
Put another way: You shouldn't be adding C3(N). That would be the case if you had two independent for loops from 1..n. But you have a nested loop. You will run the inner loop N times, and the outer loop N times. N*N = O(n^2).
Let's think for a second about what this algorithm does.
You are basically looping through every element in the array at least once (the outer for, the one with i). Sometimes, when the value A[i] is 1, you also loop through the whole array again with the inner j loop.
In your worst case scenario, you are running against an array of all 1's.
In that case, your complexity is:
- time to init s
- n * (time to test A[i] == 1 + n * time of {constant ...})
Which means: T = T(s) + n * (T(test) + n * T(const)) = n^2 * T(const) + n*T(test) + T(s).
Asymptotically, this is a O(n^2).
But this was a worst-case analysis (the one you should perform most of the time). What if we want an average-case analysis?
In that case, assuming a uniform distribution of values in A, you are going to enter the j loop, on average, 1/6 of the time.
So you'd get:
- time to init s
- n * (time to test A[i] == 1 + 1/6 * n * time of {constant ...} + 5/6 * T(increment s))
Which means: T = T(s) + n * (T(test) + 1/6 * n * T(const) + 5/6 * T(inc s)) = 1/6* n^2 * T(const) + n * (5/6 * T(inc s) + T(test)) + T(s).
Again, asymptotically this is still O(n^2), but depending on the value of T(inc s) the constant could be larger or smaller than in the other case.
Fun exercise: can you estimate the expected average running time for a generic distribution of values in A, instead of a uniform one?
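Here is a small simulation sketch of the average case (uniform values in 1..6), counting executions of the inner constant-time body; the count grows like n^2 / 6:

import random

def inner_ops(A):
    s = 0
    ops = 0
    for x in A:                      # the outer i loop
        if x == 1:
            for _ in range(len(A)):  # the inner j loop
                ops += 1             # the constant-time body
        else:
            s += x
    return ops

n = 3000
A = [random.randint(1, 6) for _ in range(n)]
print(inner_ops(A), n * n / 6)  # same order of magnitude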
Many algorithms have loops in them that look like this:
for a from 1 to n
    for b from 1 to a
        for c from 1 to b
            for d from 1 to c
                for e from 1 to d
                    ...
                    // Do O(1) work
In other words, the loop nest is k layers deep, the outer layer loops from 1 to n, and each inner layer loops up from 1 to the index above it. This shows up, for example, in code to iterate over all k-tuples of positions inside an array.
Assuming that k is fixed, is the runtime of this code always Θ(n^k)? For the special case where k = 1, the work is Θ(n) because it's just a standard loop over an array, and for the case where k = 2 the work is Θ(n^2) because the work done by the inner loop is given by
1 + 2 + ... + n = n(n+1)/2 = Θ(n^2)
Does this pattern continue when k gets large? Or is it just a coincidence?
Yes, the time complexity will be Θ(n^k). One way to measure the complexity of this code is to look at what values it generates. One particularly useful observation is that these loops will iterate over all non-increasing k-tuples drawn from {1, 2, 3, ..., n} (equivalently, all k-element multisets, since each index may equal the one above it) and will spend O(1) time producing each one of them. Therefore, we can say that the runtime is given by the number of such multisets. For an n-element set, the number of k-element multisets is n + k - 1 choose k, which is equal to
(n + k - 1)! / (k!(n - 1)!)
This is given by
(n + k - 1)(n + k - 2) ... (n + 1) n / k!
Each of the k factors is at most n + k - 1 ≤ 2n (for fixed k and n ≥ k - 1), so this value is certainly no greater than this one:
2n · 2n · ... · 2n / k! (with k copies of 2n)
= 2^k n^k / k!
This expression is O(n^k), since the 2^k / k! term is a fixed constant. Similarly, each of the k factors is at least n, so the expression is greater than or equal to
n · n · ... · n / k! (with k copies of n)
= n^k / k!
This is Ω(n^k), since 1 / k! is a fixed constant.
Since the runtime is O(n^k) and Ω(n^k), the runtime is Θ(n^k).
Hope this helps!
You may use the following equation:
c · (n + r - 1 choose r) = c · (n + r - 1)! / (r!(n - 1)!) = Θ(n^r)
where c is the number of constant-time operations inside the innermost loop, n is the number of elements, and r is the number of nested loops.
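A short sketch verifying that the loop-nest iteration count equals C(n + r - 1, r), using itertools.combinations_with_replacement, which enumerates exactly the index tuples the nest visits (up to ordering):

from itertools import combinations_with_replacement
from math import comb

def loop_count(n, r):
    # one tuple per iteration of the r-deep loop nest
    return sum(1 for _ in combinations_with_replacement(range(1, n + 1), r))

for n, r in [(10, 1), (10, 2), (10, 3), (20, 4)]:
    print(loop_count(n, r), comb(n + r - 1, r))  # the two columns match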