Discrete Mathematics Big-O notation Algorithm Complexity - algorithm

I can probably figure out part b if you can help me do part a. I've been looking at this and similar problems all day, and I'm just having trouble grasping what to do with nested loops. For the first loop there are n iterations, for the second there are n-1, and for the third there are n-1. Am I thinking about this correctly?
Consider the following algorithm,
which takes as input a sequence of n integers a1, a2, ..., an
and produces as output a matrix M = {mij}
where mij is the minimum term
in the sequence of integers ai, ai+1, ..., aj for j >= i, and mij = 0 otherwise.
initialize M so that mij = ai if j >= i and mij = 0 otherwise
for i := 1 to n do
    for j := i+1 to n do
        for k := i+1 to j do
            m[i][j] := min(m[i][j], a[k])
        end
    end
end
return M = {m[i][j]}
(a) Show that this algorithm uses Big-O(n^3) comparisons to compute the matrix M.
(b) Show that this algorithm uses Big-Omega(n^3) comparisons to compute the matrix M.
Using this fact and part (a), conclude that the algorithm uses Big-Theta(n^3) comparisons.

In part (a), you need to find an upper bound for the number of min operations.
To do so, note that the above algorithm performs fewer min operations than the following:
for i = 1 to n
    for j = 1 to n // bigger range than your algorithm
        for k = 1 to n // bigger range than your algorithm
            (something with min)
The above performs exactly n^3 min operations, so your algorithm performs fewer than n^3 min operations.
From this we can conclude: #minOps <= 1 * n^3 (for each n > 10, where 10 is arbitrary).
By the definition of Big-O, this means the algorithm is O(n^3).
You said you can figure out (b) alone, so I'll let you try it :)
Hint: the middle loop has more iterations than for j = i+1 to n/2.

For each value of i, the two inner nested loops together perform at most (n-i)^2 operations (the j loop runs at most n-i times and, for each j, the k loop runs at most n-i times). Summing over the outer loop i = 1 to n, the total is bounded by a series like 1^2 + 2^2 + 3^2 + ... + n^2. This summation equals n(n+1)(2n+1)/6; ignoring lower-order terms, the order is O(n^3).
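For a concrete check (this sketch is mine, not part of either answer above), you can run the pseudocode on small inputs and count the min operations; the count grows proportionally to n^3, with the ratio to n^3 settling near 1/6:

def count_min_ops(n):
    """Run the nested loops from the question (0-based indices) and count min operations."""
    a = list(range(1, n + 1))                    # any sequence of n integers
    # initialize M so that m[i][j] = a[i] if j >= i, and 0 otherwise
    m = [[a[i] if j >= i else 0 for j in range(n)] for i in range(n)]
    ops = 0
    for i in range(n):                           # i = 1..n in the pseudocode
        for j in range(i + 1, n):                # j = i+1..n
            for k in range(i + 1, j + 1):        # k = i+1..j
                m[i][j] = min(m[i][j], a[k])
                ops += 1
    return ops

for n in (10, 50, 100):
    ops = count_min_ops(n)
    print(n, ops, round(ops / n**3, 4))          # ratio settles near 1/6, so Theta(n^3)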


Finding the Big O of pseudocode

Here's the code:
y = 0
for j=0 to n:
    for k=0 to (j*n):
        y+=2
My logic is that the inner for loop contributes this summation, using the known formula that the sum of i from 0 to n is n(n+1)/2:
(j*n)(j*n + 1)/2   # in this case, j*n is what we're summing to
Then this inner loop is repeated for j=0 to n, which by the same logic lets me sum that expression over j from 0 to n:
((n(n+1)/2) * n)((n(n+1)/2) * n + 1) / 2
where I substituted (n(n+1)/2) for j. After doing the multiplications I end up with
O(n^6)
I can't tell if my logic is sound or if I'm missing something because that number seems big. Thanks.
We can make a back of the envelope calculation.
j is ranging from 0 to n. So, the highest number for j is n. That is the absolute worst case for the inner loop.
So, the absolute worst case for the inner loop is if j == n, in which case the loop has j * n == n * n == n² iterations.
Meaning, the inner loop will in the absolute worst case have n² iterations. The outer loop, in turn, has n iterations, which means that our over-estimated, absolute worst-case upper bound is O(n³). It can't be worse than that. In fact, we have over-estimated by assuming that j * n == n², so we know it must definitely be less than n³.
Now, we can try to find an even more exact bound. In fact, we can find the exact number of iterations; we don't even need Bachmann-Landau notation.
Under the assumption that the loop bounds are exclusive, the statement in the inner loop will be executed (n³ - n²) / 2 times, and y will be n³ - n². (Says Wolfram Alpha.)
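For what it's worth, here is a small sketch of mine (assuming exclusive loop bounds, as in the answer above) that runs the loops and compares the step count and the final y with the closed forms:

def run(n):
    """Run the loops from the question with exclusive upper bounds."""
    y = 0
    steps = 0
    for j in range(n):               # j = 0 .. n-1
        for k in range(j * n):       # k = 0 .. j*n - 1
            y += 2
            steps += 1
    return y, steps

for n in (5, 10, 20):
    y, steps = run(n)
    print(n, steps, (n**3 - n**2) // 2, y, n**3 - n**2)   # the paired columns match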

How to analyze the complexity of this algorithm? In terms of T(n) [closed]

Analyze the complexity of the following algorithm. Let T(n) be the running time of the algorithm; determine a function f(n) such that T(n) = O(f(n)). Also state whether T(n) = Θ(f(n)) holds. The answers must be justified.
I have never done this kind of exercise before.
Could someone explain what I have to analyze and how can I do it?
j=1, k=0;
while j <= n do
    for l=1 to n-j do
        k = k+j;
    end for
    j = j*4;
end while
Thank you.
Step 1
Following on from the comments, the value of j can be written as a power of 4. Therefore the code can be re-written in the following way:
i=0, k=0;                    // new loop variable i
while (j=pow(4,i)) <= n do   // equivalent loop condition which calculates j
    for l=1 to n-j do
        k = k+j;
    end for
    i = i+1;                 // equivalent to j=j*4
end while
The value of i increases as 0, 1, 2, 3, 4 ..., and the value of j as 1, 4, 16, 64, 256 ... (i.e. powers of 4).
Step 2
What is the maximum value of i, i.e. how many times does the outer loop run? Inverting the equivalent loop condition:
pow(4,i) <= n // loop condition inequality
--> i <= log4(n) // take logarithm base-4 of both sides
--> max(i) = floor(log4(n)) // round down
Now that the maximum value of i is known, it's time to re-write the code again:
i=0, k=0;
m = floor(log4(n))      // maximum value of i
while i <= m do         // equivalent loop condition in terms of i only
    j = pow(4,i)        // value of j for each i
    for l=1 to n-j do
        k = k+j;
    end for
    i = i+1;
end while
Step 3
You have correctly deduced that the inner loop runs for n - j times for every outer loop. This can be summed over all values of j to give the total time complexity:
T(n) = ∑_{j} (n - j)                     // summed over the values j = 1, 4, 16, ... ≤ n taken by the outer loop
     = ∑_{i=0}^{m} (n - pow(4,i))        // using the results of steps 1) and 2)
     = (m+1)*n - ∑_{i=0}^{m} pow(4,i)    // separate the sum into two parts
       \______/   \_______________/
           A              B
The term A is obviously O(n log n), because m=floor(log4(n)). What about B?
Step 4
B is a geometric series, for which there is a standard formula (see Wikipedia): the sum of the first n terms of a geometric series with first term "a" and common ratio "r" is a*(pow(r,n) - 1)/(r - 1) for r ≠ 1.
Substituting the relevant numbers "a" = 1, "n" = m+1, "r" = 4:
B = (pow(4,m+1) - 1) / (4 - 1)
  = (pow(4, floor(log4(n))+1) - 1) / 3
If a number is rounded down (floor), the result is always greater than the original value minus 1. Therefore m can be asymptotically written as:
m = log4(n) + O(1)
--> B = (pow(4, log4(n) + O(1)) - 1) / 3
      = (pow(4, O(1)) * n - 1) / 3     // pow(4, O(1)) is a constant factor, O(1)
      = O(n)
Step 5
A = O(n log n), B = O(n), so asymptotically A overshadows B.
The total time complexity is O(n log n).
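As a cross-check of both the exact count and the asymptotics (my own sketch, not part of the original answer), one can brute-force count the k=k+j executions and compare them with the closed form (m+1)*n - (pow(4,m+1) - 1)/3 that follows from steps 3) and 4):

import math

def inner_count(n):
    """Count executions of k = k + j in the loops from the question."""
    j, k, count = 1, 0, 0
    while j <= n:
        for _ in range(n - j):       # l = 1 .. n-j
            k += j
            count += 1
        j *= 4
    return count

for n in (100, 10_000, 100_000):
    m = 0
    while 4 ** (m + 1) <= n:         # m = floor(log4(n)), computed exactly
        m += 1
    closed_form = (m + 1) * n - (4 ** (m + 1) - 1) // 3
    actual = inner_count(n)
    # actual equals closed_form, and actual / (n * log4(n)) stays bounded => Theta(n log n)
    print(n, actual, closed_form, round(actual / (n * math.log(n, 4)), 3))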
Consider the number of times each instruction is executed as a function of n (the variable input). Let's call that the cost of each instruction. Typically, some parts of the algorithm run significantly more often than other parts. Also typically, this larger count asymptotically dominates all the others, meaning that as n grows larger, the cost of all other instructions becomes negligible. Once you understand that, you simply have to figure out the cost of the most significant instruction, or at least what it is proportional to.
In your example, two instructions are potentially costly: say k=k+j; is executed x times, and j=j*4; is executed y times.
j=1, k=0;              // Negligible
while j <= n do
    for l=1 to n-j do
        k = k+j;       // Run x times
    end for
    j = j*4;           // Run y times
end while
Being tied to only one loop, y is easier to determine. The outer loop runs for j from 1 to n, with j growing exponentially: its value follows the sequence [1, 4, 16, 64, ...] (the i-th term is 4^i, starting at 0). That simply means that y is proportional to the logarithm of n (of any base, because all logarithms are proportional). So y = O(log n).
Now for x: we know it is a multiple of y since it is tied to an inner loop. For each time the outer loop runs, this inner loop runs for l from 1 to n-j, with l growing linearly (it's a for loop). That means it simply runs n-j-1 times, or n-1 - 4^i with i being the index of the current outer loop, starting at 0.
Since y = O(log n), x is proportional to the sum of n - 1 - 4^i, for i from 0 to log n, or
(n-1 - 4^0) + (n-1 - 4^1) + (n-1 - 4^2) + ... =
(log n) * (n-1) - (1 - 4^(log n)) / (1 - 4) =
O(n log n) - O(n) =
O(n log n)
And here is your answer: x = O(n log n), which dominates all other costs, so the total complexity of the algorithm is O(n log n).
You need to calculate how many times each line will execute.
j=1, k=0;              // 1
while j <= n do        // floor(log4(n)) + 2  (j takes the values 1, 4, 16, ..., plus one final failing test)
    for l=1 to n-j do  // ∑ (n - j), summed over those values of j
        k = k+j;       // ∑ (n - j)
    end for
    j = j*4;           // floor(log4(n)) + 1
end while
total complexity [add the execution counts of all lines]
= 1 + (floor(log4(n)) + 2) + ∑ (n - j) + ∑ (n - j) + (floor(log4(n)) + 1)
The dominant term is ∑ (n - j) ≈ n * log4(n); take that term and skip constant factors, and you are left with n log n.
Total runtime complexity will be O(n log n).
Looks like a homework question, but to give you a hint: the complexity can often be estimated from the number of nested loops. One loop means O(n), two loops O(n^2), and three loops O(n^3).
This only holds for nested loops:
while () {
    while () {
        while () {
        }
    }
}
this is O(n^3)
But...
while () {
}
while () {
}
is still O(n), because the loops do not run inside each other and each will stop after n iterations.
EDIT
The correct answer here should be O(n*log(n)), because for the inner for-loop the number of iterations depends on the value of j, which can be different in every iteration of the outer loop.

O(n) runtime algorithm

The algorithm below has runtime O(n) according to our professor; however, I am confused as to why it is not
O(n log(n)), since the outer loop can run up to log(n) times and the inner loop can run up to n times.
Algorithm Loop5(n)
    i = 1
    while i ≤ n
        j = 1
        while j ≤ i
            j = j + 1
        i = i*2
Your professor was right, the running time is O(n).
In the iteration of the outer while-loop where i = 2^k, for k = 0, 1, ..., log n, the inner while-loop makes O(i) iterations. (When I say log n I mean the base-2 logarithm log_2 n.)
The running time is O(1+2+2^2+2^3+...+2^k) for k=floor(log n). This sums to O(2^{k+1}) which is O(2^{log n}). (This follows from the formula for the partial sum of geometric series.)
Because 2^{log n} = n, the total running time is O(n).
For the interested, here's a proof that the powers of two really sum to what I claim they sum to. (This is a very special case of a more general result.)
Claim. For any natural k, we have 1+2+2^2+...+2^k = 2^{k+1}-1.
Proof. Note that (2-1)*(1+2+2^2+...+2^k) = (2 - 1) + (2^2 - 2) + ... + (2^{k+1} - 2^k), where every power 2^i with 0 < i < k+1 appears once with a plus sign and once with a minus sign and cancels out; only -2^0 and +2^{k+1} remain, so we are left with 2^{k+1} - 1. QED.
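To see the bound numerically (a sketch of mine, not part of the answer), count the iterations of the inner while-loop and compare with n; by the geometric-sum argument above the count is at most 2n - 1:

def loop5_steps(n):
    """Count iterations of the inner while-loop of Loop5(n)."""
    steps = 0
    i = 1
    while i <= n:
        j = 1
        while j <= i:
            j += 1
            steps += 1
        i *= 2
    return steps

for n in (10, 1_000, 1_000_000):
    s = loop5_steps(n)
    print(n, s, round(s / n, 3))     # ratio stays below 2, so the running time is O(n)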

Counting the cost of nested for loops

I am trying to count the cost of the following algorithm in terms of a function of n.
for i := 1 to n do
    for j := i to n do
        k := 0
I understand that the inner for loop will iterate (n-1) + (n-2) + ... + (n-n) times, however I don't know how to express this mathematically in a simpler form. How can I do this?
(n-1) + (n-2) + ... + (n-n) is equal to the sum of all integers from 0 to N-1. So it is equal to the (N-1)th triangular number, which can be found with the formula
Tk = k * (k+1) / 2
With k = N-1 this is (1/2)*N^2 - (1/2)*N.
When calculating Big O complexity, you discard constant multipliers and all but the fastest-growing component, so an algorithm that takes (1/2)*N^2 - (1/2)*N steps to execute runs in O(n^2) time.
The inner loop, on average, iterates about ½n times.
In "Big O" notation, you only care about the fastest-growing term.
That is, for example, if you have:
n³ + n + log(n) + 1234
then the only thing that matters is the n³ term, so this is O(n³).
So in your case:
½n × n = ½n²
which is O(n²) because the ½ doesn't matter.
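As a small check (mine, not part of either answer; it assumes both loop bounds are inclusive, as written), counting the inner-loop iterations directly gives n(n+1)/2 — it would be n(n-1)/2 if the inner loop started at i+1 — and either way the count is (1/2)n² plus lower-order terms:

def count_inner(n):
    """Count how many times the body k := 0 runs, with inclusive loop bounds."""
    count = 0
    for i in range(1, n + 1):        # i = 1 .. n
        for j in range(i, n + 1):    # j = i .. n
            count += 1
    return count

for n in (10, 100, 1000):
    c = count_inner(n)
    print(n, c, c / n**2)            # ratio approaches 1/2, i.e. Theta(n^2)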

How do I find the time complexity T(n) and show that it is tightly bounded (Big Theta)?

I'm trying to figure out how to give a worst-case time complexity. I'm not sure about my analysis. I have read that nested for loops have Big O of n^2; is this also correct for a for loop with a while loop inside?
// A is an array of real numbers.
// The size of A is n. i, j are of type int, key is of type real.
Procedure IS(A)
    for j = 2 to length[A]
    {
        key = A[j]
        i = j-1
        while i > 0 and A[i] > key
        {
            A[i+1] = A[i]
            i = i-1
        }
        A[i+1] = key
    }
so far I have:
j=2 (+1 op)
i>0 (+n ops)
A[i] > key (+n ops)
so T(n) = 2n+1?
But I'm not sure if I have to go inside of the while and for loops to analyze the worst-case time complexity...
Now I have to prove that it is tightly bound, that is Big theta.
I've read that nested for loops have Big O of n^2. Is this also true for Big Theta? If not, how would I go about finding Big Theta?!
(Notation: C1 = C sub 1, C2 = C sub 2, and no = n naught; all are positive real numbers.)
To find the T(n) I looked at the values of j and looked at how many times the while loop executed:
values of J: 2, 3, 4, ... n
Loop executes: 1, 2, 3, ... n
Analysis:
Take the summation of the while loop executions and recognize that it is (n(n+1))/2
I will assign this as my T(n) and prove it is tightly bounded by n^2.
That is n(n+1)/2= θ(n^2)
Scratch Work:
Find C1, C2, no є R+ such that 0 ≤ C1(n^2) ≤ (n(n+1))/2 ≤ C2(n^2) for all n ≥ no
To make 0 ≤ C1(n^2) true, C1 and no can be any positive reals
To make C1(n^2) ≤ (n(n+1))/2 hold for all large n, C1 must be ≤ 1/2
To make (n(n+1))/2 ≤ C2(n^2), C2 must be ≥ 1
PF:
Find C1, C2, no є R+ such that 0 ≤ C1(n^2) ≤ (n(n+1))/2 ≤ C2(n^2) for all n ≥ no
Let C1= 1/2, C2= 1 and no = 1.
show that 0 ≤ C1(n^2) is true
C1(n^2) = n^2/2
Since n ≥ no = 1, we have n^2/2 ≥ no^2/2 = 1/2 > 0.
Therefore C1(n^2) ≥ 0 is proven true!
show that C1(n^2) ≤ (n(n+1))/2 is true
C1(n^2) ≤ (n(n+1))/2
n^2/2 ≤ (n(n+1))/2
n^2 ≤ n(n+1)
n^2 ≤ n^2+n
0 ≤ n
This we know is true since n ≥ no = 1
Therefore C1(n^2) ≤ (n(n+1))/2 is proven true!
Show that (n(n+1))/2 ≤ C2(n^2) is true
(n(n+1))/2 ≤ C2(n^2)
(n+1)/2 ≤ C2(n)
n+1 ≤ 2 C2(n)
n+1 ≤ 2(n)
1 ≤ 2n-n
1 ≤ n(2-1) = n
1≤ n
Also, we know this to be true since n ≥ no = 1
Hence by 1, 2 and 3, (n(n+1))/2 = Θ(n^2) is true since
0 ≤ C1(n^2) ≤ (n(n+1))/2 ≤ C2(n^2) for all n ≥ no
Tell me what you think, guys... I'm trying to understand this material and would like y'all's input!
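As a quick numeric sanity check of the chosen constants (my own addition, not part of the post above), the sandwich inequality can be verified for a range of n:

# Check 0 <= C1*n^2 <= n(n+1)/2 <= C2*n^2 with C1 = 1/2, C2 = 1, no = 1.
C1, C2 = 0.5, 1.0
for n in range(1, 10_001):
    t = n * (n + 1) / 2
    assert 0 <= C1 * n**2 <= t <= C2 * n**2, n
print("0 <= C1*n^2 <= n(n+1)/2 <= C2*n^2 holds for n = 1 .. 10000")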
You seem to be implementing the insertion sort algorithm, which Wikipedia claims is O(N^2).
Generally, you break down components based on your variable N rather than your constant C when dealing with Big-O. In your case, all you need to do is look at the loops.
Your two loops are (worse cases):
for j=2 to length[A]
    i=j-1
    while i > 0
        /*action*/
        i=i-1
The outer loop is O(N), because it directly relates to the number of elements.
Notice how your inner loop depends on the progress of the outer loop. That means that (ignoring off-by-one issues) the inner and outer loops are related as follows:
 j's       inner
value      loops
-----      -----
  2          1
  3          2
  4          3
 ...        ...
  N         N-1
-----      -----
total    (N-1)*N/2
So the total number of times that /*action*/ is encountered is (N^2 - N)/2, which is O(N^2).
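For a concrete check (a sketch of mine, not part of the answer), run the sort on reverse-sorted input, the worst case, and count how often /*action*/ executes; the count matches (N^2 - N)/2:

def count_shifts(a):
    """Insertion sort; count how many times the shift A[i+1] = A[i] (/*action*/) runs."""
    a = list(a)
    shifts = 0
    for j in range(1, len(a)):           # j = 2 .. length[A] in the pseudocode
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            shifts += 1
        a[i + 1] = key
    return shifts

for n in (10, 100, 1000):
    worst = count_shifts(range(n, 0, -1))    # reverse-sorted input
    print(n, worst, (n * n - n) // 2)        # the last two columns agree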
Looking at the number of nested loops isn't the best way to go about getting a solution. It's better to look at the "work" that's being done in the code, under a heavy load N. For example,
for(int i = 0; i < a.size(); i++)
{
    for(int j = 0; j < a.size(); j++)
    {
        // Do stuff
        i++;
    }
}
is O(N).
A function f is in Big-Theta of g if it is both in Big-Oh of g and in Big-Omega of g.
The worst case happens when the data A is a monotonically decreasing sequence. Then, for every iteration of the outer loop, the while loop runs all the way down. If each statement contributed a time value of 1, the total time would be 5*(1 + 2 + ... + (n - 2)) = 5*(n - 2)*(n - 1)/2. This gives a quadratic dependence on the data.
However, if the data A is a monotonically increasing sequence, the condition A[i] > key always fails immediately. Thus the body of the outer loop executes in constant time, and the outer loop runs n - 1 times. The best case of f therefore has a linear dependence on the data.
For the average case, we take the next number in A and find its place in the part that has already been sorted. On average, this number falls in the middle of that range, which implies the inner while loop runs about half as often as in the worst case, still giving a quadratic dependence on the data.
Big O is (basically) about how many times the elements in your loop must be looked at in order to complete a task.
For example, an O(n) algorithm will iterate through every element just once.
An O(1) algorithm will not have to iterate through every element at all; it knows exactly where in the array to look because it has an index. An example of this is an array lookup or a hash table.
The reason a loop inside of a loop is O(n^2) is that for every element, the inner loop iterates over every element again, giving n*n iterations. Changing the type of the loop has nothing to do with it, since it's essentially about the number of iterations.
There are approaches to algorithms that allow you to reduce the number of iterations you need. An example of these is "divide & conquer" algorithms like Quicksort, which, if I recall correctly, is O(n log(n)).
It's tough to come up with a better alternative to your example without knowing more specifically what you're trying to accomplish.
