BruteForceMedian algorithm appears to have a quadratic efficiency. (Why?)

This algorithm appears to have a quadratic efficiency. (Why?)

To analyze complexity, you just have to count the number of operations.
Here, there are two nested loops:
for i in 0 to n - 1 do
    for j in 0 to n - 1 do
        operation() // Do something
    done
done
With i = 0, operation() will be run for every j in [0, n-1], that is, n times. Then i is incremented and the process repeats until i > n-1. In other words, the operation runs n*n times in the worst case.
So in the end, this code performs n^2 operations, which is why it has quadratic efficiency.
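As a sanity check (a sketch in TypeScript, mine rather than the original answerer's), you can count the operations directly and watch them grow as n^2:

function countNestedOps(n: number): number {
    let count = 0;
    for (let i = 0; i < n; i++) {
        for (let j = 0; j < n; j++) {
            count++; // stands in for operation()
        }
    }
    return count;
}
console.log(countNestedOps(10)); // 100, i.e. n * n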

This is a trick question: it appears to be both n^2 and 2^n at the same time, depending on whether the if statement inside the loop is executed.

Related

Time complexity of an algorithm that runs 1+2+...+n times

To start off, I found a Stack Overflow question that states the time complexity to be O(n^2), but it doesn't explain why O(n^2) is the time complexity; instead it asks for an example of such an algorithm. From my understanding, an algorithm that runs 1+2+3+...+n times would be less than O(n^2). For example, take this function:
function f(n: number): number {
    let sum = 0;
    for (let i = 0; i < n; i++) {
        for (let j = 0; j < i + 1; j++) {
            sum += 1;
        }
    }
    return sum;
}
Here are some input and return values:
n    sum
1    1
2    3
3    6
4    10
5    15
6    21
7    28
From this table you can see that this algorithm runs in less than O(n^2) time but more than O(n). I also realize that an algorithm that runs 1+(1+2)+(1+2+3)+...+(1+2+3+...+n) times is truly O(n^2). For the algorithm stated in the problem, do we just say it runs in O(n^2) because it runs more than O(log n) times?
It's known that 1 + 2 + ... + n has the closed form n * (n + 1) / 2. Even if you didn't know that, consider that when i gets to n, the inner loop runs at most n times. So you have exactly n iterations of the outer loop (over i), each running the inner loop (over j) at most n times, which makes the O(n^2) more apparent.
I agree that the complexity would be exactly n^2 if the inner loop also ran from 0 to n, so you have your reasons to think that a loop i from 0 to n with another loop j from 0 to i has to perform better, and that's true. But with big-O notation you're measuring the growth rate of the algorithm's running time, not the exact number of operations.
P.S. O(log n) is usually achieved when you repeatedly split the main problem into smaller sub-problems.
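For instance (an illustrative sketch, not part of the original answer), binary search reaches O(log n) by halving the remaining sub-problem at every step:

function binarySearch(sorted: number[], target: number): number {
    let lo = 0;
    let hi = sorted.length - 1;
    while (lo <= hi) {
        const mid = Math.floor((lo + hi) / 2);
        if (sorted[mid] === target) return mid;
        if (sorted[mid] < target) lo = mid + 1; // discard the lower half
        else hi = mid - 1;                      // discard the upper half
    }
    return -1; // not found
}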
I think you should interpret the table differently. The O(N^2) complexity says that if you double the input N, the runtime should quadruple (take 4 times as long). In this case, the function returns a number mirroring its runtime; I use f(N) as shorthand for it.
So say N goes from 1 to 2, which means the input has doubled (2/1 = 2). The runtime then has gone from f(1) to f(2), which means it has increased f(2)/f(1) = 3/1 = 3 times. That is not 4 times, but the Big-O complexity measure is asymptotic, dealing with the situation where N approaches infinity. If we test another input doubling from the table, we have f(6)/f(3) = 21/6 = 3.5. It is already closer to 4.
Let us now stray outside the table and try more doublings with bigger N. For example we have f(200)/f(100) = 20100/5050 = 3.980 and f(5000)/f(2500) = 12502500/3126250 = 3.999. The trend is clear. As N approaches infinity, a doubled input tends toward a quadrupled runtime. And that is the hallmark of O(N^2).
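If you'd rather not compute these ratios by hand, here is a small sketch of my own that reuses the f defined in the question and prints the doubling ratio for growing N:

for (const N of [1, 3, 100, 2500]) {
    console.log(`f(${2 * N}) / f(${N}) = ${f(2 * N) / f(N)}`);
}
// Prints ratios approaching 4: 3, 3.5, 3.9801..., 3.9992...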

Is this program quadratic or linear?

I have the following pseudocode:
for i = 1 to 3*n
    for j = 1 to i*i
        for k = 1 to j
            if j mod i = 1 then
                s = s + 1
            endif
        next k
    next j
next i
When I analyze the number of times the statement s = s + 1 is performed, assuming that this operation takes constant time, is the resulting complexity quadratic, or is it linear? The value of n can be any positive integer.
The calculations that I made are the following:
Definitely not quadratic, but it should at least be polynomial.
It goes through 3n iterations.
On each iteration it does up to 9n^2 more.
On each of those it does up to 9n^2 more.
So I think it would be O(n^5).
When talking about running time, you should always make explicit what quantity you are measuring it in terms of.
If we assume you are talking about running time in terms of n, the answer is O(n^5). This is because, once we get rid of the constant factors, what you are doing boils down to this:
do n times:
    do n^2 times:
        do n^2 times:
            do something
And n * n^2 * n^2 = n^5
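As an empirical cross-check (my own sketch, not from the answer), you can count the innermost iterations and watch the ratio between n and 2n approach 2^5 = 32:

function innerIterations(n: number): number {
    let count = 0;
    for (let i = 1; i <= 3 * n; i++) {
        for (let j = 1; j <= i * i; j++) {
            count += j; // the k-loop body runs exactly j times
        }
    }
    return count;
}
console.log(innerIterations(40) / innerIterations(20)); // ≈ 31.3, approaching 2^5 = 32 as n grows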

Theoretical analysis of comparisons

I'm first asked to develop a simple sorting algorithm that sorts an array of integers in ascending order, and to implement it in code:
int i, j;
for (i = 0; i < n - 1; i++)
{
    if (A[i] > A[i+1])
        swap(A, i+1, i);
    for (j = n - 2; j > 0; j--)
        if (A[j] < A[j-1])
            swap(A, j-1, j);
}
Now that I have the sort function, I'm asked to do a theoretical analysis for the running time of the algorithm. It says that the answer is O(n^2) but I'm not quite sure how to prove that complexity.
What I know so far is that the 1st loop runs for i from 0 to n-2 (so n-1 times), and the 2nd loop for j from n-2 down to 1 (so n-2 times).
Doing the recurrence relation:
let C(n) = the number of comparisons
C(2) = C(n-1) + C(n-2) = C(1) + C(0)
C(2) = 0 comparisons?
C(n) in general would then be C(n-1) + C(n-2) comparisons?
If anyone could guide me step by step, that would be greatly appreciated.
When doing a "real" big-O time-complexity analysis, you select one operation to count, obviously the one that dominates the running time. In your case you could choose either the comparison or the swap, since in the worst case there will be a lot of swaps, right?
Then you calculate how many times this operation will be invoked as the input scales. In your case you are quite right with your analysis; you simply do this:
C = O((n - 1)(n - 2)) = O(n^2 - 3n + 2) = O(n^2)
I come up with these numbers by reasoning about the flow of the code. You have one outer for-loop iterating, right? Inside that for-loop you have another for-loop iterating. The first for-loop iterates n - 1 times and the second one n - 2 times. Since they are nested, the actual number of iterations is the product of the two, because for every iteration of the outer loop, the whole inner loop runs, doing n - 2 iterations.
As you might know, you always remove all but the dominating term when doing time-complexity analysis.
There is a lot more to say about worst-case and average-case complexity and about lower bounds, but this will hopefully help you grasp how to reason about big-O time-complexity analysis.
I've seen a lot of different techniques for analyzing the expression, such as your recurrence relation, but I personally prefer to just reason about the code instead. Few algorithms have upper bounds that are hard to compute; lower bounds, on the other hand, are in general very hard to compute.
Your analysis is correct: the outer loop makes n-1 iterations. The inner loop makes n-2.
So, for each iteration of the outer loop, you have n-2 iterations of the inner loop. Thus, the total number of steps is (n-1)(n-2) = n^2 - 3n + 2.
The dominating term (which is what matters in big-O analysis) is n^2, so you get O(n^2) runtime.
I personally wouldn't use the recurrence method in this case. Writing the recurrence equation is usually helpful in recursive functions, but in simpler algorithms like this, sometimes it's just easier to look at the code and do some simple math.
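If you prefer to check by experiment, here is a sketch of my own that mirrors the code from the question and counts the comparisons directly:

function sortWithCount(A: number[]): number {
    const n = A.length;
    let comparisons = 0;
    const swap = (arr: number[], a: number, b: number): void => {
        [arr[a], arr[b]] = [arr[b], arr[a]];
    };
    for (let i = 0; i < n - 1; i++) {
        comparisons++;                           // the outer if
        if (A[i] > A[i + 1]) swap(A, i + 1, i);
        for (let j = n - 2; j > 0; j--) {
            comparisons++;                       // the inner if
            if (A[j] < A[j - 1]) swap(A, j - 1, j);
        }
    }
    return comparisons;
}
// For n = 10 this returns 81 comparisons: (n - 1) + (n - 1)(n - 2) = 9 + 72,
// dominated by the (n - 1)(n - 2) term.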

Need help understanding Big-O

I'm in a Data Structures class now, and we're covering Big-O as a means of algorithm analysis. Unfortunately, after many hours of study, I'm still somewhat confused. I understand what Big-O is, and several good code examples I found online make sense. However, I have a homework question I don't understand. Any explanation of the following would be greatly appreciated.
Determine how many times the output statement is executed in each of the following fragments (give a number in terms of n). Then indicate whether the algorithm is O(n) or O(n^2):
for (int i = 0; i < n; i++)
    for (int j = 0; j < i; j++)
        if (j % i == 0)
            System.out.println(i + " " + j);
Suppose n = 5. Then the values of i would be 0, 1, 2, 3, and 4. Since the inner loop runs while j < i, it will iterate 0, 1, 2, 3, and 4 times, respectively. Because of this, the total number of times that the if comparison will execute is 0+1+2+3+4, i.e. the sum of the integers from 1 to n-1. The formula for the sum of the integers from 1 to m is m*(m+1)/2, so with m = n-1 this gives (n-1)*n/2 = n^2/2 - n/2.
Therefore, the algorithm itself is O(n^2).
For the number of times that something is printed, we need to look at the times that j%i=0. When j < i, the only time that this can be true is when j = 0, so this is the number of times that j = 0 and i is not 0. This means that it is only true once in each iteration of the outer loop, except the first iteration (when i = 0).
Therefore, System.out.println is called n-1 times.
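To verify both counts, here is a sketch (mine, in TypeScript rather than Java) that mirrors the fragment and tallies the comparisons and the prints:

function countFragment(n: number): { comparisons: number; prints: number } {
    let comparisons = 0;
    let prints = 0;
    for (let i = 0; i < n; i++) {
        for (let j = 0; j < i; j++) {
            comparisons++;
            if (j % i === 0) prints++; // stands in for System.out.println
        }
    }
    return { comparisons, prints };
}
// countFragment(5) → { comparisons: 10, prints: 4 }, i.e. (n-1)n/2 and n-1.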
A simple way to look at it is:
A single loop has a complexity of O(n).
A loop within a loop has a complexity of O(n^2), and so on.
So the above loop has a complexity of O(n^2).
This function appears to execute in quadratic time, O(n^2).
Here's a trick for something like this: for each nested for loop, add one to the exponent of n. If there were three loops, this algorithm would run in cubic time, O(n^3). If there is only one loop (no halving involved), it would be linear, O(n). If the input were halved each time (recursively or iteratively), it would be considered logarithmic time, O(log n), base 2.
Hope that helps.

How is Summation(n) Theta(n^2) according to its formula, but Theta(n) if we just look at it as a single for loop?

Our prof and various materials say Summation(n) = n(n+1)/2 and hence is Theta(n^2). But intuitively, we just need one loop to find the sum of the first n terms! So it has to be Theta(n). I'm wondering what I'm missing here.
All of these answers are misunderstanding the problem, just like the original question: the point is not to measure the runtime complexity of an algorithm for summing integers; it's about how to reason about the complexity of an algorithm which takes i steps during each pass, for i in 1..n. Consider insertion sort: on step i, inserting one member of the original list into an output list that is already i elements long takes i steps (on average). What is the complexity of insertion sort? It's the sum of all of those steps, or the sum of i for i in 1..n. That sum is n(n+1)/2, which has an n^2 in it, thus insertion sort is O(n^2).
The running time of this code is Θ(1) (assuming addition/subtraction and multiplication are constant-time operations):
result = n*(n + 1)/2 // This statement executes once
The running time of the following pseudocode, which is what you described, is indeed Θ(n):
result = 0
for i from 1 up to n:
    result = result + i // This statement executes exactly n times
Here is another way to compute it which has a running time of Θ(n²):
result = 0
for i from 1 up to n:
    for j from i up to n:
        result = result + 1 // This statement executes exactly n*(n + 1)/2 times
All three of those code blocks compute the natural numbers' sum from 1 to n.
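To make the contrast concrete, here is a sketch of my own with all three variants in TypeScript; they return the same value but at Θ(1), Θ(n), and Θ(n²) cost respectively:

function sumFormula(n: number): number {
    return (n * (n + 1)) / 2;                     // Θ(1): one calculation
}
function sumLoop(n: number): number {
    let result = 0;
    for (let i = 1; i <= n; i++) result += i;     // Θ(n): n additions
    return result;
}
function sumDoubleLoop(n: number): number {
    let result = 0;
    for (let i = 1; i <= n; i++)
        for (let j = i; j <= n; j++) result += 1; // Θ(n²): n(n+1)/2 additions
    return result;
}
console.log(sumFormula(100), sumLoop(100), sumDoubleLoop(100)); // 5050 5050 5050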
This Θ(n²) loop is probably the type you are being asked to analyse. Whenever you have a loop of the form:
for i from 1 up to n:
    for j from i up to n:
        // Some statements that run in constant time
You have a running time complexity of Θ(n²), because those statements execute exactly summation(n) times.
I think the problem is that you're incorrectly assuming that the summation formula has time complexity theta(n^2).
The formula has an n^2 in it, but it doesn't require a number of computations or amount of time proportional to n^2.
Summing everything up to n in a loop would be theta(n), as you say, because you would have to iterate through the loop n times.
However, calculating the result of the equation n(n+1)/2 would just be theta(1) as it's a single calculation that is performed once regardless of how big n is.
Summation(n) being n(n+1)/2 refers to the sum of the numbers from 1 to n. It is a mathematical formula and can be calculated without a loop, in O(1) time. If you iterate over an array to sum all its values, that is an O(n) algorithm.
