Big O Notation explanation [duplicate]

This question already has answers here:
Big O, how do you calculate/approximate it?
(24 answers)
Closed 1 year ago.
What is the Big O for the series below?
1 + 2 + 3 + 4 + ... + N
If I have to write code for the series, it will be like:
public void sum(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    print(sum);
}
Based on the above code, it's O(N).
Somewhere (in a Udemy course) I read that the order of the series is O(N^2). Why?

The code below has runtime O(N).
public void sum(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    print(sum);
}
However,
O(1+2+3+...+N) is O(N^2), since 1+2+3+...+N = N(N+1)/2, which is O(N^2).
I am guessing you are reading about the second statement, and you are confusing the two.

You are confusing the complexity of computing 1 + 2 + ... + N (by summing) with the result of computing it.
Consider the cost function f(N) = 1 + 2 + ... + N. That simplifies to N(N + 1)/2, which has the complexity O(N^2).
(I expect that you learned that sum in your high school maths course. They may even have taught you how to prove it ... by induction.)
On the other hand, the algorithm
public void sum(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    print(sum);
}
computes 1 + 2 + ... + N by performing N additions. When you do a full analysis of the algorithm, taking into account all of the computation, the cost function is in the complexity class O(N).
But I can also compute 1 + 2 + ... + N with a simpler algorithm that makes use of our maths knowledge:
public void sum(int n) {
    print(n * (n + 1) / 2);
}
This alternative algorithm's cost function is O(1)!
Lesson: don't confuse an algorithm's cost function with its result.
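To make the lesson concrete, here is a minimal sketch (class and method names are mine) showing that both approaches compute the same value while doing very different amounts of work:

```java
public class SumDemo {
    // O(n): performs n additions.
    static long sumLoop(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    // O(1): one multiplication, one addition, one division.
    static long sumFormula(long n) {
        return n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(1000));    // 500500
        System.out.println(sumFormula(1000)); // 500500
    }
}
```

Both print 500500; only the amount of work done to get there differs.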

Time Complexity- For Loop [duplicate]

This question already has answers here:
How can I find the time complexity of an algorithm?
(10 answers)
Closed 2 years ago.
I have a question about calculating time complexity with O-notation. We are given this code:
int a = 0;
for (int j = 0; j < n; j++) {
    for (int i = 0; i * i < j; i++) {
        a++;
    }
}
I think the solution is O(n^2): for the first for loop we need n, and for the second we also need n. But when I gave that answer on the exam, I got zero points for it.
... Also for this other code:
int g(int y) {
    if (y < 10) {
        return 1;
    } else {
        int a = 0;
        for (int i = 0; i < n; i++) {
            a++;
        }
        return a + g(2 * (y / 3) + 1) + g(2 * (y / 3) + 2) + g(2 * (y / 3) + 3);
    }
}
I think the solution is O(n): the variable assignments take constant time, the if statement is O(1), and it is dominated by the for loop, which is O(n).
.... Also, any advice or resources that explain how a program's running time is calculated? Thank you :)
For the first code, you have:
T(n) = 1 + sqrt(2) + ... + sqrt(n) = Theta(n * sqrt(n))
since i*i < j means i < sqrt(j). For the second, you can use the Akra-Bazzi theorem:
T(n) = T(2n/3+1) + T(2n/3+2) + T(2n/3+3) + n
and reach T(n) = 3T(2n/3) + n to use the master theorem (~O(n^2.7)).
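A quick empirical check of the Theta(n*sqrt(n)) claim for the first code (the counting harness below is my own sketch): the inner loop runs about sqrt(j) times for each j, so the total is roughly the sum of sqrt(j) for j < n, i.e. about (2/3) n^(3/2):

```java
public class NestedLoopCount {
    // Counts how many times a++ executes in the loop from the question.
    static long count(int n) {
        long a = 0;
        for (int j = 0; j < n; j++) {
            for (int i = 0; i * i < j; i++) {
                a++;
            }
        }
        return a;
    }

    public static void main(String[] args) {
        int n = 100_000;
        // Integral approximation of sum of sqrt(j) for j < n.
        double estimate = (2.0 / 3.0) * Math.pow(n, 1.5);
        System.out.println(count(n) / estimate); // close to 1
    }
}
```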

What would be the tight asymptotic runtime (Big Theta) for these algorithms?

Question 1
public void guessWhat1(int N) {
    for (int i = N; i > 0; i = i / 2) {
        for (int j = 0; j < i * 2; j += 1) {
            System.out.println("Hello World");
        }
    }
}
The first loop will run log(n) times.
The second loop will run log(n) times.
The upper bound is O(log^2(n)). What would be the Big Θ?
Question 2
public void guessWhat2(int N) {
    int i = 1, s = 1;
    while (s <= N) {
        i += 1;
        s = s + i;
    }
}
The upper bound for this is O(n). I am not quite sure about the Big Θ.
It would be great if someone could clarify these. Thank you!
Let's get clear on the definitions of the notations first.
Big O: it denotes an upper bound on the algorithm's running time.
Big Theta: it denotes a tight bound, i.e. the running time is bounded both above and below by that rate (it is not an "average" bound).
For your first question
public void guessWhat1(int N) {
    for (int i = N; i > 0; i = i / 2) {
        for (int j = 0; j < i * 2; j += 1) {
            System.out.println("Hello World");
        }
    }
}
For i=N, the inner loop runs 2N times; for i=N/2 it runs N times; for i=N/4 it runs N/2 times, ...
so the total complexity = O(2N + N + N/2 + N/4 + ... + 1)
which is equal to O(N(2 + 1 + 1/2 + 1/4 + ... + 1/N)) = O(N(3 + 1/2 + 1/4 + ... + 1/N)), and
N(3 + 1/2 + 1/4 + ... + 1/N)
= N(3 + 1 - (0.5)^log N) = O(N(4 - 1/N)) = O(N)
So the complexity is O(N). In Theta notation it is the same, Θ(N), since the loops take the same time in all cases.
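The geometric series above can be checked empirically by counting inner-loop iterations instead of printing (this harness is my own):

```java
public class GuessWhat1Count {
    // Same loop structure as guessWhat1, with a counter in place
    // of the println.
    static long countPrints(int N) {
        long count = 0;
        for (int i = N; i > 0; i = i / 2) {
            for (int j = 0; j < i * 2; j += 1) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 2N + N + N/2 + ... stays below 4N, so the total is linear in N.
        System.out.println(countPrints(1000)); // 3988, below 4 * 1000
    }
}
```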
For your second question
public void guessWhat2(int N) {
    int i = 1, s = 1;
    while (s <= N) {
        i += 1;
        s = s + i;
    }
}
The while loop takes O(sqrt(N)) iterations. As above, the Theta bound here is the same as the Big O bound: Θ(sqrt(N)).
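A quick way to see the sqrt(N) bound: s runs through the triangular numbers 1, 3, 6, 10, ..., so the loop stops once i(i+1)/2 exceeds N, after about sqrt(2N) iterations. A small counting sketch (the instrumentation is mine):

```java
public class GuessWhat2Count {
    // guessWhat2 with an iteration counter added.
    static long iterations(int N) {
        long count = 0;
        int i = 1, s = 1;
        while (s <= N) {
            i += 1;
            s = s + i;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations(1_000_000));      // 1413
        System.out.println(Math.sqrt(2.0 * 1_000_000)); // ~1414.2
    }
}
```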
The Theta notation differs from Big O when different inputs behave differently. Take insertion sort (https://en.wikipedia.org/wiki/Insertion_sort), where N is the size of the input array: if the array is already sorted, it takes linear time, but if it is reverse-sorted, it takes N^2 time to sort the array.
So for insertion sort, the time complexity is O(N^2).
For the best case it is Θ(N), and for the worst case it is Θ(N^2).

Finding Big O notation

I have the following code and I want to find its Big O. I wrote my answers as comments and would like to check my answer for each statement and the final result.
public static void findBigO(int[] x, int n)
{                                  //1 time
    for (int i = 0; i < n; i += 2) //n time
        x[i] += 2;                 //n+1 time
    int i = 1;                     //1 time
    while (i <= n/2)               //n time
    {
        x[i] += x[i+1];            //n+1 time
        i++;                       //n+1 time
    }
}                                  //0
//result: 1 + n + n+1 + n + n+1 + n+1 = O(n)
First of all: simple sums and increments are O(1); they are done in constant time, so x[i] += 2; is constant (array indexing is also O(1)), and the same is true for i++ and the like.
Second: the complexity of a function is measured relative to its input size, so strictly speaking this function's running time is only pseudo-polynomial:
since n is an integer, the loop takes about n/2 iterations, which is linear in the value of n but not in the size of n (4 bytes, or log(n) bits).
So this algorithm is in fact exponential in the size of n.
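To see the "exponential in the size of n" point concretely: adding one bit to n doubles the number of loop iterations. A counting sketch (the helper is my own):

```java
public class PseudoPolynomial {
    // Iteration count of the while loop from the question: about n/2.
    static long iterations(int n) {
        long count = 0;
        int i = 1;
        while (i <= n / 2) {
            count++;
            i++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Growing n by one bit of input size doubles the work:
        // linear in the value of n, exponential in the size of n.
        System.out.println(iterations(1 << 10)); // 512
        System.out.println(iterations(1 << 11)); // 1024
    }
}
```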
for (int i = 0; i < n; i += 2) // O(n)
    x[i] += 2;
int i = 1;
while (i <= n/2) // O(n/2)
{
    x[i] += x[i+1];
    i++;
}
O(n) + O(n/2) = O(n) in terms of Big O.
You have to watch out for nested loops that depend on n: if (as I first thought, thanks to the double usage of i) the loops had been nested, you would have had O(n) * O(n/2), which is O(n^2). In this case it is in fact about 1.5n + C operations; however, such constants are never used when describing a Big O class.
With Big O you push the values towards infinity: no matter how large a C you have, it will eventually be negligible, just as 1,000,000n and n end up in the same class; the constant factor eventually becomes irrelevant.
That being said, the constant factors on n do matter in practice, just not in a Big O context.

Mergesort recurrence formulas - reconciling reality with textbooks

I think this is more programming than math, so I posted here.
All the java algorithms in my question come from here.
We have an iterative and a recursive merge sort, both using the same merge function.
The professor teaching this lecture says that the critical operation for merge sort is the comparison.
So I came up with this formula for merge() based on compares:
> 3n + 2
3: worst-case compares through each loop iteration.
n: number of times the loop iterates.
2: the "test" compares.
The recursiveMergesort() has the base-case compare plus the recursive calls, for a total of:
> T(n/2) + 1 + 3n + 2 = T(n/2) + 3n + 3
The iterativeMergesort() simply has one loop that runs *n/2* times with a nested loop that runs n times. That leads me to this formula (but I think it's wrong):
> (n/2) * n + 3n + 2 = (n^2)/2 + 3n + 2
The books say that the recurrence formula for the recursive mergesort is
T(n) = 2T(n/2) + theta(n)
which solves with the master method to
theta(n log n)
Question 1:
How are the formulas I created simplified to
T(n/2) + theta(n)
Question 2:
Can I use any of these formulas (the ones I created, the textbook formula, or the time complexity *theta(nlogn)*) to predict the number of compares when running this particular algorithm on an array of size n?
Question 3:
For the bonus: Is my formula for the iterative method correct?
Merge:
private static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
    // DK: add two tests to first verify "mid" and "hi" are in range
    if (mid >= a.length) return;
    if (hi > a.length) hi = a.length;
    int i = lo, j = mid;
    for (int k = lo; k < hi; k++) {
        if (i == mid) aux[k] = a[j++];
        else if (j == hi) aux[k] = a[i++];
        else if (a[j] < a[i]) aux[k] = a[j++];
        else aux[k] = a[i++];
    }
    // copy back
    for (int k = lo; k < hi; k++)
        a[k] = aux[k];
}
Recursive Merge sort:
public static void recursiveMergesort(int[] a, int[] aux, int lo, int hi) {
    // base case
    if (hi - lo <= 1) return;
    // sort each half, recursively
    int mid = lo + (hi - lo) / 2;
    recursiveMergesort(a, aux, lo, mid);
    recursiveMergesort(a, aux, mid, hi);
    // merge back together
    merge(a, aux, lo, mid, hi);
}

public static void recursiveMergesort(int[] a) {
    int n = a.length;
    int[] aux = new int[n];
    recursiveMergesort(a, aux, 0, n);
}
Iterative merge sort:
public static void iterativeMergesort(int[] a) {
    int[] aux = new int[a.length];
    for (int blockSize = 1; blockSize < a.length; blockSize *= 2)
        for (int start = 0; start < a.length; start += 2 * blockSize)
            merge(a, aux, start, start + blockSize, start + 2 * blockSize);
}
Wow, you made it all the way down here. Thanks!
Question 1:
Where are you getting your facts? To obtain theta(n log n) complexity you need
T(n) = a T(n/b) + f(n), where a > 1, b > 1 and f(n) = cn + d, with c != 0
Note: there are additional constraints, dictated by the Master theorem.
You cannot derive it from a recurrence relation of the form T(n) = T(n/2) + 3n + 3. You probably forgot that the cost for an array of size n is the cost of the merge plus twice the cost of each half. So rather:
T(n) = 2T(n/2) + 3n + 3
Question 2:
You cannot use theta, Big O or Big Omega to predict the number of compares when running this particular algorithm on an array of size n, because they are asymptotic expressions. You need to solve the relation above, assuming it is correct.
For instance, T(n) = 2T(n/2) + 3n + 3 has the solution
T(n) = 3n log2(n) + 1/2 (c+6) n - 3, with c constant.
Still, this counts the comparisons of the abstract algorithm; the optimizations and constraints of a real program are not considered.
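A closed form of this shape can be sanity-checked numerically. Below I assume n is a power of two and parametrize the linear term by the base case: with T(1) = c, the coefficient of n works out to c + 3 (the exact coefficient depends on the base-case convention; the helper names are mine):

```java
public class RecurrenceCheck {
    // The recurrence T(n) = 2 T(n/2) + 3n + 3, with base case T(1) = c.
    static long T(long n, long c) {
        if (n == 1) return c;
        return 2 * T(n / 2, c) + 3 * n + 3;
    }

    // Closed form for powers of two: 3 n log2(n) + (c + 3) n - 3.
    static long closedForm(long n, long c) {
        long log2 = 63 - Long.numberOfLeadingZeros(n); // exact for powers of two
        return 3 * n * log2 + (c + 3) * n - 3;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= (1 << 20); n *= 2) {
            if (T(n, 1) != closedForm(n, 1)) {
                throw new AssertionError("mismatch at n = " + n);
            }
        }
        System.out.println("closed form matches the recurrence");
    }
}
```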
Question 3:
Nope

Complexity of algorithm

The complexity given for the following problem is O(n). Shouldn't it be O(n^2)? The outer loop is O(n) and the inner loop is also O(n), therefore n*n = O(n^2).
The answer sheet for this question states that the answer is O(n). How is that possible?
public static void q1d(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;
        for (int j = 0; j < n; j++) {
            count++;
        }
    }
}
The complexity given for the following problem is O(n^2); how do you obtain that? Can someone please elaborate?
public static void q1E(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;
        for (int j = 0; j < n/2; j++) {
            count++;
        }
    }
}
Thanks
The first example is O(n^2), so it seems they've made a mistake. To calculate (informally) the second example, we can do n * (n/2) = (n^2)/2 = O(n^2). If this doesn't make sense, you need to go and brush up on what O(n^k) means.
The complexity of both pieces of code is O(n*n).
FIRST
The outer loop runs n times, and on each outer iteration the inner loop runs n times,
so
total = n + n * n
which is O(n*n).
SECOND
The outer loop runs n times, and on each outer iteration the inner loop runs n/2 times,
so
total = n + n * (n/2)
which is also O(n*n).
The first case is definitely O(n^2).
The second is O(n^2) as well, because you omit constant factors when calculating Big O.
Your answer sheet is wrong, the first algorithm is clearly O(n^2).
Big-Oh notation describes an asymptotic upper bound, and when calculating the Big-Oh value we generally ignore multiplications/divisions by constants.
That being said, your second example is also O(n^2) because, although the inner loop runs "only" n/2 times, n is still the bounding factor. In practice the second algorithm performs about half as many operations, but Big-Oh is a maximal bounding measurement, so the exact number of operations is ignored in favor of how the algorithm behaves as n approaches infinity.
Both are O(n^2). Your answer is wrong. Or you may have written the question incorrectly.
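Both counts are easy to verify by returning the counter (changing the signatures from void to int is my modification):

```java
public class CountOps {
    static int q1d(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            count++;
            for (int j = 0; j < n; j++) {
                count++;
            }
        }
        return count; // n + n*n: quadratic, not linear
    }

    static int q1E(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            count++;
            for (int j = 0; j < n / 2; j++) {
                count++;
            }
        }
        return count; // n + n*(n/2): still quadratic
    }

    public static void main(String[] args) {
        System.out.println(q1d(100)); // 10100 = 100 + 100*100
        System.out.println(q1E(100)); // 5100  = 100 + 100*50
    }
}
```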
