What's the big-O notation of this code?
for (int i = 1; i < 2*n; i++)
    x = x + 1;
My answer: O(2*n). Is this correct?
Consider this algorithm A:
for (int i = 1; i < 2*n; i++)
    x = x + 1;
Algorithm A's run-time: T(n) = 2n - 1
Eliminate lower-order terms: 2n - 1 -> 2n
Drop all constant coefficients: 2n -> n
So algorithm A's time complexity is O(n).
It is O(n). Big-O is meant to describe how the running time grows with the input size, and in this case the growth is linear, so it is O(n).
The big-O run time of this is O(2n), as you guessed, but that is conventionally simplified to O(n).
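To make the counting concrete, here is a minimal standalone C sketch (not from the question; it just instruments the loop above) that confirms T(n) = 2n - 1:

#include <stdio.h>

int main(void) {
    /* Count the iterations of the loop from the question:
     * i runs from 1 to 2n - 1, so T(n) = 2n - 1, which is O(n). */
    for (int n = 1; n <= 1000; n *= 10) {
        int x = 0;
        for (int i = 1; i < 2 * n; i++)
            x = x + 1;
        printf("n=%5d  x=%5d  2n-1=%5d\n", n, x, 2 * n - 1);
    }
    return 0;
}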
I'm currently studying for my exams, where one of the questions will be calculating the big-O of a given algorithm. One of the questions from last year goes like this:
T_compute(n) ∈ O(n)
Algorithm:
void func2(const int n) {
    for (int i = 1; i <= n; i++)
        compute(i);
}
What is the time complexity of func2? T_func2(n) ∈
Now the solution says that the time complexity is
T_func2(n) ∈ O(n/2(n-1))
Can anyone explain to me how they got to this solution?
Since we know the complexity of compute(n) to be O(n), we can, without loss of generality, analyze the complexity of func2(n) under the assumption that compute(i) takes exactly i steps, i.e.
T_func2(n) ∝ sum_{i = 1 to n} compute(i)
= sum_{i = 1 to n} i
= n(n+1)/2
Where in the last step we've used the standard formula for the sum of the first n integers.
Now, we could say that T_func2 ∈ O(n(n+1)/2) (I will assume that the n(n-1) in the quoted solution is a typo), but this is just O(n^2).
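To see the quadratic growth concretely, here is a standalone C sketch; the body of compute is a hypothetical stand-in that simply performs i units of work, matching the assumption above:

#include <stdio.h>

static long long steps;

/* Hypothetical stand-in for compute(i): we only assume it does
 * on the order of i steps, so count i units of work per call. */
static void compute(int i) {
    for (int k = 0; k < i; k++)
        steps++;
}

void func2(const int n) {
    for (int i = 1; i <= n; i++)
        compute(i);
}

int main(void) {
    for (int n = 10; n <= 1000; n *= 10) {
        steps = 0;
        func2(n);
        /* Expect exactly n(n+1)/2 units of work: quadratic growth. */
        printf("n=%4d  steps=%8lld  n(n+1)/2=%8lld\n",
               n, steps, (long long)n * (n + 1) / 2);
    }
    return 0;
}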
sum = 0;
for (int i = 0; i < N; i++)
    for (int j = i; j >= 0; j--)
        sum++;
I found the big-O bound to be O(n^2), but I am not sure how to find the theta bound for it. Can someone help?
You can make use of sigma notation to derive asymptotic bounds for your algorithm:

T(n) = sum_{i = 0 to n-1} sum_{j = 0 to i} 1 = sum_{i = 0 to n-1} (i + 1) = n(n+1)/2

Since we can show that T(n) is in O(n^2) (upper asymptotic bound) as well as in Ω(n^2) (lower asymptotic bound), it follows that T(n) is in Θ(n^2) ("sandwiched" between the upper and lower asymptotic bounds).

In case it's unclear: note that the inner and outer sums in the expression above describe your inner and outer for loops, respectively.
I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
During every iteration you increment i and decrement j, which is equivalent to incrementing i by 2. Therefore, the total number of iterations is about n^2 / 2, and that is still O(n^2).
Big-O complexity ignores coefficients. For example: O(n), O(2n), and O(1000n) are all the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.
In your situation, you're essentially incrementing your loop counter by 2 each time through your loop (since j-- has the same effect as i++). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.
You will have exactly n*n/2 loop iterations (or (n*n-1)/2 if n is odd).
In big-O notation, O((n*n-1)/2) = O(n*n/2) = O(n*n), because constant factors "don't count".
Your algorithm is equivalent to

while ((i += 2) < n * n)
    ...

which is O(n^2/2), and that is the same as O(n^2), because big-O complexity does not care about constants.
Let m be the number of iterations taken. The loop stops when the incremented i meets the decremented j, i.e.

1 + m = n^2 - m

which gives

m = (n^2 - 1)/2 ≈ n^2/2

In big-O notation, this implies a complexity of O(n^2).
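A quick empirical check in C (a standalone sketch, not part of any answer above) counts the iterations for a few even values of n and matches them against the n*n/2 prediction:

#include <stdio.h>

int main(void) {
    /* Count iterations of the loop from the question for a few
     * (even) values of n and compare against the n*n/2 prediction. */
    for (long long n = 2; n <= 64; n *= 2) {
        long long i = 1, j = n * n, iterations = 0;
        while (i++ < j--)
            iterations++;
        printf("n=%3lld  iterations=%7lld  n*n/2=%7lld\n",
               n, iterations, n * n / 2);
    }
    return 0;
}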
Yes, this algorithm is O(n^2).
To calculate the complexity, we can use a table of common complexity classes:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a class of algorithms. An algorithm that is in O(1) is also in O(n), O(n^2), and so on, but not the reverse. Your algorithm executes n*n/2 statements, and in terms of growth,

n < n·log n < n·n/2 ≤ n²

So the smallest class in the table that contains your algorithm is O(n²), because n and n·log n grow too slowly to bound it.

For example, for n = 100 the loop executes sum = 5000 statements, and indeed 100 (n) < 200 (n·log n, with log base 10) < 5000 (n·n/2) < 10000 (n²).
I'm sorry for my English.
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
big-O gives you information about upper bounds.
I believe you are trying to ask why the complexity is Θ(n^2) or Ω(n^2), but if you're just trying to understand what big-O is, you really need to understand that it gives upper bounds on functions first and foremost.
For example, if we have
public void search() {
    for (int i = 0; i < n; i++) {
        .............
    }
    for (int j = 0; j < n; j++) {
        .............
    }
}
Would search() be O(n) + O(n) = O(2n)?
Yes, the time is linear in n, but because constant factors are not interesting in O-notation, you can still write O(n). Even if you had 10 consecutive for-loops from 1 to n, it would still be O(n). Constant factors are dropped.
Edit (answer to your comment):
If i, n and k are independent, yes, it would be O(k + (n-i)).
But you could simplify it if you for example know that k = O(n). (e.g. k ≈ n/2). Then you could write it as O(n - i) (because in O(2n - i), the 2 is dropped).
If i and k are both linearly dependent on n, it would even be O(n).
Yes, this is O(2n), which is equivalent to saying it is O(n); the coefficient doesn't really matter.
Big-O notation expresses time or space complexity. It does not measure time or space directly, it measures how the change in number of elements impacts time or space. While two loops are certainly slower than one, they both scale exactly the same: in direct proportion to n. That is what O(n) means.
(Slight simplification here, it does not account for why O(n^2 + n) = O(n^2), for example, but it should suffice for the matter at hand.)
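A tiny C sketch (standalone, not the original search() method) makes the point: the step count is 2n, but doubling n doubles the work, so the growth is linear:

#include <stdio.h>

int main(void) {
    /* Two consecutive loops: 2n steps in total, but the ratio
     * steps/n stays constant, i.e. the growth is linear: O(n). */
    for (long long n = 1000; n <= 8000; n *= 2) {
        long long steps = 0;
        for (long long i = 0; i < n; i++) steps++;  /* first loop  */
        for (long long j = 0; j < n; j++) steps++;  /* second loop */
        printf("n=%5lld  steps=%6lld  steps/n=%lld\n", n, steps, steps / n);
    }
    return 0;
}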
What is the worst-case time complexity T(n)?
I'm reading this book about algorithms, and as an example I want to work out the T(n) for something like the selection sort algorithm. Say I'm dealing with selectionSort(A[0..n-1]):
// sorts a given array by selection sort
// input: an array A[0..n-1] of orderable elements
// output: array A[0..n-1] sorted in ascending order

Let me write the pseudocode:

for i <-- 0 to n-2 do
    min <-- i
    for j <-- i+1 to n-1 do
        if A[j] < A[min] then min <-- j
    swap A[i] and A[min]
--------I will write it in C# too---------------
private int[] a = new int[100];

// number of elements in array
private int x;

// Selection Sort Algorithm
public void sortArray()
{
    int i, j;
    int min, temp;

    for (i = 0; i < x - 1; i++)
    {
        min = i;
        for (j = i + 1; j < x; j++)
        {
            if (a[j] < a[min])
            {
                min = j;
            }
        }
        temp = a[i];
        a[i] = a[min];
        a[min] = temp;
    }
}
==================
Now, how do I get the T(n), also known as the worst-case time complexity?
That would be O(n^2).
The reason is you have a single for loop nested in another for loop. The run time for the inner for loop, O(n), happens for each iteration of the outer for loop, which again is O(n). The reason each of these individually is O(n) is that they take a linear amount of time given the size of the input: the larger the input, the longer it takes, on a linear scale, n.
To work out the math, which in this case is trivial, just multiply the complexity of the inner loop by the complexity of the outer loop: n * n = n^2. Remember, for each n in the outer loop, you must again do n for the inner. To clarify: n times for each n.
O(n * n).
O(n^2)
By the way, you shouldn't mix up complexity (denoted by big-O) and the T function. The T function is the number of steps the algorithm has to go through for a given input.
So, the value of T(n) is the actual number of steps, whereas O(something) denotes a complexity. By the conventional abuse of notation, T(n) = O( f(n) ) means that the function T(n) is of at most the same complexity as another function f(n), which will usually be the simplest possible function of its complexity class.
This is useful because it allows us to focus on the big picture: We can now easily compare two algorithms that may have very different-looking T(n) functions by looking at how they perform "in the long run".
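To make the n * n intuition exact: the inner loop performs (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons in total. Here is a C sketch (mirroring the C# version above, with a counter added; the instrumented function name is my own) that verifies this count:

#include <stdio.h>

/* Instrumented selection sort: counts key comparisons to
 * exhibit T(n) = n(n-1)/2 exactly, regardless of the input. */
static long long selection_sort_comparisons(int *a, int n) {
    long long comparisons = 0;
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++) {
            comparisons++;              /* one comparison per inner step */
            if (a[j] < a[min])
                min = j;
        }
        int temp = a[i]; a[i] = a[min]; a[min] = temp;
    }
    return comparisons;
}

int main(void) {
    int buf[1000];
    for (int n = 10; n <= 1000; n *= 10) {
        for (int k = 0; k < n; k++) buf[k] = n - k;   /* descending input */
        printf("n=%4d  comparisons=%7lld  n(n-1)/2=%7lld\n",
               n, selection_sort_comparisons(buf, n),
               (long long)n * (n - 1) / 2);
    }
    return 0;
}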
@sara jons

Regarding the slide set that you've referenced, and the algorithm therein: the complexity is being measured for each primitive/atomic operation in the for loop:
for (j = 0; j < n; j++)
{
    // ...
}
The slides rate this loop as 2n+2 for the following reasons:
The initial set of j=0 (+1 op)
The comparison of j < n (n ops)
The increment of j++ (n ops)
The final condition to check if j < n (+1 op)
Secondly, the comparison within the for loop:
if (STudID == A[j])
    return true;
This is rated as n ops. Thus, if you add up +1 op, n ops, n ops, +1 op, and n ops, the result is 3n + 2. So T(n) = 3n + 2.
Recognize that T(n) is not the same as O(n).
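As a sanity check, here is a small standalone C sketch (my own, with the slide's array comparison reduced to a counted step) that tallies the operations exactly the way the slides do:

#include <stdio.h>

int main(void) {
    /* Tally the slide's "atomic" operations for one run of the loop:
     * 1 init of j, n+1 evaluations of j < n, n increments of j,
     * and n inner comparisons -- 3n + 2 in total. */
    int n = 10;
    long long ops = 0;
    int j;
    ops++;                  /* j = 0           : +1 op    */
    for (j = 0; ; ) {
        ops++;              /* j < n           : n+1 ops  */
        if (!(j < n)) break;
        ops++;              /* STudID == A[j]  : n ops    */
        j++; ops++;         /* j++             : n ops    */
    }
    printf("n=%d  ops=%lld  3n+2=%d\n", n, ops, 3 * n + 2);
    return 0;
}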
Another doctoral-comp flashback here.
First, the T function is simply the amount of time (usually in some number of steps, about which more below) an algorithm takes to perform a task. What a "step" is, is somewhat defined by the use; for example, it's conventional to count the number of comparisons in sorting algorithms, but the number of elements searched in search algorithms.
When we talk about the worst-case time of an algorithm, we usually express that with "big-O notation". Thus, for example, you hear that bubble sort takes O(n²) time. When we use big O notation, what we're really saying is that the growth of some function -- in this case T -- is no faster than the growth of some other function times a constant. That is
T(n) = O(n²)
means for any n, no matter how large, there is a constant k for which T(n) ≤ kn². A point of some confusion here is that we're using the "=" sign in an overloaded fashion: it doesn't mean the two are equal in the numerical sense, just that we are saying that T(n) is bounded by kn².
In the example in your extended question, it looks like they're counting the number of comparisons in the for loop and in the test; it would help to be able to see the context and the question they're answering. In any case, though, it shows why we like big-O notation: W(n) here is O(n). (Proof: there exists a constant k, namely 5, for which 3n + 2 ≤ kn for all n ≥ 1; it follows by the definition of O(n).)
If you want to learn more about this, consult any good algorithms text, e.g., Introduction to Algorithms by Cormen et al.
Write pseudocode to search, insert, and remove student information from the hash table. Calculate the best and the worst case time complexities.
3n + 2 is the correct answer as far as the loop is concerned. At each step of the loop, 3 atomic operations are done: j++ is actually two operations, not one, and the comparison j < n is the third.