Calculating Big-O from a given algorithm

I'm currently studying for my exams, where one of the questions will be calculating the big-O of a given algorithm. One of the questions from last year goes like this:
T_compute(n) ∈ O(n)
Algorithm:
void func2(const int n) {
    for (int i = 1; i <= n; i++)
        compute(i);
}
What is the time complexity of func2? T_func2(n) ∈
Now the solution says that the time complexity is
T_func2(n) ∈ O(n/2(n-1))
Can anyone explain to me how they got to this solution?

Since we know the complexity of compute(n) to be O(n), we can, without loss of generality, analyze the complexity of func2(n) under the assumption that compute(i) performs exactly i units of work, i.e.
T_func2(n) ∝ sum_{i = 1 to n} compute(i)
= sum_{i = 1 to n} i
= n(n+1)/2
Where in the last step we've used the closed form for the sum of the first n integers.
Now, we could say that T_func2 ∈ O(n(n+1)/2) (I will assume that n(n-1) is a typo on your behalf), but this is just O(n^2).


Precise Θ notation bound for the running time as a function

I'm studying for an exam, and I've come across the following question:
Provide a precise (Θ notation) bound for the running time as a
function of n for the following function
for i = 1 to n {
    j = i
    while j < n {
        j = j + 4
    }
}
I believe the answer would be O(n^2). I'm certainly an amateur at the subject, but my reasoning is that the outer loop takes O(n) and the inner loop takes O(n/4), resulting in O(n^2/4); since O(n^2) dominates, it simplifies to O(n^2).
Any clarification would be appreciated.
If you proceed using sigma notation and obtain T(n) exactly equal to some function, then you get Big Theta.
If T(n) is less than or equal to it, then it's Big O.
If T(n) is greater than or equal to it, then it's Big Omega.

Figuring out apple time

I am fairly new to big-O and I'm trying to figure out what the big-O running time is for this small section of code. I know the usual rules for loops, but does the whole array thing change anything? I'm fairly confused, so any bit of input would be great. Thanks in advance!
public int apple(int n)
{
    int apple = 0;
    int a = apple + n;
    return a;
}
When determining algorithmic complexity in Big-O notation, the most dominant term determines the complexity. Note, though, that the snippet above contains no loops at all: it performs a fixed number of assignments regardless of n, so its detailed cost is a small constant and its complexity is O(1).
If the detailed cost of an arbitrary algorithm were 2 + 5n + n*n, the complexity would be O(n^2), because n*n is the dominant term for large n (it already exceeds 5n once n > 5).

Selection Sort Recurrence Relation

Up front, this is a homework question, but I am having a difficult time understanding recurrence relations. I've scoured the internet for examples and they are very vague to me. I understand that recurrence relations for recursive algorithms don't have one set way of being handled, but I am lost as to how to approach them. Here's the algorithm I have to work with:
void selectionSort(int[] array) {
    sort(array, 0);
}

void sort(int[] array, int i) {
    if (i < array.length - 1)
    {
        int j = smallest(array, i);   // T(n)
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
        sort(array, i + 1);           // T(n)
    }
}

int smallest(int[] array, int j)      // T(n - k)
{
    if (j == array.length - 1)
        return array.length - 1;
    int k = smallest(array, j + 1);
    return array[j] < array[k] ? j : k;
}
So from what I understand, this is what I'm coming up with: T(n) = T(n-1) + cn + c.
The T(n-1) represents the recursive call in sort, and the cn represents the call to smallest, which should decrease as n decreases since it's only called on the part of the array that remains each time. The constant multiplied by n is the time to run the remaining code in smallest, and the additional constant is the time to run the remaining code in sort. Is this right? Am I completely off? Am I not explaining it correctly? Also, the next step is to build a recursion tree out of this, but I don't see how my equation fits the form T(n) = aT(n/b) + c, which I understand to be the form needed for the tree. And I don't see how my recurrence relation would get to n^2 if it is correct. This is my first post, so I apologize if I did something incorrectly here. Thanks for the help!
The easiest way to compute the time complexity is to model the time complexity of each function with a separate recurrence relation.
We can model the time complexity of the function smallest with the recurrence relation S(n) = S(n-1)+O(1), S(1)=O(1). This obviously solves to S(n)=O(n).
We can model the time complexity of the sort function with T(n) = T(n-1) + S(n) + O(1), T(1)=O(1). The S(n) term comes in because we call smallest within the function sort. Because we know that S(n)=O(n) we can write T(n) = T(n-1) + O(n), and writing out the recurrence we get T(n)=O(n)+O(n-1)+...+O(1)=O(n^2).
So the total running time is O(n^2), as expected.
In the selection sort algorithm, the outer loop runs n - 1 times (where n is the length of the array), so n - 1 passes are made, and on each pass the current element is compared with the remaining elements, giving up to n - 1 comparisons:
T(n) = T(n-1) + (n-1)
which can be shown to be O(n^2) by solving the recurrence.

Why is the Big-O complexity of this algorithm O(n^2)?

I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
During every iteration you increment i and decrement j which is equivalent to just incrementing i by 2. Therefore, total number of iterations is n^2 / 2 and that is still O(n^2).
big-O complexity ignores coefficients. For example: O(n), O(2n), and O(1000n) are all the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.
In your situation, you're essentially incrementing your loop counter by 2 each time through your loop (since j-- has the same effect as i++). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.
You will have exactly n*n/2 loop iterations (or (n*n-1)/2 if n is odd).
In the big O notation we have O((n*n-1)/2) = O(n*n/2) = O(n*n) because constant factors "don't count".
Your algorithm is equivalent to
for (i = 1; i < n * n; i += 2)
    ...
which is O(n^2/2), and that is the same as O(n^2) because big-O complexity does not care about constant factors.
Let m be the number of iterations taken. The loop stops when the incremented i catches up with the decremented j, i.e. when
1 + m = n^2 - m
which gives
m = (n^2 - 1)/2
In Big-O notation, this implies a complexity of O(n^2).
Yes, this algorithm is O(n^2).
To see where it sits, here is a table of common complexity classes:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a set of algorithms, and each set is contained in the ones below it: an algorithm in O(1) is also in O(n), in O(n^2), and so on, but not the reverse. Your algorithm executes about n*n/2 statements. That count grows faster than n and n log n, but is within a constant factor of n², so the smallest class in this table that contains it is O(n²):
O(n) ⊂ O(n log n) ⊂ O(n²/2) = O(n²)
For example, for n = 100 the loop body runs about 5000 times: compare 100 for n, a few hundred for n·log n, 5000 for n²/2, and 10000 for n².
Sorry for my English.
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
big-O gives you information about upper bounds.
I believe you are really trying to ask why the complexity is Θ(n^2) or Ω(n^2), but if you're just trying to understand what big-O is, you need to understand first and foremost that it gives upper bounds on functions.

What is the big-O complexity of this code

What's the big-O notation of this code?
for (int i = 1; i < 2 * n; i++)
    x = x + 1;
My answer is O(2n). Is this correct?
Consider this an A algorithm
for (int i = 1; i < 2 * n; i++)
    x = x + 1;
Algorithm A’s run-time: T(n) = 2n-1
Eliminate lower-order terms: 2n-1 -> 2n
Drop all constant coefficients: 2n -> n
So the algorithm A’s time complexity is O(n).
It is O(n). Big O describes how the running time of the application grows, and in this case the growth is linear, so it is O(n).
The big-O run time of this is O(2n), as you guessed, but that is usually simplified to O(n).