I'm studying for an exam, and I've come across the following question:
Provide a precise (Θ notation) bound for the running time as a
function of n for the following function
for i = 1 to n {
    j = i
    while j < n {
        j = j + 4
    }
}
I believe the answer would be O(n^2), although I'm certainly an amateur at the subject. My reasoning is that the outer loop takes O(n) and the inner loop takes O(n/4), resulting in O(n^2/4); as O(n^2) is the dominating part, it simplifies to O(n^2).
Any clarification would be appreciated.
If you proceed using Sigma notation and obtain T(n) equal to some expression exactly, then you get Big Theta.
If you can only show that T(n) is less than or equal to the expression, then it's Big O.
If you can only show that T(n) is greater than or equal to it, then it's Big Omega.
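For the loop in the question, a sketch of that summation (my own working, counting one unit of work per pass of the inner loop) would be:

T(n) = sum over i = 1..n of ceil((n - i) / 4)
     ≈ (1/4) * (0 + 1 + 2 + ... + (n - 1))
     = n * (n - 1) / 8

which is bounded above and below by constant multiples of n^2, hence Theta(n^2).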
From my textbook we have an algorithm that checks if there is a duplicate in an array, once the duplicate in the array is found, the method ends (this is the naive approach).
I am asked to find the Big Ω complexity in the worst case.
In this sort of example, the worst case occurs when the only duplicates are next to each other at the very end of the array, or when there is no duplicate in the array at all.
I am more confused about the running time of each line.
For example, in the code I posted, the outer for loop will run (n-1) times, where we have 9 inputs.
How many times, in terms of n, would the inner for loop run in the worst case? It would check 36 times in this case.
I know that the worst case in Big O complexity would be (n^2).
I know Big O means that f(n) must at most reach (<=) g(n).
I know that Big Ω means that f(n) must at least reach (>=) g(n).
public class test2 {
    public static void main(String[] args) {
        int[] arrayint = new int[]{5, 2, 3, 7, 1, 9, 6, 4, 4};
        for (int i = 0; i < arrayint.length - 1; i++) {        // pick each element in turn
            for (int j = i + 1; j < arrayint.length; j++) {    // compare it against every later element
                if (arrayint[i] == arrayint[j]) {
                    System.out.println("duplicate found at " + i + " and " + j);
                    System.exit(0);                            // stop as soon as a duplicate is found
                }
            }
        }
    }
}
I know that the worst case in Big O complexity would be (n^2). I know
Big O means that f(n) must at most reach (<=) g(n). I know that Big Ω
means that f(n) must at least reach (>=) g(n).
Actually, big O means that f(n) (for large enough inputs) must not exceed c1 * g(n), for some constant c1.
Similarly, for big Omega (Ω), f(n) must be at least c2 * g(n) (again, for large enough inputs).
This means, it's fine to have the same g(n) for both big O and big Omega (Ω), because you can have:
c2 * g(n) <= f(n) <= c1 * g(n)
When this happens we say f(n) is in Θ(g(n)), which means the asymptotic bound is tight. You can find more information in this thread about big Theta (Θ).
In your case, the worst case is when there are no duplicates in the array. In that case the running time is both O(n^2) and Ω(n^2), which gives a tight bound of Θ(n^2).
As a side note, this problem is called the element distinctness problem, which is interesting because the problem itself (not the algorithm) has several asymptotic bounds, based on the computation model we use.
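To illustrate that side note, a different algorithm gives a different bound for the same problem. Here is a sketch of my own (not from the original post) of a hash-based check that runs in expected O(n) time, which shows the Θ(n^2) above is a property of the naive algorithm, not of the problem:

import java.util.HashSet;

public static boolean hasDuplicate(int[] a) {
    HashSet<Integer> seen = new HashSet<>();
    for (int value : a) {
        if (!seen.add(value)) {   // add() returns false if value was already in the set
            return true;          // duplicate found
        }
    }
    return false;                 // all elements are distinct
}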
When i=0, then j will vary from 1 to 8. The inner loop executes 8 times.
When i=1, the inner loop executes 7 times (j = 2 to 8)
In general, for a given i, the inner loop executes n-i-1 times.
If you sum all the iterations, you get (n*(n-1))/2 iterations. So for 9 items, that's (9*(9-1))/2, or 36 times.
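If it helps, here is a tiny counter (a hypothetical helper of my own, mirroring the two loops above) that confirms the formula:

public static int countComparisons(int n) {
    int count = 0;
    for (int i = 0; i < n - 1; i++) {
        for (int j = i + 1; j < n; j++) {
            count++;              // one comparison per inner-loop iteration
        }
    }
    return count;                 // equals n*(n-1)/2
}

For n = 9 this returns 36, matching (9*8)/2.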
I am fairly new to big-O and I'm trying to figure out what the big-O running time is for this small section of code. I know the usual rules for loops, but does the whole array thing change anything? I'm fairly confused, so any bit of input would be great. Thanks in advance!
public class apple {
    int a;

    public apple(int n) {
        int apple = 0;    // constant-time assignment
        a = apple + n;    // constant-time arithmetic and assignment
    }
}
When determining algorithmic complexity in Big-O notation, the most dominant term determines the complexity. Even if the complexity of an algorithm in detail could be said to be 1 + n + n + 1, constant factors and lower-order terms are dropped, so it is simply O(n).
If the detailed cost of an arbitrary algorithm were 2 + 5n + n*n, then the complexity would be O(n^2), because the n*n term dominates 5n once n > 5.
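To make that concrete, here is one worked bound of my own (not from the original answer): for all n >= 5,

2 + 5n + n^2 <= n^2 + n^2 + n^2 = 3*n^2

so the constant c = 3 witnesses 2 + 5n + n^2 = O(n^2) directly from the definition.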
I am trying to solve the following question on computational complexity:
Compute the computational complexity of the following algorithm and
write down its complexity in Big O, Big Omega and Theta
for i = 1 to m {
    x(i) = 0;
    for j = 1 to n {
        x(i) = x(i) + A(i,j) * b(j)
    }
}
where A is mxn and b is nx1.
I ended up with O(mn^2) for Big O, Ω(1) for Big Omega, and Θ(mn^2) for Theta.
Assuming that the following statement runs in constant time:
x(i) = x(i) + A(i,j) * b(j)
this is thus done in O(1), and does not depend on the values for i and j. Since you iterate over this statement in the inner for loop, exactly n times, you can say that the following code runs in O(n):
x(i) = 0;
for j = 1 to n {
    meth1
}
(assuming the assignment is done in constant time as well). Again it does not depend on the exact value for i. Finally we take the outer loop into account:
for i = 1 to m {
    meth2
}
The block meth2 is repeated exactly m times, so a tight upper bound for the time complexity is O(m n).
Since there are no conditional statements, no recursion, and the structure of the data A, b and x does not change the execution of the program, the algorithm is also big Omega(m n), and therefore big Theta(m n).
Of course you can over-estimate big O and under-estimate big Omega: for every algorithm you can say it is Ω(1), and for some you can say it is O(2^n), but the point is that you do not gain much with that.
Recall that f = Theta(g) if and only if f=O(g) and f=Omega(g).
The matrix-vector product can be computed in Theta(mn) time (assuming naive implementation) and the sum of vectors in O(m), so the total running time is Theta(mn). From here it follows that the time is also O(mn) and Omega(mn).
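For reference, here is a sketch of the pseudocode as Java (my own translation, assuming A is an m-by-n matrix and b has length n); the two nested loops make the Θ(mn) cost easy to see:

public static double[] multiply(double[][] A, double[] b) {
    int m = A.length;
    int n = b.length;
    double[] x = new double[m];
    for (int i = 0; i < m; i++) {          // outer loop: m iterations
        x[i] = 0;
        for (int j = 0; j < n; j++) {      // inner loop: n iterations each
            x[i] += A[i][j] * b[j];        // constant-time work
        }
    }
    return x;                              // in total, m*n constant-time steps
}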
How can I represent its complexity with Big-O notation? I am a little bit confused since the starting point of the second for loop changes according to the index of the outer loop. Is it still O(n^2), or less complex? Thanks in advance.
for (int k = 0; k < arr.length; k++) {
    for (int m = k; m < arr.length; m++) {
        // do something
    }
}
Your estimate comes from the arithmetic progression formula: 1 + 2 + ... + n = n*(n+1)/2, and thus is O(n^2). Why is your case a progression? Because your loops perform n + (n-1) + ... + 1 iterations in total.
If you add all the iterations of the second loop, you get 1+2+3+...+n, which is equal to n*(n+1)/2 (where n is the array length). That is n^2/2 + n/2. As you may already know, the relevant term in big-O notation is the one with the biggest power, and coefficients are not relevant. So your complexity is still O(n^2).
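To make the "drop the coefficient" step explicit, here is one possible sandwich bound (my own, not from the original answer): for all n >= 1,

(1/2) * n^2 <= n*(n+1)/2 <= n^2

so with c2 = 1/2 and c1 = 1 the iteration count is Θ(n^2), and in particular O(n^2).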
Well, the runtime is roughly half of the full n^2 loop,
but in big-O notation it is still O(n^2),
because any constant factor per operation/cycle is treated as O(1),
so O((n^2)/2) -> O((n^2)/c) -> O(n^2).
Unofficially, many people (including me) use O((n^2)/2) for their own purposes, since it is more intuitive and easier to compare ... closer to the actual cycle count / runtime.
Hope it helps.
I'm taking a Data Structures and Algorithms course and I'm stuck on this recurrence:
T(n) = log(n) * T(log(n)) + n
Obviously this can't be handled with the Master Theorem, so I was wondering if anybody has any ideas for solving this recurrence. I'm pretty sure it should be solved with a change of variable, like considering n to be 2^m, but I couldn't manage to find a good substitution.
The answer is Theta(n). To prove something is Theta(n), you have to show it is Omega(n) and O(n). Omega(n) in this case is obvious because T(n)>=n. To show that T(n)=O(n), first
Pick a large finite value N such that log(n)^2 < n/100 for all n>N. This is possible because log(n)^2=o(n).
Pick a constant C>100 such that T(n)<Cn for all n<=N. This is possible due to the fact that N is finite.
We will show inductively that T(n) < Cn for all n > N. Since log(n) < n, we know T(log(n)) < C log(n): either by the choice of C (if log(n) <= N) or by the induction hypothesis (if log(n) > N). Therefore:
T(n) < n + log(n) * C * log(n)
     = n + C * log(n)^2
     < n + (C/100) * n
     = C * (1/100 + 1/C) * n
     < (C/50) * n
     < C * n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
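As an informal illustration (my own sketch, not part of the proof), you can evaluate the recurrence numerically if you assume a base case, say T(n) = 1 for n <= 2, and use floor(log2 n):

static long T(long n) {
    if (n <= 2) return 1;                              // assumed base case
    long logn = 63 - Long.numberOfLeadingZeros(n);     // floor(log2 n), exact for n >= 1
    return logn * T(logn) + n;
}

With that base case, T(1000000000) comes out to 1000001537, only about 1500 above n, which is consistent with T(n) = n + o(n).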
This is by no means an official proof, but I think it goes like this.
The key is the + n part. Because of that term, T(n) is bounded below by n, i.e. T(n) = Ω(n). So let's assume that T(n) = O(n) and have a go at that.
Substituting into the original relation:
T(n) = (log n) * O(log n) + n
     = O(log^2(n)) + O(n)
     = O(n)
So it still holds.