I came across this problem on asymptotic complexity of a function:
The complexities of 3 functions are as follows:
f(n) = O(n)
g(n) = Big-Omega(n)
h(n) = Theta(n)
So what will be the asymptotic complexity of the resultant function [f(n)*g(n)] + h(n)?
I can figure out that the answer will be Big-Omega(n) by elementary trial and error. For example, if I take f(n) = n, g(n) = n and h(n) = n, then f(n) is O(n), g(n) is Big-Omega(n) and h(n) is Theta(n). Now f(n)*g(n) is n^2, which is Big-Omega(n) but not O(n). Adding h(n) gives n^2 + n, which is also Big-Omega(n) but not Theta(n).
But I'm not able to figure out a proper logical or mathematical proof to this. Can someone please help me out with this?
Here's an attempt at a logical explanation:
f(n) = O(n) means that f's running time is at most linear (may be constant time).
h(n) = Theta(n) means that h's running time is linear.
g(n) = Big-Omega(n) means that g's running time is at least linear (it may be quadratic, exponential... we don't know).
Now let's analyse the most favourable case: f(n) is constant time, g(n) is linear and h(n) is linear. What can we say about the function f(n)*g(n) + h(n)? That it is also linear.
What can we say about the worst case? Nothing, as we have no clue about the behaviour of g(n) in the worst case.
So we can conclude that f(n)*g(n) + h(n) = Big-Omega(n), because even in the most favourable case it is linear, and it can only grow faster than that.
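For a more formal argument, here is a short proof sketch (assuming, as is usual for running times, that f, g and h are non-negative):
Since $h(n) = \Theta(n)$, there exist $c_1 > 0$ and $n_1$ such that $h(n) \ge c_1 n$ for all $n > n_1$.
Since $f(n) \ge 0$ and $g(n) \ge 0$, we have $f(n)g(n) + h(n) \ge h(n) \ge c_1 n$ for all $n > n_1$,
which is exactly the definition of $f(n)g(n) + h(n) = \Omega(n)$.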
Related
Why can Big-O notation not compare algorithms in the same complexity class? Please explain; I cannot find any detailed explanation.
O(n^2) only says that the algorithm requires at most on the order of n^2 operations. So when you have algorithm A, which requires f(n) = 1000n^2 + 2000n + 3000 operations, and algorithm B, which requires g(n) = n^2 + 10^20 operations, they are both O(n^2).
For small n the first algorithm performs better than the second one, while for big n the second algorithm looks better, since it has 1 * n^2 where the first has 1000 * n^2.
Also, h(n) = n is O(n^2) and k(n) = 5 is O(n^2). So I can say that k(n) is better than h(n), because I know what these functions look like.
Now consider the case where I don't know what the functions k(n) and h(n) look like. The only thing I'm given is k(n) ~ O(n^2) and h(n) ~ O(n^2). Can I say which function is better? No.
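To make this concrete, here is a small Python sketch (the function names f and g and the sample values of n are just for illustration) that evaluates the two example cost functions from above and shows the crossover:

def f(n):  # algorithm A: 1000n^2 + 2000n + 3000 operations
    return 1000 * n**2 + 2000 * n + 3000

def g(n):  # algorithm B: n^2 + 10^20 operations
    return n**2 + 10**20

for n in (10, 10**6, 10**10):
    cheaper = "A" if f(n) < g(n) else "B"
    print(f"n = {n}: f(n) = {f(n)}, g(n) = {g(n)}, cheaper: {cheaper}")

Both functions are O(n^2), yet A is cheaper for small and moderate n while B wins for very large n, so the shared Big-O class alone cannot tell you which one is better.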
Summary
You can't say which function is better, because Big O notation stands for "less than or equal". The following is true:
O(1) is O(n^2)
O(n) is O(n^2)
How to compare functions?
There is Big Omega notation, which stands for "greater than or equal". For example, f(n) = n^2 + n + 1 is Omega(n^2), Omega(n) and Omega(1). When a function is bounded by some asymptotic both from above and from below, Big Theta is used. So for the f(n) described above we can say that:
f(n) is O(n^3)
f(n) is O(n^2)
f(n) is Omega(n^2)
f(n) is Omega(n)
f(n) is Theta(n^2) // this is the only way we can describe f(n) using Theta notation
So, to compare asymptotics of functions you need to use Theta instead of Big O or Omega.
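For example, for f(n) = n^2 + n + 1 the Theta(n^2) claim can be backed by explicit constants (one possible choice):
$n^2 \le n^2 + n + 1 \le 3n^2$ for all $n \ge 1$,
so with $c_1 = 1$, $c_2 = 3$ and $n_0 = 1$ we get $f(n) = \Theta(n^2)$.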
f(n) and g(n) represent the running times of two different algorithms. f(n) has complexity O(1), and g(n) has complexity O(n). Can we claim that f(n)*g(n) has complexity O(n)? Why / why not?
A mathematical proof:
If we want to prove that f(n)*g(n) is O(n), we must show that there exist n0 and a constant c such that:
f(n)*g(n) < c*n for every n > n0
We know that f(n) is O(1), which means that there are c1, n1 such that:
f(n) < c1 for every n > n1 (1)
and that g(n) is O(n), so there are c2, n2 such that:
g(n) < c2*n for every n > n2 (2)
Now, for every n > max(n1, n2) (max because we want both inequalities, for f and for g, to hold):
f(n)*g(n) < c1*c2*n (by multiplying (1) and (2), which is valid since running times are non-negative)
So we have found c = c1*c2 and n0 = max(n1, n2) such that the following inequality holds for every n > n0:
f(n)*g(n) < c*n, and therefore f(n)*g(n) is O(n).
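As a sanity check (not a proof), here is a small Python sketch with hypothetical f and g, one constant-time and one linear-time, together with constants c1 and c2 chosen as in the inequalities above:

# Sanity check only: hypothetical f in O(1) and g in O(n).
def f(n):           # constant cost
    return 5

def g(n):           # linear cost
    return 3 * n + 7

c1, c2 = 6, 10      # f(n) < c1 and g(n) < c2*n for every n > 1
c = c1 * c2
assert all(f(n) * g(n) < c * n for n in range(2, 100_000))
print("f(n)*g(n) < c*n on the tested range, consistent with f(n)*g(n) = O(n)")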
When we multiply f(n) and g(n), the higher of the two complexities dominates the product: O(1) * O(n) gives O(n), so the algorithm complexity is O(n).
Studying for a test and getting this question:
Comparing two algorithms with asymptotic complexities O(n) and O(n + log(n)),
which one of the following is true?
A) O(n + log(n)) dominates O(n)
B) O(n) dominates O(n + log(n))
C) Neither algorithm dominates the other.
O(n) dominates log(n), correct? So do we just take O(n) from both and deduce that neither dominates?
[C] is true, because of the summation property of Big-O
Summation
O(f(n)) + O(g(n)) -> O(max(f(n), g(n)))
For example: O(n^2) + O(n) = O(n^2)
In Big-O, you only care about the fastest-growing term and ignore all the other additive terms.
Edit: originally I put [A] as the answer; I just didn't pay much attention to all the options and misinterpreted option [A]. Here is a more formal argument:
O(n) ~ O(n + log(n)) <=>
O(n) ~ O(n) + O(log(n)) <=>
O(n) ~ O(n).
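The last step can also be justified with explicit constants: $n \le n + \log n \le 2n$ for all $n \ge 1$, so $n + \log n = \Theta(n)$ and the classes $O(n + \log(n))$ and $O(n)$ contain exactly the same functions.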
Yes, that's correct. If the runtime is the sum of several runtimes, the term with the largest order of growth dominates.
Assuming that big-O notation is used here in the sense of an asymptotically tight bound (which really should be denoted with a big-Theta), I would answer C), because Theta(n) = Theta(n + log(n)) (log(n) is dominated by n).
If I am being formally (mathematically) precise, then I would say that none of these answers is correct, because O(n) and O(n + log(n)) only give upper bounds, not lower bounds, on the asymptotic behaviour:
Let f(n) be in O(n) and g(n) be in O(n + log(n)). Then there are the following counterexamples:
For A): Let f(n) = n in O(n) and g(n) = 1 in O(n + log(n)). Then g(n) does not dominate f(n).
For B): Let f(n) = 1 in O(n) and g(n) = n in O(n + log(n)). Then f(n) does not dominate g(n).
For C): Let f(n) = 1 in O(n) and g(n) = n in O(n + log(n)). Then g(n) does dominate f(n).
As this would be a very tricky question, I assume that you use the more common sloppy definition, which would give the answer C). (But you might want to check your definitions for big-O).
If my answer confuses you, then you probably didn't use the formal definition and you should probably ignore my answer...
I'm studying for an exam which is mostly about time complexity. I've encountered a problem while solving these four questions.
1) If we prove that an algorithm has a time complexity of Theta(n^2), is it possible that it runs in O(n) time for ALL inputs?
2) If we prove that an algorithm has a time complexity of Theta(n^2), is it possible that it runs in O(n) time for SOME inputs?
3) If we prove that an algorithm has a time complexity of O(n^2), is it possible that it runs in O(n) time for SOME inputs?
4) If we prove that an algorithm has a time complexity of O(n^2), is it possible that it runs in O(n) time for ALL inputs?
Can anyone tell me how to answer such questions? I'm mostly confused when they ask about "all" or "some" inputs.
Thanks
gkovacs90's answer provides a good link: WIKI
T(n) = O(n^3) means T(n) grows asymptotically no faster than n^3: a constant k > 0 exists such that for all n > N, T(n) < k*n^3.
T(n) = Θ(n^3) means T(n) grows asymptotically as fast as n^3: two constants k1, k2 > 0 exist such that for all n > N, k1*n^3 < T(n) < k2*n^3.
So if T(n) = n^3 + 2*n + 3,
then T(n) = Θ(n^3) is more appropriate than T(n) = O(n^3), since it gives us more information about the way T(n) behaves asymptotically.
T(n) = Θ(n^3) means that for n > N the curve of T(n) will "approach" and "stay close" to the curve of k*n^3, with k > 0.
T(n) = O(n^3) means that for n > N the curve of T(n) will always be under the curve of k*n^3, with k > 0.
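Here is a quick numerical illustration (not a proof) of those two definitions for T(n) = n^3 + 2*n + 3, with one possible choice of constants:

# Illustration only: check k1*n^3 < T(n) < k2*n^3 on a range of n.
def T(n):
    return n**3 + 2 * n + 3

k1, k2, N = 1, 2, 2          # one possible choice of constants
for n in range(N + 1, 10_000):
    assert k1 * n**3 < T(n) < k2 * n**3
print("k1*n^3 < T(n) < k2*n^3 for the tested n > N, consistent with T(n) = Theta(n^3)")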
1: No
2: Yes. As gkovacs90 says, for small values of n you can have an O(n) time calculation, but I would say no for big enough inputs; the notations Theta and Big-O only mean something asymptotically.
3: Yes
4: Yes
Example for number 4 (dumb but still true): for an int array A, compute the sum of the values. Your algorithm will certainly be:
def array_sum(A):        # A is the int array; n is its length
    total = 0
    for a in A:          # a single pass over the n elements
        total = total + a
    return total
If n is the length of the array A, the time complexity is T(n) = n. So T(n) = O(n^2), since T(n) will not grow faster than n^2. And still, for every array, we have an O(n) running time.
If you find such a result for a time (or memory) complexity, then you can (and certainly should) refine the Big-O / Theta of your function (here we obviously have Θ(n)).
Some last points:
T(n)=Θ(g(n)) implies T(n)=O(g(n)).
In computational complexity theory, the complexity is sometimes computed for best, worst and average cases.
A "barfoot" explanation:
Big O notation is for setting an upper bound. By definition, there is always an index (or input length) from which the bound holds. So below this index, anything can happen.
For example, sorting an array with one element (O(n^2)) takes less time than writing the elements to the output (O(n)). (We don't sort; we know it is in the right order, so it takes no time.)
So the answers:
1: No
2: Yes
3: Yes
4: Yes
You can find a detailed, understandable description at WIKI.
And HERE you can find a simpler explanation.
If T(n) is O(n), is it also correct to say T(n) is O(n^2)?
Yes; because O(n) is a subset of O(n^2).
Assuming
T(n) = O(n), n > 0
Then both of the following are true
T(n) = O(2n)
T(n) = O(n^2)
This is because both 2n and n^2 grow at least as quickly as plain n. EDIT: As Philip correctly notes in the comments, even a value smaller than 1 can be the multiplier of n, since constant factors may be dropped (they become insignificant for large values of n; EDIT 2: as Oli says, all constants are insignificant per the definition of O). Thus the following is also true:
T(n) = O(0.2n)
In fact, n^2 grows so much more quickly that you can also say
T(n) = o(n^2)
But not
T(n) = Θ(n^2)
because the functions given provide an asymptotic upper bound, not an asymptotically tight bound.
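Spelled out with the definition: if $T(n) \le c\,n$ for all $n > n_0$, then also $T(n) \le c\,n \le c\,n^2$ for all $n > \max(n_0, 1)$, which is exactly what $T(n) = O(n^2)$ requires.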
If you mean O(2 * N), then yes, O(n) == O(2n). The time taken is a linear function of the input size in both cases.
I disagree with the other answer that says O(N) = O(N*N). It is true that the O(N) function will finish in less time than the O(N*N) one, but the completion time does not actually grow like n*n, so O(N*N) is not a tight description of it.
I suppose the answer depends on why you are asking the question.
O, also known as Big-Oh, is an upper bound. We can say that there exists a constant C such that, for all n > N, T(n) < C*g(n).
So as long as the dominant term of T(n) grows no faster than g(n) (up to a constant factor), that statement is valid.
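A concrete instance of that definition (with illustrative constants): for $T(n) = 5n + 3$ and $g(n) = n$, taking $C = 6$ and $N = 3$ gives $T(n) = 5n + 3 < 6n$ for all $n > 3$, so $T(n) = O(n)$.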