Algorithm domination - Big-O

Studying for a test and getting this question:
Comparing two algorithms with asymptotic complexities O(n) and O(n + log(n)),
which one of the following is true?
A) O(n + log(n)) dominates O(n)
B) O(n) dominates O(n + log(n))
C) Neither algorithm dominates the other.
O(n) dominates log(n), correct? So in this case do we just take O(n) from both and deduce that neither dominates?

[C] is true, because of the summation property of Big-O
Summation
O(f(n)) + O(g(n)) -> O(max(f(n), g(n)))
For example: O(n^2) + O(n) = O(n^2)
In Big-O, you only care about the fastest-growing term and ignore all the other additive terms.
Edit: originally I put [A] as the answer; I just didn't pay much attention to all the options and misinterpreted option [A]. Here is a more formal argument:
O(n) ~ O(n + log(n)) <=>
O(n) ~ O(n) + O(log(n)) <=>
O(n) ~ O(n).
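As a quick numerical sketch of the same point (illustrative Python, not part of the original argument): the ratio (n + log n)/n tends to 1, so the log term contributes nothing asymptotically.

    import math

    # The ratio (n + log n) / n approaches 1, so the log(n) term is
    # asymptotically negligible next to n.
    for n in [10, 100, 10_000, 1_000_000]:
        ratio = (n + math.log(n)) / n
        print(f"n = {n:>9}: (n + log n) / n = {ratio:.6f}")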

Yes, that's correct. If the runtime is the sum of several terms, the term with the largest order of magnitude dominates.

Assuming that big-O notation is used in the sense of an asymptotically tight bound, which really should be denoted with a big-Theta, I would answer C), because Theta(n) = Theta(n + log(n)) (since log(n) is dominated by n).
If I am being formally (mathematically) precise, then I would say that none of these answers is correct, because O(n) and O(n + log(n)) only give upper bounds, not lower bounds, on the asymptotic behaviour:
Let f(n) be in O(n) and g(n) be in O(n + log(n)). Then there are the following counterexamples:
For A): Let f(n) = n in O(n) and g(n) = 1 in O(n + log(n)). Then g(n) does not dominate f(n).
For B): Let f(n) = 1 in O(n) and g(n) = n in O(n + log(n)). Then f(n) does not dominate g(n).
For C): Let f(n) = 1 in O(n) and g(n) = n in O(n + log(n)). Then g(n) does dominate f(n).
As this would be a very tricky question, I assume that you use the more common sloppy definition, which would give the answer C). (But you might want to check your definitions for big-O).
If my answer confuses you, then you probably didn't use the formal definition and you should probably ignore my answer...

Related

Determining the big-O, simplification or not?

What are O(log(n!)) and O(n!)? I believe they are O(n log(n)) and O(n^n). Why?
I think it has to do with Stirling's approximation, but I don't quite follow the explanation.
Am I wrong that O(log(n!)) = O(n log(n))? How can the math be explained in simpler terms? Really, I just want an idea of how this works.
O(n!) isn't equivalent to O(n^n); n! grows asymptotically more slowly than n^n.
O(log(n!)) is equal to O(n log(n)). Here is one way to prove that:
Note that by using the log rule log(mn) = log(m) + log(n) we can see that:
log(n!) = log(n*(n-1)*...*2*1) = log(n) + log(n-1) + ... + log(2) + log(1)
Proof that O(log(n!)) ⊆ O(n log(n)):
log(n!) = log(n) + log(n-1) + ... + log(2) + log(1)
which is less than or equal to:
log(n) + log(n) + log(n) + ... + log(n) = n*log(n)
So O(log(n!)) is a subset of O(n log(n))
Proof that O(n log(n)) ⊆ O(log(n!)):
log(n!) = log(n) + log(n-1) + ... + log(2) + log(1)
which is at least its left half alone, and each of those ⌊n/2⌋ terms is at least log(n/2), so log(n!) is at least:
log(n/2) + log(n/2) + ... + log(n/2) = ⌊n/2⌋*log(n/2), which is Θ(n log(n))
So O(n log(n)) is a subset of O(log(n!)).
Since O(n log(n)) ⊆ O(log(n!)) ⊆ O(n log(n)), they are equivalent big-Oh classes.
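For intuition, here is a short Python check (illustrative only) of the two bounds used in the proof, with log(n!) computed as a sum of logs:

    import math

    # log(n!) as a sum of logs, sandwiched between the lower bound
    # (n/2)*log(n/2) and the upper bound n*log(n) from the proof above.
    for n in [10, 100, 1000, 10_000]:
        log_fact = sum(math.log(k) for k in range(1, n + 1))
        lower = (n // 2) * math.log(n / 2)
        upper = n * math.log(n)
        print(f"n={n:>6}: lower={lower:12.1f}  log(n!)={log_fact:12.1f}  upper={upper:12.1f}")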
By Stirling's approximation,
log(n!) = n log(n) - n + O(log(n))
For large n, the right side is dominated by the term n log(n). That implies that O(log(n!)) = O(n log(n)).
More formally, one definition of "Big O" is that f(x) = O(g(x)) if and only if
lim sup|f(x)/g(x)| < ∞ as x → ∞
Using Stirling's approximation, it's easy to show that log(n!) ∈ O(n log(n)) using this definition.
A similar argument applies to n!. By taking the exponential of both sides of Stirling's approximation, we find that, for large n, n! behaves like n^n * e^(-n) times a lower-order factor (sqrt(2*pi*n) in the full approximation). Since that factor divided by e^n tends to 0 as n → ∞, we have n!/n^n → 0, so n! ∈ O(n^n), but O(n!) is not equivalent to O(n^n). There are functions in O(n^n) that are not in O(n!) (such as n^n itself).
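A quick sanity check of both claims (a Python sketch; math.lgamma(n + 1) equals log(n!)):

    import math

    # Compare log(n!) with Stirling's n*log(n) - n, and watch
    # log(n!) - n*log(n) = log(n!/n^n) head to -infinity, i.e. n!/n^n -> 0.
    for n in [10, 100, 1000, 10_000]:
        log_fact = math.lgamma(n + 1)          # log(n!)
        stirling = n * math.log(n) - n
        print(f"n={n:>6}: log(n!)={log_fact:11.1f}  n*log(n)-n={stirling:11.1f}  "
              f"log(n!/n^n)={log_fact - n * math.log(n):9.1f}")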

Explanation for Big-O notation comparison in different complexity class

Why can Big-O notation not compare algorithms in the same complexity class? Please explain; I cannot find any detailed explanation.
O(n^2) says that the algorithm requires at most on the order of n^2 operations. So suppose algorithm A requires f(n) = 1000n^2 + 2000n + 3000 operations and algorithm B requires g(n) = n^2 + 10^20 operations. They're both O(n^2).
For small n the first algorithm performs better than the second one, and for big n the second algorithm looks better, since it has 1 * n^2 while the first has 1000 * n^2.
Also, h(n) = n is O(n^2) and k(n) = 5 is O(n^2). So I can say that k(n) is better than h(n), because I know what these functions look like.
Now consider the case when I don't know what k(n) and h(n) look like. The only thing I'm given is k(n) ∈ O(n^2) and h(n) ∈ O(n^2). Can I say which function is better? No.
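A tiny Python sketch of that crossover, using the operation counts from the example above (illustrative numbers only) — both functions are O(n^2), yet which one is "better" depends on n:

    # Two operation-count functions that are both in O(n^2).
    def f(n):
        return 1000 * n**2 + 2000 * n + 3000

    def g(n):
        return n**2 + 10**20

    for n in [10, 10**6, 10**10]:
        better = "f" if f(n) < g(n) else "g"
        print(f"n = {n:>12}: f(n) = {f(n):.3e}, g(n) = {g(n):.3e}  ->  {better} is cheaper")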
Summary
You can't say which function is better because Big-O notation stands for "less than or equal". The following are both true:
O(1) is O(n^2)
O(n) is O(n^2)
How to compare functions?
There is Big-Omega notation, which stands for "greater than or equal"; for example, f(n) = n^2 + n + 1 is Omega(n^2), Omega(n), and Omega(1). When a function's complexity matches an asymptotic bound exactly, Big-Theta is used, so for the f(n) described above we can say that:
f(n) is O(n^3)
f(n) is O(n^2)
f(n) is Omega(n^2)
f(n) is Omega(n)
f(n) is Theta(n^2) // this is the only way we can describe f(n) using Theta notation
So, to compare asymptotics of functions you need to use Theta instead of Big O or Omega.

Big O complexity when two functions f(n) [O(1)] and g(n) [O(n)] are multiplied together

f(n) and g(n) represent the running time of two different algorithms. f(n) has algorithm complexity O(1), and g(n) has algorithm complexity O(n). Can we claim f(n)*g(n) has complexity O(n)? Why/Why not?
A mathematical proof:
If we want to prove that f(n)*g(n) is O(n), we must show that there exist n0 and a constant c such that:
f(n)*g(n) < c*n for every n > n0
We have as a fact that f(n) is O(1), which means that there are c1, n1 such that:
f(n) < c1 for every n > n1 (1)
and since g(n) is O(n), there are c2, n2 such that:
g(n) < c2*n for every n > n2 (2)
Now, for every n > max(n1, n2) (max because we want both inequalities, for f and for g, to hold):
f(n)*g(n) < c1*c2*n (by multiplying (1) and (2))
So we have proved that there are c = c1*c2 and n0 = max(n1, n2) such that the following inequality holds:
f(n)*g(n) < c*n for every n > n0, i.e., f(n)*g(n) is O(n).
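The same argument can be sanity-checked numerically; here is a Python sketch with made-up f and g (the concrete functions and constants are only for illustration):

    # Illustrative functions: f is bounded by a constant (O(1)), g is linear (O(n)).
    def f(n):
        return 7            # f(n) < c1 = 8 for all n

    def g(n):
        return 3 * n + 2    # g(n) < c2*n = 4n for all n > 2

    c, n0 = 8 * 4, 2        # c = c1*c2, n0 = max(n1, n2)
    for n in [3, 10, 1000, 10**6]:
        assert f(n) * g(n) < c * n
        print(f"n={n:>8}: f(n)*g(n) = {f(n) * g(n):>10}  <  c*n = {c * n:>10}")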
When we multiply f(n) and g(n), the product takes on the larger of the two complexities, so the resulting complexity is O(1)*O(n) = O(n).

What will be the asymptotic time complexity of the following function?

I came across this problem on asymptotic complexity of a function:
The complexities of 3 functions are as follows:
f(n) = O(n)
g(n) = Big-Omega(n)
h(n) = Theta(n)
So what will be the asymptotic complexity of the resultant function [f(n).g(n)] + h(n)?
I can figure out that the answer will be Big-Omega(n) by elementary hit and trial. For example, if I say f(n) = n, g(n) = n and h(n) = n, then f(n) is O(n), g(n) is Big-Omega(n) and h(n) is Theta(n). Now f(n).g(n) is n^2, which is Big-Omega(n) but not O(n). Adding this to h(n) gives n^2 + n, which also is Big-Omega(n) but not Theta(n).
But I'm not able to figure out a proper logical or mathematical proof of this. Can someone please help me out?
Here's an attempt at a logical explanation:
f(n) = O(n) means that f's running time is at most linear (may be constant time).
h(n) = Theta(n) means that h's running time is linear.
g(n) = Big-Omega(n) means that g's running time is at least linear (it may be polynomial, exponential... we don't know).
Now let's analyse the best case: f(n) is constant time, g(n) is linear, h(n) is linear. What can we say about the function f(n)*g(n)+h(n)? That it's also linear.
What can we say about the worst case? Nothing, as we have no clue about the behaviour of g(n) in the worst case.
So we can conclude that f(n)*g(n)+h(n) = Big-Omega(n), as this function is linear in the best case.
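To make that concrete, here is a small Python sketch with made-up functions that satisfy the stated bounds: both choices of g are Omega(n), and the total is always at least linear, but no common upper bound exists.

    # f is O(n) (here a constant), h is Theta(n). Two choices of g, both Omega(n).
    f = lambda n: 1
    h = lambda n: n
    g_linear = lambda n: n       # total f*g + h stays linear
    g_exp = lambda n: 2 ** n     # total f*g + h blows up exponentially

    for n in [5, 10, 20, 30]:
        print(f"n={n:>3}: g=n gives {f(n)*g_linear(n) + h(n):>4},  "
              f"g=2^n gives {f(n)*g_exp(n) + h(n):>12}")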

Big-Oh Notation

If T(n) is O(n), is it also correct to say T(n) is O(n^2)?
Yes; because O(n) is a subset of O(n^2).
Assuming
T(n) = O(n), n > 0
Then both of the following are true
T(n) = O(2n)
T(n) = O(n^2)
This is because both 2n and n^2 grow as quickly as or more quickly than just plain n. EDIT: As Philip correctly notes in the comments, even a value smaller than 1 can be the multiplier of n, since constant factors may be dropped (they become insignificant for large values of n; EDIT 2: as Oli says, all constants are insignificant per the definition of O). Thus the following is also true:
T(n) = O(0.2n)
In fact, n^2 grows so quickly that you can also say
T(n) = o(n^2)
But not
T(n) = Θ(n^2)
because the functions given provide an asymptotic upper bound, not an asymptotically tight bound.
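As a small illustrative sketch (constants chosen arbitrarily): any T(n) bounded by c*n is automatically bounded by c*n^2 for n >= 1, which is all that membership in O(n^2) requires.

    # If T(n) <= 5*n (so T is O(n)), then T(n) <= 5*n^2 for all n >= 1 as well,
    # which is exactly what T(n) = O(n^2) requires.
    def T(n):
        return 5 * n   # illustrative linear running time

    c = 5
    for n in [1, 10, 100, 1000]:
        assert T(n) <= c * n * n
        print(f"n={n:>5}: T(n) = {T(n):>6}  <=  c*n^2 = {c * n * n:>9}")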
If you mean O(2 * n), then yes, O(n) == O(2n); the time taken is a linear function of the input size in both cases.
I disagree with the other answer that says O(N) = O(N*N). It is true that the O(N) function will finish in less time than the O(N*N) one, but the completion time is not a function of n*n, so it really isn't true.
I suppose the answer depends on why you are asking the question.
O, also known as Big-Oh, is an upper bound. We can say that there exists a C such that, for all n > N, T(n) < C*g(n), where C is a constant.
So as long as the dominant term of T(n) is smaller than or equal to g(n) (up to a constant factor), that statement is valid.
