I have to compare the functions f and g to find whether:
f ∈ Θ(g), f ∈ O(g),
f ∈ o(g), f ∈ Ω(g),
f ∈ ω(g), g ∈ Θ(f),
g ∈ O(f), g ∈ o(f),
g ∈ Ω(f), g ∈ ω(f).
f(n) = n^2 / log n and g(n) = n log n.
From my understanding of asymptotic analysis, I got that:
f(n) = O(g(n)) is like a <= b
f(n) = Ω(g(n)) is like a >= b
f(n) = Θ(g(n)) is like a == b
f(n) = o(g(n)) is like a < b
f(n) = ω(g(n)) is like a > b
So now I plotted f(n) and g(n), and I saw that for small values of n, f(n) is bigger, but for very large values of n, g(n) is bigger. Since algorithms have to be general, the behaviour for large n is what matters more, so this means that f(n) and g(n) are:
f ∈ Ω(g) and f ∈ ω(g) and g ∈ O(f) and g ∈ o(f)
Now my question is: is plotting the functions and seeing which one is bigger the right way to find these relationships, and does that mean they are equal where they intersect?
Your understanding looks wrong in these two cases:
f(n) = o(g(n)) is like a < b
f(n) = ω(g(n)) is like a > b
It's better to visualize these cases like:
f(n) = o(g(n)) is like a << b
f(n) = ω(g(n)) is like a >> b
Sorry to say, the symbols << ("much less") and >> ("much greater") have no exact mathematical definition.
So, in general, instead of plotting the two functions, you can think in terms of the limit of the ratio f(n)/g(n) as n goes to positive infinity:
if this limit is zero, then f = o(g)
if this limit is a positive constant, then f = Θ(g)
if this limit is positive infinity, then f = ω(g)
Of course, it's just a hint - your answer to any question about asymptotic relationship between two functions will look much better if you use an exact mathematical definition of this relationship.
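For the functions in your question, here is a small numeric sketch of that ratio (just an illustration, not a proof; I use the natural log, though the base only changes constants). The ratio f(n)/g(n) = n / (log n)^2 keeps growing, which points at f ∈ ω(g) and hence g ∈ o(f):

    import math

    # Functions from the question: f(n) = n^2 / log(n), g(n) = n * log(n).
    f = lambda n: n**2 / math.log(n)
    g = lambda n: n * math.log(n)

    for n in [10, 10**3, 10**6, 10**9]:
        print(n, f(n) / g(n))  # the ratio n / (log n)^2 grows without bound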
I am a bit confused about how to utilize asymptotic analysis to prove this statement. I've tried to use the definitions of f = O(g) and g = O(f), namely 0 < f(n) <= c1*g(n) and 0 < g(n) <= c2*f(n); however, I can't deduce what will happen for f(n) - g(n). Can someone help me out on this?
You can construct many counterexamples. A simple one is f(n) = 2n and g(n) = n. You can see 2n \in O(n) and n \in O(2n) by definition. But f(n) - g(n) = n, which is obviously not in O(1).
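As a quick numeric illustration of that counterexample (just a sketch):

    # Counterexample: f(n) = 2n, g(n) = n.
    f = lambda n: 2 * n
    g = lambda n: n

    # f(n) <= 2*g(n) and g(n) <= f(n) for all n >= 1, so each is O(the other),
    # yet f(n) - g(n) = n exceeds any fixed constant c once n > c.
    for c in [1, 10, 100]:
        n = c + 1
        print(c, f(n) - g(n) > c)  # always True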
You are given functions f and g such that f(n)=Ω(g(n)). Is log(f(n)^c)=Ω(log(g(n)))? (Here c is some positive constant.) You should assume that f and g are non-decreasing and always bigger than 1.
This is a question in my algorithms course and I can't figure out whether it's true, false, or whether it depends on the constant c or on the functions f and g.
It's straightforward to prove. As f(n) = Omega(g(n)), we have liminf{n -> ∞} f(n)/g(n) > 0.
As f and g are non-decreasing and greater than 1, and log is an increasing function, it follows that liminf{n -> ∞} log(f(n))/log(g(n)) > 0 as well. Hence, log(f(n)) = Omega(log(g(n))).
On the other hand, log(f(n)^c) = c*log(f(n)). As c is a positive constant factor, log(f(n)^c) is Omega(log(g(n))) as well, and your claim is correct.
First, I point out that instead of the notation f(n) = Ω(g(n)) I use f(n) ∈ Ω(g(n)).
From the Omega definition we have:
f(n) ∈ Ω(g(n)) <=> ∃s,k > 0 | f(n) >= s*g(n) ∀n >= k
So, since f(n) ∈ Ω(g(n)), there exist s,k > 0 such that f(n) >= s*g(n) for all n >= k, and therefore:
c*log(f(n)) >= c*log(s) + c*log(g(n)) ∀n >= k
If g(n) grows without bound, then log(g(n)) eventually dominates the constant c*log(s), so for all large enough n:
c*log(f(n)) >= (c/2)*log(g(n))
If instead g(n) is bounded, then log(g(n)) is bounded above while log(f(n)) >= log(f(k)) > 0 (f is non-decreasing and greater than 1), so a suitable constant exists in that case too.
Either way, log(f(n)^c) = c*log(f(n)) ∈ Ω(log(g(n))).
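Just as a sanity check (not a proof, since it only samples a few points), here is a tiny sketch with made-up example functions, say f(n) = 0.5*n and g(n) = n (so f ∈ Ω(g) with s = 0.5) and c = 3; the ratio log(f(n)^c)/log(g(n)) stays bounded away from zero:

    import math

    # Made-up example: f(n) = 0.5*n, g(n) = n, so f(n) >= 0.5*g(n) (f in Omega(g)).
    f = lambda n: 0.5 * n
    g = lambda n: float(n)
    c = 3  # an arbitrary positive constant

    for n in [10, 10**3, 10**6, 10**9]:
        print(n, math.log(f(n)**c) / math.log(g(n)))  # approaches c = 3, never near 0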
I am taking a class and we're reviewing time complexity information.
I understand that big o is an upper bound, and omega is a lower bound, and if those are the same then the function is theta(that bound).
Let's say I have the function f(n) = n. Can we say that it is theta(n)?
I think it is, because it is O(n) and Omega(n) with C = 1 for all n >= k = 1, but I wanted to ask to be sure.
Yes that is correct. It is a common definition to say that f \in \Theta(g) iff f \in \Omega(g) and f \in O(g).
Here f(n) = n and g(n) = n.
To prove both individual parts: liminf f(n)/g(n) = liminf 1 = 1 > 0, and limsup f(n)/g(n) = limsup 1 = 1 < \infty.
In particular f \in Theta(f) for all functions f.
Note however, that the notation usually uses a big \Theta, not a small one.
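If you want a mechanical sanity check of the constants (only an illustration, since a finite loop can't cover all n):

    # f(n) = n, g(n) = n: check c1*g(n) <= f(n) <= c2*g(n) with c1 = c2 = 1 for n >= 1.
    f = lambda n: n
    g = lambda n: n
    c1, c2 = 1, 1

    assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(1, 10**6))
    print("Theta(n) constants hold on the sampled range")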
I have been given the problem:
f(n) and g(n) are asymptotically positive functions. Prove f(n) = Θ(g(n)) iff g(n) = Θ(f(n)).
Everything I have found points to this statement being invalid. For example an answer I've come across states:
f(n) = O(g(n)) implies g(n) = O(f(n))
f(n) = O(g(n)) means g(n) grows faster than f(n). It cannot imply that f(n) grows faster than g(n). Hence not true.
Another states:
If f(n) = O(g(n)) then g(n) = O(f(n)). This is false. If f(n) = 1 and g(n) = n for all natural numbers n, then f(n) <= g(n) for all natural numbers n, so f(n) = O(g(n)). However, suppose g(n) = O(f(n)). Then there are a natural number n0 and a constant c > 0 such that n = g(n) <= c*f(n) = c for all n >= n0, which is impossible.
I understand that there are slight differences between my exact question and the examples I have found, but I've only been able to come up with solutions that do not prove it. Am I correct in thinking that it cannot be proved, or am I overlooking some detail?
You can start from here:
Formal Definition: f(n) = Θ(g(n)) means there are positive constants c1, c2, and k, such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ k.
Because you have an iff, you need to start from the left side and prove the right side, and then start from the right side and prove the left side.
Left -> right
We consider that:
f(n) = Θ(g(n))
and we want to prove that
g(n) = Θ(f(n))
So, we have some positive constants c1, c2 and k such that:
0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n), for all n ≥ k
The first relation between f and g is:
c1*g(n) ≤ f(n) => g(n) ≤ 1/c1*f(n) (1)
The second relation between f and g is:
f(n) ≤ c2*g(n) => 1/c2*f(n) ≤ g(n) (2)
If we combine (1) and (2), we obtain:
1/c2*f(n) ≤ g(n) ≤ 1/c1*f(n)
If you consider c3 = 1/c2 and c4 = 1/c1, they exist and are positive (because the denominators are positive). And this is true for all n ≥ k (with the same k).
So, we have some positive constants c3, c4, k such that:
c3*f(n) ≤ g(n) ≤ c4*f(n), for all n ≥ k
which means that g(n) = Θ(f(n)).
Analogous for right -> left.
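To see the algebra on a concrete pair (the functions and constants below are made up purely for illustration): take f(n) = 3n + 5 and g(n) = n with c1 = 3, c2 = 4, k = 5; the derived constants for the other direction are then c3 = 1/c2 and c4 = 1/c1.

    # Illustrative pair: f(n) = 3n + 5, g(n) = n.
    f = lambda n: 3 * n + 5
    g = lambda n: n

    c1, c2, k = 3, 4, 5        # f(n) = Theta(g(n)): 3n <= 3n + 5 <= 4n for n >= 5
    c3, c4 = 1 / c2, 1 / c1    # derived constants for g(n) = Theta(f(n))

    for n in range(k, 10**5):
        assert c1 * g(n) <= f(n) <= c2 * g(n)
        assert c3 * f(n) <= g(n) <= c4 * f(n)
    print("both directions hold on the sampled range")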
I'm trying to prove that this is correct for any functions f and g with domain and co-domain N. I have seen it proven using limits, but apparently you can also prove it without them.
Essentially what I'm trying to prove is "If f(n) doesn't have a big-O of g(n), then g(n) must have a big-O of f(n)." What I'm having trouble with is understanding what "f doesn't have a big-O of g" means.
According to the formal definition of big-O, if f(n) = O(g(n)) then n >= N -> f(n) <= c*g(n) for some N and a constant c. If f(n) != O(g(n)), I think that means there is no c that fulfills this inequality for all values of n. Yet I don't see how to use that fact to prove g(n) = O(f(n)). It doesn't prove that a c' exists such that g(n) <= c'*f(n), which would successfully answer the question.
Not true. Let f(n) = 1 if n is odd and zero otherwise, and g(n) = 1 if n is even and zero otherwise.
To say that f is O(g) would say there is a constant C > 0 and N > 0 such that n > N implies f(n) <= C*g(n). Let n = 2*N + 1, so that n is odd. Then f(n) = 1 but g(n) = 0, so f(n) <= C*g(n) is impossible. Thus, f is not O(g).
Similarly, we can show that g is not O(f).
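A quick numeric look at that counterexample (purely illustrative): whatever constant C you pick, there are arbitrarily large odd n where f(n) = 1 and g(n) = 0, and arbitrarily large even n where the roles flip, so neither bound can hold.

    # f is 1 on odd n and 0 on even n; g is the opposite.
    f = lambda n: 1 if n % 2 == 1 else 0
    g = lambda n: 1 if n % 2 == 0 else 0

    N = 100
    for C in [1, 10, 1000]:
        n_odd, n_even = 2 * N + 1, 2 * N + 2     # both exceed N
        print(C, f(n_odd) <= C * g(n_odd),       # False: 1 <= C*0 fails
                 g(n_even) <= C * f(n_even))     # False: 1 <= C*0 fails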
First of all, your definition of big-O is a little bit off. You say:
I think that means there is no c that fulfills this inequality for all values of n.
In actuality, you need to pick a single value c that fulfills the inequality for all values of n beyond some point N.
Anyway, to answer the question:
I don't believe the statement in the question is true... Let's see if we can think of a counter-example, where f(n) ≠ O(g(n)) and g(n) ≠ O(f(n)).
note: I'm going to use n and x interchangeably, since it's easier for me to think that way.
We'd have to come up with two functions that continually cross each other as they go towards infinity. Not only that, but they'd have to continue to cross each other regardless of the constant c that we multiply them by.
So that leaves me thinking that the functions will have to alternate between two different time complexities.
Let's look at a function that alternates between y = x and y = x^2:
f(x) = .2 (x * sin(x) + x^2 * (1 - sin(x)) )
Now, if we create a similar function with a slightly offset oscillation:
g(x) = .2 (x * cos(x) + x^2 * (1 - cos(x)) )
Then these two functions will continue to cross each others' paths out to infinity.
For any number N that you select, no matter how high, there will be an x1 greater than N where f(x1) is on the order of x1^2 while g(x1) is on the order of x1. Similarly, there will be an x2 greater than N where g(x2) is on the order of x2^2 and f(x2) is on the order of x2.
At those points, you won't be able to choose any c1 or c2 that ensures f(x) <= c1 * g(x) for all large x, or that g(x) <= c2 * f(x) for all large x.
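Here's a small numeric sketch of those crossing points, sampling x = 2πk (where sin = 0 and cos = 1) and x = 2πk + π/2 (where sin = 1 and cos = 0):

    import math

    # The two oscillating functions from above.
    f = lambda x: 0.2 * (x * math.sin(x) + x**2 * (1 - math.sin(x)))
    g = lambda x: 0.2 * (x * math.cos(x) + x**2 * (1 - math.cos(x)))

    for k in [10, 100, 1000]:
        x1 = 2 * math.pi * k                # sin ~ 0, cos ~ 1: f ~ 0.2*x^2, g ~ 0.2*x
        x2 = 2 * math.pi * k + math.pi / 2  # sin ~ 1, cos ~ 0: f ~ 0.2*x,   g ~ 0.2*x^2
        print(f(x1) / g(x1), f(x2) / g(x2))  # first ratio keeps growing, second shrinks to 0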
In conclusion, f(n) ≠ O(g(n)) does not imply g(n) = O(f(n)).