This is a problem from Steven Skiena's Algorithm Design Manual book. This is FOR HOMEWORK and I am not looking for a solution. I just want to know if I understand the concept and am approaching it the right way.
Find two functions f(n) and g(n) that satisfy the following relationship. If no such f and g exist, write None.
a) f(n)=o(g(n)) and f(n)≠Θ(g(n))
So I'm reading this as g(n) is strictly (little-oh) larger than f(n) and the average is not the same. If I'm reading this correctly then my answer is:
f(n) = n^2 and g(n) = n^3
b) f(n)=Θ(g(n)) and f(n)=o(g(n))
I'm taking this to mean that f(n) is on average the same as g(n) but g(n) is also larger, so my answer is:
f(n)=n+2 and g(n)=n+10
c) f(n)=Θ(g(n)) and f(n)≠O(g(n))
f(n) is on average the same as g(n) and g(n) is not larger:
f(n)=n^2+10 and g(n)=n^2
d) f(n)=Ω(g(n)) and f(n)≠O(g(n))
g(n) is the lower bound of f(n):
f(n)=n^2+10 and g(n)=n^2
Now is my understanding of the problem correct? If not, what am I doing wrong? If it is correct, do my solutions make sense?
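If it helps to sanity-check pairs like these, here is a minimal Python sketch (my own illustration, not part of the original question). It prints f(n)/g(n) for growing n: a ratio heading toward 0 is consistent with f(n) = o(g(n)), a ratio settling near a positive constant is consistent with f(n) = Θ(g(n)), and a ratio growing without bound is consistent with f(n) = Ω(g(n)) but not O(g(n)). It is only a heuristic, not a proof.

# Print f(n)/g(n) at increasing n to get a feel for the asymptotic relationship.
def ratio_table(f, g, ns=(10, 100, 1_000, 10_000, 100_000)):
    for n in ns:
        print(n, f(n) / g(n))

# Example: part (a)'s candidate pair, f(n) = n^2 and g(n) = n^3.
ratio_table(lambda n: n**2, lambda n: n**3)   # ratio shrinks toward 0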
I have a question about one of my homework questions. I've watched a couple of videos on YouTube explaining Big O, Theta, Omega, etc., but I do not understand what this question is asking.
What is this question asking? That there is no function that is less than or equal to some complexity as its upper bound while also being greater than or equal to that complexity as its lower bound?
I am at a complete loss and pretty confused. If someone could clear up the confusion with an explanation, that would be fantastic. I cannot wrap my head around it.
I believe the question is asking you to prove or disprove the statement. When it comes to asymptotic notation, using the less-than/equal/greater-than symbols can be confusing for new learners, because it seems to imply an equation between the two functions when it is really saying something entirely different.
O(g(n)) is actually a set of functions, each of which is bounded above by g(n) times some constant factor for large enough n. In math you would say f(n) ≤ O(g(n)) means f(n) ≤ c g(n) for some c > 0 and all n > N. That is the reason ≤ is used for O. Big-Omega is defined similarly but as a lower bound. There are many functions that can satisfy a given upper or lower bound, which is why each of these is defined as a set.
So it might be more clear to use set notation for this. You can express the same thing as:
f(n) ∈ O(g(n))
f(n) ∈ Ω(g(n))
So f(n) ≤ O(g(n)) means the same as f(n) = O(g(n)) which is the same as f(n) ∈ O(g(n)). And f(n) ≥ Ω(g(n)) means the same as f(n) = Ω(g(n)) which is the same as f(n) ∈ Ω(g(n)).
So what it's really asking you to prove is whether you can have a function f(n) that is bounded both above and below by g(n).
You can. This is actually the definition for Big-Theta. Θ(g(n)) is the set of all functions such that g(n) is an asymptotic upper and lower bound on those functions. In other words, h(n) = Θ(g(n)) implies c₁ g(n) ≤ h(n) ≤ c₂ g(n) for large enough n.
If f(n) = 7n^2 + 500 then a suitable upper and lower bound can be n^2, because f(n) ≥ 1*n^2 for all n and f(n) ≤ 8*n^2 for all n ≥ 23. Therefore f(n) ∈ Θ(n^2).
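As a quick numerical sanity check (my own sketch, not a substitute for the algebra above), those constants can be verified in Python over a finite range; a finite check is not a proof, but it catches bad constants quickly.

# Verify 1*n^2 <= 7n^2 + 500 <= 8*n^2 for 23 <= n < 100000.
def f(n):
    return 7 * n**2 + 500

assert all(n**2 <= f(n) <= 8 * n**2 for n in range(23, 100_000))
print("bounds hold on the range checked")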
I have been through this Big-Oh explanation, understanding the complexity of two loops, the difference between Big-Theta and Big-Oh, and also through this question.
I understand that we cannot simply say that Big-Oh is the worst case, Omega the best case, and Theta the average case; Big-Oh has its own best, worst, and average cases. But how do we find out whether a specific algorithm belongs to Big-Oh, Big-Theta, or Big-Omega? And how can we check whether an algorithm belongs to all of them?
A function f(n) is Big-Oh of a function g(n), written f(n) = O(g(n)), if there exist a positive constant c and a natural number n0 such that for n > n0, f(n) <= c * g(n). A function f(n) is Big-Omega of g(n), written f(n) = Omega(g(n)), if and only if g(n) = O(f(n)). A function f(n) is Theta of a function g(n), written f(n) = Theta(g(n)), if and only if f(n) = O(g(n)) and f(n) = Omega(g(n)).
To prove any of the three, you do it by showing some function(s) are Big-Oh of some other functions. Showing that one function is Big-Oh of another is a difficult problem in the general case. Any form of mathematical proof may be helpful. Induction proofs in conjunction with intuition for the base cases are not uncommon. Basically, guess at values for c and n0 and see if they work. Other options involve choosing one of the two and working out a reasonable value for the other.
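As an illustration of that guess-and-check step, here is a rough Python sketch (my own, with a made-up example function): pick candidate values for c and n0 and test the inequality over a finite range. Passing is not a proof, since only finitely many n are checked, but a failure immediately tells you the guess is bad.

# Test a guessed (c, n0) for f(n) <= c * g(n) over a finite range of n.
def check_big_oh(f, g, c, n0, upto=100_000):
    return all(f(n) <= c * g(n) for n in range(n0 + 1, upto))

# Example guess: 5n^2 + 3n = O(n^2) with c = 6 and n0 = 3.
print(check_big_oh(lambda n: 5 * n**2 + 3 * n, lambda n: n**2, c=6, n0=3))  # True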
Note that a function may not be Big-Theta of any other function, if its tightest bounds from above and below are functions with different asymptotic rates of growth. However, I think it's usually a safe bet that most functions are going to be Big-Oh of something reasonably uncomplicated, and all functions typically looked at from this perspective are at least constant-time in the best case - Omega(1).
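For instance (an illustrative example of my own, not from the original answer): a function with f(n) = n for even n and f(n) = n^2 for odd n is O(n^2) and Ω(n), but it is neither Θ(n) nor Θ(n^2), because its tightest upper and lower bounds grow at different rates.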
I'm having trouble with a homework problem on time complexity; how do you properly prove the statement? Everything I've done so far leads me to dead ends.
Question as listed:
Let f(n) and g(n) be non-negative functions such that f(n) is O(g(n)) and g(n) is O(f(n)). Use the definition of “big Oh” to prove that f(n) − g(n) is O(f(n)).
Without outright giving you the answer to your homework, I'd rather push you in the right direction.
1. Prove that f(n) = Θ(g(n)) iff g(n) = Θ(f(n))
2. http://web.cse.ohio-state.edu/~lai.1/780-class-notes/2.math.pdf
Here are some notes to read over; after that, working out the proof shouldn't be hard.
Also, I'd ask this question on Math Stack Exchange rather than Stack Overflow.
I have a homework question that asks:
Given f(n) is O(k(n)) and g(n) is O(k(n)), prove f(n)+g(n) is also O(k(n))
I'm not sure where to start with this. Any help guiding me on how to work through it?
Try to work through it logically. Suppose, for intuition's sake, that f(n) increases at a linear rate and so does g(n). Then, informally,
O(n) + O(n) = O(2n)
When attempting to find the Big-O classification of a function, constant factors don't count.
I'll leave the rest (including the why) as an exercise for you. (Getting the full answer on SO would be cheating!)
Refer to the Rules for Big-Oh Notation.
Sum Rule: If f(n) is O(h(n)) and g(n) is O(p(n)), then f(n)+g(n) is O(h(n)+p(n)).
Using this rule for your case the complexity would be O(2k(n)), which is nothing but O(k(n)).
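To see why the constant folds away (a one-line sketch using constants of my own naming): if f(n) <= c1*k(n) and g(n) <= c2*k(n) for all sufficiently large n, then f(n) + g(n) <= (c1 + c2)*k(n), and c1 + c2 is just another constant.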
So, f(n) is O(g(n)) iff f(n) is less than or equal to some positive constant multiple of g(n) for all sufficiently large values of n (so this: f(n) <= cg(n) for n >= n_0). Usually, to prove something is O(g(n)), we provide some c and n_0 and show that the inequality holds.
In your case, I would start by using that definition, so you could say f(n) <= ck(n) and g(n) <= dk(n). I don't want to totally answer the question for you, but you are basically just going to try to show that f(n)+g(n) <= tk(n).
*c, d, and t are all just arbitrary, positive constants. If you need more help, just comment, and I will gladly provide more info.
I need help with this question. I really don't understand how to do it.
Show, either mathematically or by an example, that if f(n) is O(g(n)), a*f(n) is O(g(n)), for any constant a > 0.
I'll give you this. It should help you look in the right direction:
definition of O(n):
a function f(n) that satisfies f(n) <= C*n for some constant C and for every n above some constant N is written f(n) = O(n).
This is the formal definition of Big-O notation; it should be simple to take this and turn it into a solution.
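For instance (an example of my own, not from the original answer): f(n) = 3n + 7 satisfies f(n) <= 4*n for every n above 7, so f(n) = O(n) with C = 4 and N = 7.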