Proving f(n)-g(n) is O(f(n)) - big-o

I'm having trouble with a homework problem on time complexity: how do you properly prove the statement? Everything I've tried so far leads to dead ends.
Question as listed:
Let f(n) and g(n) be non-negative functions such that f(n) is O(g(n)) and g(n) is
O(f(n)). Use the definition of “big Oh” to prove that f(n) − g(n) is O(f(n)).

Without outright giving you the answer to your homework, I'll instead point you in the right direction.
1. Prove that f(n) = Θ(g(n)) iff g(n) = Θ(f(n))
2. http://web.cse.ohio-state.edu/~lai.1/780-class-notes/2.math.pdf
Here are some notes to read over and after that working out the proof shouldn't be hard.
Also, I'd ask this question on Math Stack Exchange rather than Stack Overflow.
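This isn't a proof, but as a numeric sanity check of the claim you can pick a hypothetical pair f and g where f = O(g) and g = O(f) (both linear here, chosen purely for illustration) and verify that f(n) − g(n) stays below a constant multiple of f(n):

```python
# Hypothetical example functions: f = O(g) and g = O(f), both linear.
def f(n):
    return 3 * n + 5

def g(n):
    return 2 * n + 1

# With c = 1: f(n) - g(n) = n + 4 <= 1 * f(n) = 3n + 5 for all n >= 0.
c = 1
assert all(f(n) - g(n) <= c * f(n) for n in range(10_000))
```

A finite check like this only gathers evidence; the homework asks for the inequality to be argued from the definitions for all sufficiently large n.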

Related

Can f(n) be in big O and big Omega of g(n)?

I have a question about one of my homework problems. I've watched a couple of videos on YouTube explaining Big O, Theta, Omega, etc., but I do not understand what this question is asking.
What is this question asking? Is it claiming that no function exists that has g(n) as both an upper bound (big O) and a lower bound (big Omega)?
I am at a complete loss and pretty confused. If someone could clear up the confusion by explanation, that would be fantastic. I cannot wrap my head around it.
I believe the question is asking you to prove or disprove the statement. When it comes to asymptotic notation, using the less than/equal/greater than symbols can be confusing for new learners, because it seems to imply an equation between the two functions when it is really saying something entirely different.
O(g(n)) is actually a set of functions that are bounded above by g(n) times some constant factor for large enough n. Formally, f(n) = O(g(n)) means there exist constants c > 0 and N such that f(n) ≤ c g(n) for all n > N. That is the reason ≤ is used for O. Big-Omega is defined similarly, but as a lower bound. Many different functions can satisfy a given upper or lower bound, which is why each notation is defined as a set.
So it might be more clear to use set notation for this. You can express the same thing as:
f(n) ∈ O(g(n))
f(n) ∈ Ω(g(n))
So f(n) ≤ O(g(n)) means the same as f(n) = O(g(n)) which is the same as f(n) ∈ O(g(n)). And f(n) ≥ Ω(g(n)) means the same as f(n) = Ω(g(n)) which is the same as f(n) ∈ Ω(g(n)).
So what it's really asking you to prove is whether you can have a function f(n) that is bounded both above and below by g(n).
You can. This is actually the definition of Big-Theta. Θ(g(n)) is the set of all functions for which g(n) is both an asymptotic upper and lower bound. In other words, h(n) = Θ(g(n)) implies c₁ g(n) ≤ h(n) ≤ c₂ g(n) for some positive constants c₁, c₂ and large enough n.
If f(n) = 7n^2 + 500, then a suitable upper and lower bound is n^2, because f(n) ≥ 1*n^2 for all n, and f(n) ≤ 8*n^2 for all n ≥ 23 (since 7n^2 + 500 ≤ 8n^2 requires n^2 ≥ 500). Therefore f(n) ∈ Θ(n^2).
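The sandwich above is easy to check numerically. A minimal sketch, using the bounds and the threshold n ≥ 23 worked out above:

```python
def f(n):
    return 7 * n**2 + 500

# Theta(n^2) sandwich: 1*n^2 <= f(n) <= 8*n^2.
# The lower bound holds for every n; the upper bound
# 7n^2 + 500 <= 8n^2 holds once n^2 >= 500, i.e. n >= 23.
for n in range(23, 10_000):
    assert 1 * n**2 <= f(n) <= 8 * n**2
```

Checking a finite range of n is only a sanity check, of course; the real argument is the algebra in the paragraph above.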

When and Where to use which asymptotic notation

I have been through this Big-Oh explanation, understanding the complexity of two loops, difference between big theta and big-oh and also through this question.
I understand that we cannot simply say that Big-O is the worst case, Omega the best case, and Theta the average case; Big-O has its own best, worst, and average cases. But how do we find out whether a specific algorithm belongs to Big-O, Big-Theta, or Big-Omega, and how can we check whether an algorithm belongs to all of these?
A function f(n) is Big-Oh of a function g(n), written f(n) = O(g(n)), if there exist a positive constant c and a natural number n0 such that for n > n0, f(n) <= c * g(n). A function f(n) is Big-Omega of g(n), written f(n) = Omega(g(n)), if and only if g(n) = O(f(n)). A function f(n) is Theta of a function g(n), written f(n) = Theta(g(n)), if and only if f(n) = O(g(n)) and f(n) = Omega(g(n)).
To prove any of the three, you do it by showing that some function(s) are Big-Oh of some other functions. Showing that one function is Big-Oh of another is a difficult problem in the general case. Any form of mathematical proof may be helpful. Induction proofs, in conjunction with intuition for the base cases, are not uncommon. Basically, guess values for c and n0 and see if they work. Other options involve choosing one of the two and working out a reasonable value for the other.
Note that a function may not be Big-Theta of any other function, if its tightest bounds from above and below are functions with different asymptotic rates of growth. However, I think it's usually a safe bet that most functions are going to be Big-Oh of something reasonably uncomplicated, and all functions typically looked at from this perspective are at least constant-time in the best case - Omega(1).
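The guess-and-check step described above can be sketched as a small helper. The function name and the example guess here are hypothetical, purely for illustration, and a finite scan is evidence rather than a proof:

```python
def looks_big_oh(f, g, c, n0, limit=10_000):
    """Check f(n) <= c * g(n) for n0 < n <= limit.

    A finite check like this only gathers evidence; the
    inequality still has to be argued for ALL n > n0.
    """
    return all(f(n) <= c * g(n) for n in range(n0 + 1, limit + 1))

# Example guess: 3n^2 + 10n is O(n^2) with c = 4, n0 = 10,
# since 3n^2 + 10n <= 4n^2 exactly when 10n <= n^2, i.e. n >= 10.
assert looks_big_oh(lambda n: 3 * n**2 + 10 * n, lambda n: n**2, c=4, n0=10)
```

If the check fails, you adjust c or n0 (or conclude the bound is wrong) and then turn the successful guess into an algebraic argument.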

Understanding asymptotic notation homework

This is a problem from Steven Skiena's Algorithm Design Manual book. This is FOR HOMEWORK and I am not looking for a solution. I just want to know if I understand the concept and am approaching it the right way.
Find two functions f(n) and g(n) that satisfy the following relationship. If no such f and g exist, write None.
a) f(n)=o(g(n)) and f(n)≠Θ(g(n))
So I'm reading this as g(n) is strictly (little-oh) larger than f(n) and the average is not the same. If I'm reading this correctly then my answer is:
f(n) = n^2 and g(n) = n^3
b) f(n)=Θ(g(n)) and f(n)=o(g(n))
I'm taking this to mean that f(n) is on average the same as g(n) but g(n) is also larger, so my answer is:
f(n)=n+2 and g(n)=n+10
c) f(n)=Θ(g(n)) and f(n)≠O(g(n))
f(n) is on average the same as g(n) and g(n) is not larger:
f(n)=n^2+10 and g(n)=n^2
d) f(n)=Ω(g(n)) and f(n)≠O(g(n))
g(n) is the lower bound of f(n):
f(n)=n^2+10 and g(n)=n^2
Now is my understanding of the problem correct? If not, what am I doing wrong? If it is correct, do my solutions make sense?

Big O Notation: Definition

I've been watching MIT lectures for the algorithms course and the definition for the Big O notation says
f(n) = O(g(n)) means that there exist constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n > n0
Then the instructor proceeded to give an example,
2n² = O(n³)
Now I get that Big O gives an upper bound on a function, but I am confused as to what exactly the function f(n) corresponds to here. What is its significance? As far as my understanding goes, g(n) is the function representing the algorithm we are trying to analyse, but what is the purpose of f(n), or, as in the example, 2n²?
Need some clarification on this, I've been stuck here for hours.
In the formal definition of big-O notation, the functions f(n) and g(n) are placeholders for other functions, the same way that, say, in the quadratic formula, the letters a, b, and c are placeholders for the actual coefficients in the quadratic equation.
In your example, the instructor was talking about how 2n² = O(n³). You have a formal definition that talks about what it means, in general, for f(n) = O(g(n)) to be true. So let's pattern-match that against the math above. It looks like f(n) is the thing on the left and g(n) is the thing on the right, so in this example f(n) = 2n² and g(n) = n³.
The previous paragraph gives a superficial explanation of what f(n) and g(n) are by just looking at one example, but it's better to talk about what they really mean. Mathematically, f(n) and g(n) really can be any functions you'd like, but typically when you're using big-O notation in the context of the analysis of algorithms, you'll usually let f(n) be the true amount of work done by the algorithm in question (or its runtime, or its space usage, or really just about anything else) and will pick g(n) to be some "nice" function that's easier to reason about. For example, it might be the case that some function you're analyzing has a true runtime, as a function of n, of 16n³ - 2n² - 9n + 137. That would be your function f(n). Since the whole point behind big-O notation is to be able to (mathematically rigorously and safely) discard constant factors and low-order terms, we'll try to pick a g(n) that grows at the same rate as f(n) but is easier to reason about - say, g(n) = n³. So now we can try to determine whether f(n) = O(g(n)) by seeing whether we can find the constants c and n0 talked about in the formal definition of big-O notation.
So to recap:
f(n) and g(n) in the definition given are just placeholders for other functions.
In practical usage, f(n) will be the true runtime of the algorithm in question, and g(n) will be something a lot simpler that grows at the same rate.
f(n) is the function that gives you the exact values of the thing you are trying to measure (be that time, number of processor instructions, number of iterations steps, amount of memory used, whatever).
g(n) is another function that approximates the growth of f(n).
In the usual case you don't really know f(n), or it's really hard to compute. For time, for example, it depends on the processor speed, memory access patterns, system load, compiler optimizations, and so on. g(n) is usually really simple, and it's easier to reason about: if f(n) = O(n), then doubling n will roughly double the runtime, in the worst case. Since it's an upper bound, g(n) doesn't have to be the tightest possible, but usually people try to avoid inflating it if it's not necessary. In your example O(n³) is an upper bound for 2n², but so are O(n²) and O(n!).
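For the instructor's example 2n² = O(n³), one witness pair that works is c = 1 and n0 = 2, since 2n² ≤ n³ exactly when n ≥ 2. A minimal numeric check of that pair:

```python
def f(n):
    return 2 * n**2   # the "true" function from the example

def g(n):
    return n**3       # the simpler upper-bounding function

# Witness pair: c = 1, n0 = 2, since 2n^2 <= n^3 whenever n >= 2
# (divide both sides by n^2). At n = 2 the two sides are equal: 8 = 8.
c, n0 = 1, 2
assert all(f(n) <= c * g(n) for n in range(n0, 5_000))
```

Other witness pairs work too (for instance c = 2, n0 = 1); the definition only requires that some pair exists.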

Complexity analysis - adding 2 functions

I have a homework question asks
Given f(n) is O(k(n)) and g(n) is O(k(n)), prove f(n)+g(n) is also O(k(n))
I'm not sure where to start with this, any help to guide me of how to work on this?
Try to work through it logically. Suppose, purely for illustration, that f(n) and g(n) each grow at a linear rate. Then
O(n) + O(n) = O(2n)
When attempting to find the big O classification of a function, constants don't count.
I'll leave the rest (including the why) as an exercise for you. (Getting the full answer on SO would be cheating!)
Refer to the Rules for Big-Oh Notation.
Sum Rule: If f(n) is O(h(n)) and g(n) is O(p(n)), then f(n)+g(n) is O(h(n)+p(n)).
Using this rule for your case the complexity would be O(2k(n)), which is nothing but O(k(n)).
So, f(n) is O(g(n)) iff f(n) is less than or equal to some positive constant multiple of g(n) for all sufficiently large values of n (that is, f(n) <= cg(n) for all n >= n_0). Usually, to prove something is O(g(n)), we provide some c and n_0 and show that the inequality holds.
In your case, I would start by using that definition, so you could say f(n) <= ck(n) and g(n) <= dk(n). I don't want to totally answer the question for you, but you are basically just going to try to show that f(n)+g(n) <= tk(n).
*c, d, and t are all just arbitrary, positive constants. If you need more help, just comment, and I will gladly provide more info.
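Putting the hint above into numbers: with hypothetical f, g, and k (all names and constants here are made up for illustration), once f(n) <= c·k(n) and g(n) <= d·k(n) both hold, the sum is bounded by t = c + d times k(n):

```python
# Hypothetical functions, both O(k(n)) with k(n) = n^2:
def f(n):
    return 3 * n**2 + n      # f(n) <= 4*n^2 for n >= 1   (c = 4)

def g(n):
    return 5 * n**2 + 100    # g(n) <= 6*n^2 for n >= 10  (d = 6)

def k(n):
    return n**2

# Then f(n) + g(n) <= (c + d) * k(n) = 10 * k(n)
# once both individual bounds hold, i.e. for n >= 10.
t = 10
assert all(f(n) + g(n) <= t * k(n) for n in range(10, 10_000))
```

The general proof follows the same shape: take the max of the two n_0 values and the sum of the two constants.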
