Algorithm question

If f(x) = O(g(x)) as x -> infinity, then
A. g is the upper bound of f.
B. f is the upper bound of g.
C. g is the lower bound of f.
D. f is the lower bound of g.
Can someone please tell me which they think it is and why?

The real answer is that none of these is correct.
The definition of big-O notation is that:
|f(x)| <= k|g(x)|
for all x > x0, for some constants x0 and k > 0.
In specific cases, k might be less than or equal to 1, in which case it would be correct to say that "|g| is an upper bound of |f|". But in general, that's not true.
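A quick numeric sanity check of this point (a sketch; f, g, k, and x0 are the symbols from the definition above, with illustrative values):

```python
# Check |f(x)| <= k|g(x)| for x > x0, using f(x) = 2x and g(x) = x.
# Pointwise, g is NOT an upper bound of f -- but k*g is, with k = 2,
# which is all that big-O requires.

def f(x):
    return 2 * x

def g(x):
    return x

k, x0 = 2, 1
xs = range(x0 + 1, 1000)

assert any(f(x) > g(x) for x in xs)                 # g alone does not bound f
assert all(abs(f(x)) <= k * abs(g(x)) for x in xs)  # but k*g does
```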

Answer
g is the upper bound of f
As x goes towards infinity, the worst case is O(g(x)): the actual execution time can be lower than g(x), but never worse than g(x).
EDIT:
As Oli Charlesworth pointed out, that is only true with arbitrary constant k <= 1 and not in general. Please look at his answer for the general case.

The question checks your understanding of the basics of asymptotic algebra, or big-oh notation. In
f(x) = O(g(x)) as x approaches infinity
the statement says that when you feed the function f a value x, the value f computes from x is of the order of the value returned by another function, g(x). As an example, suppose
f(x) = 2x
g(x) = x
then the value g(x) returns when fed x is of the same order as that f(x) returns for x. Specifically, the two functions return a value that is in the order of x; the functions are both linear. It doesn't matter whether f(x) is 2x or ½x; for any constant factor at all f(x) will return a value that is in the order of x. This is because big-oh notation is about ignoring constant factors. Constant factors don't grow as x grows and so we assume they don't matter nearly as much as x does.
We restrict g(x) to a specific set of functions. g(x) can be x, or ln(x), or log(x) and so on and so forth. It may look as if when
f(x) = 2x
g(x) = x
f(x) yields values higher than g(x) and therefore is the upper bound of g(x). But once again, we ignore the constant factor, and we say that the order-of upper bound, which is what big-oh is all about, is that of g(x).
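The constant-factor point can be checked numerically (an illustrative sketch; the chosen bounds are one valid choice, not unique):

```python
# Both 2x and x/2 are "in the order of x": their ratio to x stays bounded
# by a constant as x grows, which is exactly what membership in O(x) means.

def bounded_by(f, g, c, xs):
    """True if |f(x)| <= c*|g(x)| for every sampled x."""
    return all(abs(f(x)) <= c * abs(g(x)) for x in xs)

xs = range(1, 10**6, 10**4)
assert bounded_by(lambda x: 2 * x, lambda x: x, 2, xs)   # f(x) = 2x is O(x)
assert bounded_by(lambda x: x / 2, lambda x: x, 1, xs)   # f(x) = x/2 is O(x)
```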

Related

Optimize two variable function with O(log n) time

Given integer variable x, ranging from 0 to n. We have two functions f(x) and g(x) with the following properties:
f(x) is a strictly increasing function; with x1 > x2, we have f(x1) > f(x2)
g(x) is a strictly decreasing function; with x1 > x2, we have g(x1) < g(x2)
f(x) and g(x) are black-box functions, and have constant time complexity O(1)
The problem is to solve an optimization problem and determine optimal x:
minimize f(x) + g(x)
An easy approach is a simple linear scan to test all x from 0 to n with time complexity of O(n). I am curious if there is an approach to solve it with O(log n).
There is no such solution.
Start with f(i) = 2i and g(i) = 2n - 2i. These meet your requirements, and the minimum is going to be 2n (the sum is constant).
Now pick one point k and replace g(k) with 2n - 2k - 1. This still meets your requirements, the minimum is now going to be 2n - 1, and you can only gain this knowledge by evaluating at k. No other query gives you information that differs from the original functions. So there is no way around asking n questions to notice a difference between the modified and original functions.
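The adversarial construction above can be sketched in code (n and k below are illustrative; k plays the adversary's secret index):

```python
# f(i) = 2i is strictly increasing, g(i) = 2n - 2i strictly decreasing,
# and f + g is the constant 2n. Lowering g at one secret point k moves
# the minimum to 2n - 1 -- and no query other than i == k can tell the
# modified instance from the original.

n, k = 100, 37

def f(i):
    return 2 * i

def g(i):
    return 2 * n - 2 * i

def g_mod(i):                       # g with a single point changed
    return g(i) - 1 if i == k else g(i)

# every query except i == k looks identical under both versions
assert all(f(i) + g(i) == f(i) + g_mod(i) for i in range(n + 1) if i != k)
# the improved minimum is visible only by probing k itself
assert f(k) + g_mod(k) == 2 * n - 1
```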
I doubt the problem in such a general form has a solution.
Let f(x)=2x for even x and 2x+1 for odd,
and g(x)=-2x-1.
Then f+g oscillates between -1 and 0 for integer arguments, and every even x is a local minimum.
And, similarly to the example by @btilly, a small variation in the definition of g(x) may introduce a global minimum anywhere.
I already marked one response as the solution. In certain special cases, you have to brute-force all x to get the optimal value. However, my real intention is to see if there is any early-stopping criterion when we observe a specific pattern. An example early-stopping solution is as follows.
First we evaluate the boundary conditions at 0 and n, giving f(0), f(n), g(0), and g(n). For any 0 < x < n:
f(0) < f(x) < f(n)
g(0) > g(x) > g(n)
Given two trials x and y, y > x, if we observe:
f(y) + g(y) > f(x) + g(x) // x solution is better
f(y) - f(x) >= g(x) - g(n) // every z > y has f(z) + g(z) > f(y) + g(n), which already cannot beat x
then there is no need to test solutions after y.
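A sketch of a linear scan with this kind of early stopping (the functions below are illustrative stand-ins): since f is increasing and g(i) >= g(n) everywhere, every z > y satisfies f(z) + g(z) > f(y) + g(n), so once that lower bound can no longer beat the best sum seen, the scan can stop.

```python
def argmin_with_pruning(f, g, n):
    """Scan x = 0..n for min f(x)+g(x); stop early when no later x can win."""
    g_n = g(n)
    best_x, best = 0, f(0) + g(0)
    for y in range(1, n + 1):
        if f(y) + g_n >= best:      # f(z)+g(z) >= f(y)+g(n) for all z >= y,
            break                   # so nothing from y onward can improve
        s = f(y) + g(y)
        if s < best:
            best_x, best = y, s
    return best_x, best

# illustrative pair: f increasing, g decreasing; sum is x^2 - 10x + 1000,
# minimized at x = 5 with value 975
assert argmin_with_pruning(lambda x: x * x, lambda x: 1000 - 10 * x, 100) == (5, 975)
```

The pruning is sound even though f + g need not be unimodal, because it only uses the monotonicity of f and g, not any shape assumption on the sum.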

Is f(x) = O(g(x)) here or vice versa?

In a tutorial on the Big-O notation it is said that if T(n) = 4n² - 2n + 2, then T(n) = O(n²). However, we know that f(x) = O(g(x)) if there exist N and C such that |f(x)| <= C|g(x)| for all x > N.
But the thing is that n² < 4n² - 2n + 2 for any n. Shouldn't we say that n² = O(4n² - 2n + 2) in this case?
All of the below statements are true:
n² ∈ O(n²)
n² ∈ O(4n² - 2n + 2)
4n² - 2n + 2 ∈ O(4n² - 2n + 2)
4n² - 2n + 2 ∈ O(n²)
However, talking about O(4n² - 2n + 2) does not make much sense, as it is exactly the same set as O(n²)*; since the latter is simpler, there's no reason to refer to it as the former.
* For every function f such that ∃N,C: ∀x>N: |f(x)| ≤ C|4x² - 2x + 2|, it is also true that ∃N,C: ∀x>N: |f(x)| ≤ C|x²|, and vice versa.
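Concrete witnesses for two of these memberships can be checked numerically (C = 5, N = 1 and C = 1, N = 1 are one possible choice, not unique):

```python
# 4x² - 2x + 2 <= 5x² for all x >= 1  (so 4x² - 2x + 2 ∈ O(x²)),
# and x² <= 4x² - 2x + 2 for all x >= 1  (so x² ∈ O(4x² - 2x + 2)).

T = lambda x: 4 * x * x - 2 * x + 2
sq = lambda x: x * x

xs = range(1, 10**5)
assert all(T(x) <= 5 * sq(x) for x in xs)
assert all(sq(x) <= 1 * T(x) for x in xs)
```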
The thing about big-O notation and questions like this is to consider which term of the expression dominates as n (or x, or another suitable variable) gets really big; that is, which term contributes most to the overall shape of the graph, or equivalently, which term you should plot against the expression's value to get the closest approximation to a straight line.
With regard to the rest of your question: it doesn't say that C > 1, only that C > 0 (I presume). As n grows, -2n + 2 becomes tiny in comparison with the squared term.
As to why this is relevant to coding: how long will your code take to run? Can you make it run faster? Which equation/code is more efficient? How big do your variables need to be (i.e. int or long choices in C)? If there is a "big-o" tag, I presume there has been at least one question on this before.

Asymptotic analysis "o" to "O" conversion

My understanding of the "O" and "o" notations is that the former is an upper bound and the latter is a tight bound. My question is: if a function f(n) is tightly bounded by some function, say o(g(n)), can this bound be made an upper bound, i.e. O(g(n)), by multiplying by some constant c such that it remains an upper bound even as n -> infinity?
f ∈ O(g) says, essentially
For at least one choice of a constant k > 0, you can find a constant a
such that the inequality 0 <= f(x) <= k g(x) holds for all x > a.
Note that O(g) is the set of all functions for which this condition holds.
f ∈ o(g) says, essentially
For every choice of a constant k > 0, you can find a constant a such
that the inequality 0 <= f(x) < k g(x) holds for all x > a.
Once again, note that o(g) is a set.
From Wikipedia:
Note the difference between the earlier formal definition for the
big-O notation, and the present definition of little-o: while the
former has to be true for at least one constant M the latter must hold
for every positive constant ε, however small. In this way, little-o
notation makes a stronger statement than the corresponding big-O
notation: every function that is little-o of g is also big-O of g, but
not every function that is big-O of g is also little-o of g (for
instance g itself is not, unless it is identically zero near ∞).
This link contains good explanation:
http://www.stat.cmu.edu/~cshalizi/uADA/13/lectures/app-b.pdf
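The distinction can also be seen through ratios (an illustrative sketch): if f ∈ o(g), then f(x)/g(x) tends to 0, while f ∈ O(g) only requires the ratio to stay bounded.

```python
# x ∈ o(x²): the ratio x/x² = 1/x shrinks toward 0 as x grows.
# 2x² ∈ O(x²) but not o(x²): the ratio is stuck at the constant 2.

xs = [10, 100, 1000, 10000]

little_o_ratios = [x / (x * x) for x in xs]
assert all(b < a for a, b in zip(little_o_ratios, little_o_ratios[1:]))

big_o_ratios = [2 * x * x / (x * x) for x in xs]
assert all(r == 2.0 for r in big_o_ratios)
```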

Asymptotic Upper Bounds vs Tight Bounds

I've come across in CLRS (Introduction to Algorithms) a sentence which states
"Distinguishing asymptotic Upper Bounds from asymptotically tight bounds is standard in the algorithms literature"
While I understand the essence of what the text wants to convey, It would be better understood if I get an example illustrating the difference.
O-notation gives us asymptotic upper bound.
Consider a function f(x),
We can define a function g(x), such that f(x) = O(g(x)).
Here g(x) is the asymptotic upper bound of f(x), meaning that for all values of x >= some constant c, f(x) grows at the same rate as or slower than g(x) as x increases.
Another thing to notice is that if h(x) is the asymptotic upper bound of g(x), then it easily follows that h(x) is also an asymptotic upper bound of f(x). After all, if f(x) can only grow at an equal or smaller rate than g(x), it is bound to grow at an equal or smaller rate than h(x), since g(x) cannot grow faster than h(x).
E.g., if f(x) = 10x + 2,
g(x) = 12x + 1 and h(x) = 2x^2.
We can safely say that f(x) = O(g(x)), g(x) = O(h(x)) and f(x) = O(h(x)).
Here g(x) is an asymptotically tight upper bound of f(x), while h(x) is an asymptotic upper bound that is not tight.
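These bounds can be checked numerically (a sketch; C = 1 and the starting point x = 7 are one valid witness pair, not the only one):

```python
f = lambda x: 10 * x + 2
g = lambda x: 12 * x + 1
h = lambda x: 2 * x * x

xs = range(7, 10**4)                 # 12x + 1 <= 2x² first holds at x = 7
assert all(f(x) <= 1 * g(x) for x in xs)   # f = O(g), and tightly so
assert all(g(x) <= 1 * h(x) for x in xs)   # g = O(h)
assert all(f(x) <= 1 * h(x) for x in xs)   # hence f = O(h)
```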

What is the mathematical definition of f(n) and O(f(n))

Can someone please give the mathematical definition of f(n) and O(f(n))?
You can check this page to see a math definition of big-O notation.
Let f and g be two functions defined on some subset of the real numbers. One writes
f(x) = O(g(x)) as x -> infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0.
In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that f(x) = O(g(x)).
