Given an integer variable x ranging from 0 to n, we have two functions f(x) and g(x) with the following properties:
f(x) is strictly increasing: for x1 > x2, we have f(x1) > f(x2)
g(x) is strictly decreasing: for x1 > x2, we have g(x1) < g(x2)
f(x) and g(x) are black-box functions; each evaluation takes constant time, O(1)
The problem is to determine the optimal x in the optimization problem:
minimize f(x) + g(x)
An easy approach is a simple linear scan that tests every x from 0 to n, with time complexity O(n). I am curious whether there is an approach that solves it in O(log n).
There is no such solution.
Start with f(i) = 2i and g(i) = 2n - 2i. These meet your requirements, and the minimum is going to be 2n.
Now at one point k replace g(k) with 2n - 2k - 1. This still meets your requirements, the minimum is now going to be 2n - 1, and you can only learn that by asking about the point k. No amount of other questions gives you any information that differs from the original functions. So there is no way around querying every point to tell the modified functions apart from the original ones.
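A rough sketch of that construction (my own code, with n and k picked arbitrarily for illustration):

    # Sketch of the adversary argument (illustrative only).
    n, k = 10, 6

    def f(i):
        return 2 * i                      # strictly increasing

    def g_original(i):
        return 2 * n - 2 * i              # strictly decreasing; f + g == 2n everywhere

    def g_modified(i):
        # identical to g_original except at the single point k (still strictly decreasing)
        return 2 * n - 2 * i - (1 if i == k else 0)

    print([f(i) + g_original(i) for i in range(n + 1)])   # 2n at every point
    print([f(i) + g_modified(i) for i in range(n + 1)])   # 2n everywhere except 2n - 1 at k
    # The two sequences differ only at index k, so any strategy that skips k
    # cannot tell the minimum 2n apart from the minimum 2n - 1.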
I doubt the problem in such a general shape has an answer.
Let f(x)=2x for even x and 2x+1 for odd,
and g(x)=-2x-1.
Then f + g oscillates between -1 and 0 for integer arguments, and every even x is a local minimum.
And, similarly to the example by btilly, a small variation in the g(x) definition may introduce a global minimum anywhere.
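A quick sketch (mine, not part of the answer) tabulating the construction above:

    # Check the oscillating example numerically (illustrative only).
    def f(x):
        return 2 * x if x % 2 == 0 else 2 * x + 1   # strictly increasing on the integers

    def g(x):
        return -2 * x - 1                           # strictly decreasing

    print([f(x) + g(x) for x in range(8)])
    # -> [-1, 0, -1, 0, -1, 0, -1, 0]: every even x is a local minimum,
    #    so there is no single valley for a binary search to home in on.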
I already marked one response as the solution. In certain special cases you have to brute-force all x to get the optimal value. However, my real intention is to see whether there is any early-stopping criterion when we observe a specific pattern. An example early-stopping rule is as follows.
First we evaluate the boundary values at 0 and n, giving f(0), f(n), g(0), and g(n). For any 0 < x < n we have:
f(0) < f(x) < f(n)
g(0) > g(x) > g(n)
Given two trials x and y with y > x, if we observe:
f(y) + g(y) > f(x) + g(x) // the x solution is better
f(y) - f(x) >= g(x) - g(n) // no more room to improve after y, since any z > y has f(z) + g(z) > f(y) + g(n)
then there is no need to test solutions after y.
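As a sketch only (my code, assuming f, g and n are supplied as described above), a linear scan with that early exit could look like this:

    # Linear scan with the early-stopping check (illustrative sketch).
    def minimize_with_early_stop(f, g, n):
        g_n = g(n)                         # boundary value, evaluated once
        best_x, best_val = 0, f(0) + g(0)
        for y in range(1, n + 1):
            fy, gy = f(y), g(y)
            if fy + gy < best_val:
                best_x, best_val = y, fy + gy
            # For every z > y we have f(z) + g(z) > f(y) + g(n), so once even
            # that optimistic bound cannot beat the current best, stop early.
            if fy + g_n >= best_val:
                break
        return best_x, best_val

In the worst case (for example the adversarial functions from the accepted answer) the loop still visits every point, which is consistent with the O(n) lower bound; the check only helps when f rises quickly relative to the remaining drop in g.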
Related
It is for a homework assignment and I'm just getting thrown a little bit by the negative sign.
Express the following in terms of big-O notation. Use the tightest bounds possible. For instance, n^5 is technically O(n^1000), but this is not as tight as O(n^5).
n^2 - 500n - 2
n^2 - 500n - 2
<= n^2 - 500n
<= n^2 for all n > 0
which is O(n^2)
For Big O notation, what you need to remember is that it only matters from some number x0 onward. Specifically, f(x) = O(g(x)) as x approaches infinity if there is some number M and some real number x0 such that |f(x)| <= M|g(x)| for all x >= x0. (Source for the definition: Wikipedia.)
Basically, we only need to consider large values of x, and you can pick an arbitrarily large cutoff. So large, in fact, that n^2 will overshadow a subtraction of 500n. To be more technical: if I pick M to be 2 and x0 to be 100000000000000000, then the above inequality holds. I'm being lazy and picking an x0 that is extremely large, but the definition lets me. For M equal to 2 a much smaller value of x0 would work, but again, it doesn't matter.
Finally, your answer of O(n^2) is correct
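If you want to see the constants concretely, a throwaway check like this (my sketch, not a proof) shows how small x0 can actually be for M = 2:

    # Which n satisfy |n^2 - 500n - 2| <= M * n^2 over a sample range? (illustrative check)
    M = 2
    ok = [n for n in range(1, 5000) if abs(n * n - 500 * n - 2) <= M * n * n]
    print(ok[0])                            # 167 -- far smaller than the lazy 10^17 above
    print(ok == list(range(167, 5000)))     # True: no gaps in the sampled range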
Yes, O(n^2) is correct. The negative sign should not bother you. Yes, if n = 10 the expression is negative, but what if n is sufficiently large?
E.g. see these two graphs: link - n^2 for sufficiently large n is always larger than n^2-500n-2.
Sorting an n-element array takes:
in algorithm X: 10^-8 * n^2 sec,
in algorithm Y: 10^-6 * n * log2(n) sec,
in algorithm Z: 10^-5 sec.
My question is how to compare them. For example, at what number of elements does Y become faster than X?
When comparing Big-Oh notations, you ignore all constants:
N^2 has a higher growth rate than N*log(N) which still grows more quickly than O(1) [constant].
The power of N determines the growth rate.
Example:
O(n^3 + 2n + 10) > O(200n^2 + 1000n + 5000)
Ignoring the constants (as you should for pure big-Oh comparison) this reduces to:
O(n^3 + n) > O(n^2 + n)
Further reduction ignoring lower order terms yields:
O(n^3) > O(n^2)
because the power of N is higher: 3 > 2.
Big-Oh follows a hierarchy that goes something like this:
O(1) < O(log[n]) < O(n) < O(n*log[n]) < O(n^x) < O(x^n) < O(n!)
(Where x is any amount greater than 1, even the tiniest bit.)
You can compare any other expression in terms of n via some rules which I will not post here, but should be looked up in Wikipedia. I list O(n*log[n]) because it is rather common in sorting algorithms; for details regarding logarithms with different bases or different powers, check a reference source (did I mention Wikipedia?)
Give the wiki article a shot: http://en.wikipedia.org/wiki/Big_O_notation
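To see the hierarchy numerically, here is a quick sketch (mine, purely for illustration) that evaluates each class at a few sizes:

    import math

    # Evaluate each growth class at a few input sizes; the ordering above
    # emerges as n grows (illustrative only -- big-Oh is an asymptotic statement).
    classes = [
        ("1",       lambda n: 1),
        ("log n",   lambda n: math.log2(n)),
        ("n",       lambda n: n),
        ("n log n", lambda n: n * math.log2(n)),
        ("n^2",     lambda n: n ** 2),
        ("2^n",     lambda n: 2 ** n),
        ("n!",      lambda n: math.factorial(n)),
    ]
    for n in (10, 20, 30):
        print(n, [f"{name} = {value(n):.3g}" for name, value in classes])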
I propose this different solution since there is not an accepted answer yet.
If you want to see at what value of n one algorithm starts to perform better than another, you should set the algorithm times equal to each other and solve for n.
For Example:
X = Z
10^-8 n^2 = 10^-5
n^2 = 10^3
n = sqrt(10^3)
let c = sqrt(10^3)
So when comparing X and Z, choose X if n is less than c, and Z if n is greater than c. The same can be repeated for the other two pairs.
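Putting numbers on that for the three algorithms in the question (a quick sketch; the variable names are mine):

    import math

    # Running-time models from the question, in seconds (illustrative sketch).
    X = lambda n: 1e-8 * n * n
    Y = lambda n: 1e-6 * n * math.log2(n)
    Z = lambda n: 1e-5

    # X = Z  =>  10^-8 * n^2 = 10^-5  =>  n = sqrt(1000) ~ 31.6,
    # so X wins below that size and Z wins above it.
    print(math.sqrt(1e-5 / 1e-8))

    # X = Y has no closed-form solution, so just scan for the crossover.
    n = 2
    while X(n) <= Y(n):
        n += 1
    print(n)    # first size at which the quadratic algorithm X becomes slower than Y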
Assuming some algorithm has a polynomial time complexity T(n), is it possible for any of the terms to have a negative coefficient? Intuitively, the answer seems like an obvious "No" since there is no part of any algorithm that reduces the existing amount of time taken by previous steps but I want to be certain.
When talking about polynomial complexity, only the term with the highest degree counts.
But I think you can have T(n) = n*n - n = n*(n-1). The n-1 would represent something you don't do on the first or last iteration.
Anyway, the complexity would still be n*n.
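As a concrete (made-up) example of where an exact count like n*(n-1) comes from: a double loop that skips its body once per outer pass.

    # The exact step count here is n*(n-1) = n^2 - n (illustrative only).
    def count_steps(n):
        steps = 0
        for i in range(n):
            for j in range(n):
                if j == i:        # the one thing you "don't do" each pass
                    continue
                steps += 1
        return steps

    print(count_steps(10))   # 90 == 10 * 9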
It is possible for an algorithm to have a negative coefficient in its time complexity, but overall the algorithm will have some positive time complexity. As an example from Wikipedia, take the function f(x)=6x^4-2x^3+5. They solve for the complexity of O(x^4) as follows:
|f(x)| <= M * x^4
for some suitable choice of x0 and M and for all x > x0. To prove this, let x0 = 1 and M = 13. Then, for all x > x0:
|6x^4 - 2x^3 + 5| <= 6x^4 + 2x^3 + 5 <= 6x^4 + 2x^4 + 5x^4 = 13x^4
So,
|6x^4 - 2x^3 + 5| <= 13x^4, i.e. f(x) = O(x^4).
That is, even if there are negative coefficients in the original equation, there is still some positive overall time complexity based on the term with the highest order of power.
What about lower bounds? By definition, f(n) is Omega(g(n)) if, as n goes to infinity, there exist some constant k > 0 and some n0 such that the following holds for all n > n0:
f(n) >= k * g(n)
Let's guess that the above function f(x) is also Omega(x^4). This means that:
6x^4 - 2x^3 + 5 >= kx^4
Solving for k:
k <= (6x^4 - 2x^3 + 5)/(x^4)
k <= 6 - 2x^-1 + 5x^-4
The term (2/x) approaches 0, as does (5/x^4), so we can choose k = 2 with, say, x0 = 30. To show that this holds, we show that:
6x^4 - 2x^3 + 5 >= 2x^4 where x > 30
4x^4 - 2x^3 + 5 >= 0
Which holds. So f(x) is Omega(x^4), and we can also conclude that we have found a tight bound such that f(x) is Theta(x^4).
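A quick numerical sanity check of both bounds over a sample range (my sketch; the constants M = 13, k = 2 and x0 = 1, 30 are the ones derived above):

    # Spot-check the upper and lower bounds on a finite range (not a proof).
    f = lambda x: 6 * x**4 - 2 * x**3 + 5

    upper_ok = all(abs(f(x)) <= 13 * x**4 for x in range(2, 10000))   # M = 13, x0 = 1
    lower_ok = all(f(x) >= 2 * x**4 for x in range(31, 10000))        # k = 2,  x0 = 30
    print(upper_ok, lower_ok)   # True True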
Why does this work, even though the coefficient was negative? For both Big O and Big Omega notation, we are looking for a bound such that after some point one function dominates another. That is, as these graphs illustrate:
[Graph: the Big O bound (source: Alistair.Rendell at cs.anu.edu.au)]
[Graph: the Big Omega bound (source: Alistair.Rendell at cs.anu.edu.au)]
Thinking about our original f(x), 6x^4 grows faster than 2x^4 (our k*g(x) function). After some point, the 6x^4 term outstrips 2x^4 in such a way that f(x) is always greater than 2x^4. Despite the negative coefficient, k*g(x) is clearly a lower bound of f(x).
Now, is this always true for any polynomial function with any negative coefficient--that a function f(x) with any coefficients will be bound by its highest degree polynomials? No. If the term with the highest degree has the negative coefficient, then the bounds aren't quite the same. Take f(x) = -2x^2. We can show that f(x) = O(x^2):
-2x^2 <= cx^2
-2 <= c
Which can be satisfied by any c>0 (as c is by definition a positive constant). However, if we try to do the same for lower bound:
-2x^2 >= cx^2
-2 >= c
Then we can't find a suitable c, because c must by definition be a positive constant.
If f(x) = O(g(x)) as x -> infinity, then
A. g is the upper bound of f
B. f is the upper bound of g.
C. g is the lower bound of f.
D. f is the lower bound of g.
Can someone please tell me which one they think it is, and why?
The real answer is that none of these is correct.
The definition of big-O notation is that:
|f(x)| <= k|g(x)|
for all x > x0, for some x0 and k.
In specific cases, k might be less than or equal to 1, in which case it would be correct to say that "|g| is an upper bound of |f|". But in general, that's not true.
Answer
g is the upper bound of f
When x goes towards infinity, worst case scenario is O(g(x)). That means actual exec time can be lower than g(x), but never worse than g(x).
EDIT:
As Oli Charlesworth pointed out, that is only true when the constant k <= 1, not in general. Please look at his answer for the general case.
The question checks your understanding of the basics of asymptotic algebra, or big-oh notation. In
f(x) = O(g(x)) as x approaches infinity
this says that when you feed the function f a value x, the value f computes from x is on the order of the value returned by another function, g(x). As an example, suppose
f(x) = 2x
g(x) = x
then the value g(x) returns when fed x is of the same order as that f(x) returns for x. Specifically, the two functions return a value that is in the order of x; the functions are both linear. It doesn't matter whether f(x) is 2x or ½x; for any constant factor at all f(x) will return a value that is in the order of x. This is because big-oh notation is about ignoring constant factors. Constant factors don't grow as x grows and so we assume they don't matter nearly as much as x does.
We restrict g(x) to a specific set of functions. g(x) can be x, or ln(x), or log(x) and so on and so forth. It may look as if when
f(x) = 2x
g(x) = x
f(x) yields higher values than g(x) and is therefore an upper bound of g(x). But once again, we ignore the constant factor, and we say that the order-of-growth upper bound, which is what big-oh is all about, is that of g(x).
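A tiny sketch of that point (mine): 2x is above x everywhere, but their ratio never grows, and the ratio is all big-oh cares about.

    # 2x exceeds x at every point, yet 2x = O(x): the ratio stays at the constant 2.
    f = lambda x: 2 * x
    g = lambda x: x
    print([f(x) / g(x) for x in (1, 10, 1000, 1000000)])   # [2.0, 2.0, 2.0, 2.0]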
if f(x) = A_n x^n + A_(n-1) x^(n-1) + ... + A_1 x + A_0,
how can you prove f(x) is big Theta(x^n)?
I've thought about it and one could do it by proving that f(x) big O(x^n) and x^n big O(f(x)). I've figured out the proof for the former (using triangle inequality) but could not understand how to do the latter.
Alternatively one could prove f(x) is big omega (x^n).
I've gotten stuck on this question and any hints or clues you could give me would greatly help.
Consider |An x^n + A(n-1) x^(n-1) + ... |/|x^n| as x -> oo.
The expression gets very close to |An| and if An is not zero, then for sufficiently large x, the expression will be at least |An|/2.
You could prove that it is both big O(x^n) and big Omega(x^n).
To prove f(x) is O(x^n), observe that for x >= 1, each of x^0, x^1, ..., x^n satisfies 0 <= x^k <= x^n.
Hence, f(x) <= (n+1) * max(A_0 ... A_n) * x^n
But (n+1) * max(A_0 ... A_n) is a constant with respect to x, so we have our bound[*]
To prove x^n is O(f(x)) is actually quite difficult, since it isn't true unless A_n != 0. But if A_n != 0, we are required to prove:
x^n is O(An x^n + ... + A0 x^0)
By some theorems about limits that I can't be bothered to state, that's true iff
(1/An) x^n is O(x^n + ... + (A0/An) x^0)
which is true iff
(1/An) x^n - ... - (A0/An) x^0 is O(x^n) [**]
But now the LHS is a polynomial of the form which we just proved is O(x^n) in the first part. QED.
In practice, though, what you actually do is prove some lemmas about the big-O complexity of the sum of two functions with known big-O complexities. Then you just observe that all terms on both sides are O(x^n), and you can ignore the rest.
[*] That's a fudge, actually, since what matters is the comparison of the absolute value of the function. But for large enough x, f(x) has the same sign as A_n, so if that's negative we just do a similar inequality the other way around.
I don't think you really need any use of the triangle inequality to "escape" the abs, because polynomial functions are necessarily monotonic outside a certain range (that is, they have only finitely many turning points), and when considering big-O limits we only care about what happens outside a certain range.
[**] Another fudge, really I should have written the limit constant M on the RHS, and included that when taking terms across to the LHS. OK, so this is only a sketch of a proof.
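For intuition only, here is a numerical sketch (with made-up coefficients, including a negative one) showing such a polynomial being sandwiched between two constant multiples of x^n for large x; it illustrates the Theta(x^n) claim but is of course not a proof.

    # Sandwich p(x) = 3x^3 - 50x^2 + 7x - 9 between c1*x^3 and c2*x^3 for large x.
    p = lambda x: 3 * x**3 - 50 * x**2 + 7 * x - 9
    c1, c2 = 1, 4       # candidate lower/upper constants
    x0 = 30             # chosen large enough that the x^3 term dominates
    print(all(c1 * x**3 <= p(x) <= c2 * x**3 for x in range(x0, 10000)))   # True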