Let's say I have a function f(x) defined over a given range [a,b] for which f(x) > 0. I want to maximize f, but my algorithm can only minimize a given function.
Given the premises, is there any difference between minimizing -f(x) and minimizing 1/f(x)?
To me, given that f(x) is always positive, there is no difference at all: the global maximum of f over [a,b] becomes the global minimum of the transformed function in either case.
Clarification: I use a standard genetic algorithm for the process. My main concern is how it explores the space depending on the function used, but so far there seems to be no difference at all.
As mentioned in the comments, you could run into numerical issues. For example, if f(x) takes on very large values over [a,b], you could end up with rounding errors when using 1/f(x) that you would not get with -f(x). I would stick with minimizing -f(x).
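For a concrete (if contrived) illustration of the precision loss, here are two made-up objective values sitting at the edge of double precision; the exact numbers are chosen only so the effect shows up in a couple of lines of Python:

    # Two distinct candidate objective values (hypothetical).
    f1 = float(2**53 - 1)
    f2 = float(2**53 - 2)

    print(f1 == f2)              # False: -f keeps the two candidates distinguishable
    print(-f1 < -f2)             # True:  the larger f correctly gets the smaller -f
    print(1.0 / f1 == 1.0 / f2)  # True:  both reciprocals round to the same double,
                                 #        so minimizing 1/f cannot rank them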
Related
I've got a question about how best to classify the performance of an approximate algorithm. I'm trying to find the 'correct' value of a graph problem instance whose cost function combines an objective term and a penalty term. I've configured my method so that the optimum solution of the problem has the highest value: fulfilling the objective criterion adds to the cost function value, and the penalty subtracts from it. This means that some poor solutions have negative values.
Typically the performance of an approximate algorithm is measured as:
r = found_cost_function_value / best_possible_solution
I'm a little confused about where to set my baseline of 0, however. Should I translate my objective values so that they are all positive, i.e. add |most_negative_solution_cost_function_value| to all my cost function values and to best_possible_solution? Or is it best practice to calculate r on the values as found?
EDIT: This is an optimisation problem. Each instance can be assigned an objective value. In my setup I've got a bit of an issue in that I found it best for the execution of the algorithm to have the penalty terms much larger than my reward terms, so I have a distribution of solutions with a small head of positive values and a long tail of grossly negative solutions, which skews things if I force everything to be between 0 < r < 1.
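To make the skew concrete, here is a toy calculation with invented cost values (found value 8, best 10, worst -50):

    found_value = 8.0                # hypothetical cost function value of my solution
    best_value = 10.0                # hypothetical optimum
    most_negative = -50.0            # worst value in the long negative tail

    print(found_value / best_value)                        # 0.8, i.e. r as found

    shift = abs(most_negative)                             # translate everything to be positive
    print((found_value + shift) / (best_value + shift))    # ~0.967

The larger the negative tail, the bigger the shift and the closer every shifted ratio gets to 1, which is exactly the compression I'm worried about.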
I need a robust integration algorithm for f(x)exp(-x) between x=0 and infinity, with f(x) a positive, differentiable function.
I do not know the array x a priori (it's an intermediate output of my routine). The x array is typically ~log-equispaced, but highly irregular.
Currently I'm using the Simpson algorithm, but my problem is that often the domain is highly undersampled by the x array, which produces unrealistic values for the integral.
On each run of my code I need to do this integration thousands of times (each with a different set of x values), so I need to find an efficient and robust way to integrate this function.
More details:
The x array can have between 2 and N points (N known). The first value is always x[0] = 0.0. The last point is always greater than a tunable threshold x_max (chosen such that exp(-x_max) is approximately 0). I only know the values of f at the points x[i] (though the function itself is smooth).
My first idea was to do a Laguerre-Gauss quadrature integration. However, this algorithm seems to be highly unreliable when one does not use the optimal quadrature points.
My current idea is to add a set of auxiliary points, interpolating f, such that the Simpson algorithm becomes more stable. If I do this, is there an optimal selection of auxiliary points?
I'd appreciate any advice,
Thanks.
Set t=1-exp(-x), then dt = exp(-x) dx and the integral value is equal to
integral[ f(-log(1-t)) , t=0..1 ]
which you can evaluate with the standard Simpson formula and hopefully get good results.
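A minimal sketch of that substitution in Python, assuming a recent SciPy and that only the sampled values f(x[i]) are available (the function name is mine); the existing samples map directly onto the t grid, so no extra evaluations of f are needed:

    import numpy as np
    from scipy.integrate import simpson

    def integrate_f_exp(x, fx):
        """Approximate integral of f(x)*exp(-x) over [0, inf) from samples fx = f(x[i])."""
        t = -np.expm1(-x)            # t = 1 - exp(-x), computed without cancellation
        return simpson(fx, x=t)      # Simpson's rule on the (irregular) transformed grid

    # Made-up check where the exact answer is known: f(x) = x gives integral 1.
    x = np.concatenate(([0.0], np.logspace(-3, 1.5, 40)))   # irregular, roughly log-spaced
    print(integrate_f_exp(x, x))     # close to 1

The tail beyond x[-1] is simply dropped, which is consistent with exp(-x_max) being negligibly small.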
Note that piecewise linear interpolation will always result in an order-2 error for the integral, as the result amounts to the trapezoidal rule even if the method was Simpson's. For better errors with the Simpson method you will need higher interpolation degrees, ideally cubic splines. Cubic Bezier polynomials with estimated derivatives to compute the control points could be a fast compromise.
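A variant of that note, assuming SciPy again: instead of adding auxiliary points and re-running Simpson, fit a cubic spline to the samples on the transformed t grid and integrate the spline exactly (the helper name is mine):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def integrate_f_exp_spline(x, fx):
        """Cubic-spline version of the same integral, on the t = 1 - exp(-x) grid."""
        t = -np.expm1(-x)
        spline = CubicSpline(t, fx)              # requires strictly increasing t
        return spline.integrate(t[0], t[-1])     # exact integral of the piecewise cubic

    x = np.concatenate(([0.0], np.logspace(-3, 1.5, 40)))
    print(integrate_f_exp_spline(x, x))          # again close to 1 for f(x) = x

One caveat: for x beyond roughly 37, 1 - exp(-x) rounds to exactly 1.0 in double precision, so duplicate t values would have to be dropped before building the spline (the same applies to the Simpson version above); they contribute nothing to the integral anyway.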
I don't mean a function that generates random numbers, but an algorithm to generate a random function
"High dimension" means the function is multi-variable, e.g. a 100-dim function has 100 different variables.
Let's say the domain is [0,1], we need to generate a function f:[0,1]^n->[0,1]. This function is chosen from a certain class of functions, so that the probability of choosing any of these functions is the same.
(This class of functions can be either all continuous, or K-order derivative, whichever is convenient for the algorithm.)
Since the functions on a closed interval domain are uncountably infinite, we only require the algorithm to be pseudo-random.
Is there a polynomial time algorithm to solve this problem?
I just want to add a possible algorithm to the question (though it is not feasible due to its exponential time complexity). The algorithm was proposed by the friend who actually brought up this question in the first place:
The algorithm can be described as follows. Assume the dimension is d = 1, for example, and consider smooth functions on the interval I = [a, b]. First, we split the domain [a, b] into N small intervals. For each interval I_i, we generate a random number f_i from some specific distribution (Gaussian or uniform). Finally, we interpolate the series (a_i, f_i), where a_i is a characteristic point of I_i (e.g., we can choose a_i as the midpoint of I_i). After interpolation, we obtain a smooth curve, which can be regarded as a one-dimensional random function living in the function space C^m[a, b] (where m depends on the interpolation algorithm we choose).
This is just to say that the algorithm does not need to be that formal and rigorous; it simply has to provide something that works.
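For concreteness, here is a rough sketch of that 1-D construction (the interpolation method, distribution, and names are arbitrary choices on my part):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def random_function_1d(a, b, n_intervals, seed=None):
        """Random values at the midpoints of N subintervals of [a, b],
        joined by a cubic spline, so the result lives in C^2[a, b]."""
        rng = np.random.default_rng(seed)
        edges = np.linspace(a, b, n_intervals + 1)
        mids = 0.5 * (edges[:-1] + edges[1:])             # characteristic points a_i
        values = rng.uniform(0.0, 1.0, size=n_intervals)  # random f_i
        return CubicSpline(mids, values)                  # note: may overshoot [0, 1] slightly

    f = random_function_1d(0.0, 1.0, 20, seed=42)
    print(f(0.37))    # the "random function" can now be evaluated anywhere in [0, 1]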
So if I get it right, you need a function returning a scalar from a vector.
The easiest way I see is to use a dot product.
For example, let n be the dimensionality you need,
so create a random vector a[n] containing random coefficients in the range <0,1>
such that the sum of all coefficients is 1:
create float a[n]
fill it with positive random numbers (no zeros)
compute the sum of a[i]
divide a[n] by this sum
now the function y=f(x[n]) is simply
y=dot(a[n],x[n])=a[0]*x[0]+a[1]*x[1]+...+a[n-1]*x[n-1]
if I didn't miss something the target range should be <0,1>
if x==(0,0,0,..0) then y=0;
if x==(1,1,1,..1) then y=1;
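A short runnable sketch of this construction (NumPy assumed, names are mine):

    import numpy as np

    def make_random_linear_function(n, seed=None):
        """Random f: [0,1]^n -> [0,1] built as a normalized dot product, as described above."""
        rng = np.random.default_rng(seed)
        a = rng.uniform(1e-12, 1.0, size=n)   # positive coefficients, no zeros
        a /= a.sum()                          # coefficients now sum to 1
        return lambda x: float(np.dot(a, x))

    f = make_random_linear_function(100, seed=0)
    print(f(np.zeros(100)))   # 0.0
    print(f(np.ones(100)))    # 1.0 (up to rounding)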
If you need something more complex, use a higher-order polynomial,
something like y=dot(a0[n],x[n])*dot(a1[n],x[n]^2)*dot(a2[n],x[n]^3)...
where x[n]^2 means (x[0]*x[0],x[1]*x[1],...)
Both approaches result in a function with the same "direction":
if any x[i] rises, then y rises too.
If you want to change that, then you have to allow negative values in a[] as well,
but to make that work you need to add some offset to y, shifting it away from negative values,
and the a[] normalization process will be a bit more complex,
because you need to find the min and max values.
An easier option is to add a random flag vector m[n] to the process:
m[i] flags whether 1-x[i] should be used instead of x[i].
This way everything above stays as is.
You can create more types of mapping to make the result even more varied.
This might not only be hard, but impossible if you actually want to be able to generate every continuous function.
For the one-dimensional case you might be able to create a useful approximation by looking into the Faber-Schauder system (also see wiki). This gives you a Schauder basis for the continuous functions on an interval. This kind of basis only covers the whole vector space if you include infinite linear combinations of basis vectors. Thus you can create some random functions by building random finite linear combinations from this basis, but in general you won't be able to create functions that are actually represented by an infinite number of basis vectors this way.
Edit in response to your update:
It seems like choosing a random polynomial function of order K (for the class of K-times differentiable functions) might be sufficient for you, since any of these functions can be approximated (around a given point) by such a polynomial (see Taylor's theorem). Choosing a random polynomial function is easy, since you can just pick K+1 random real numbers as the coefficients of a degree-K polynomial. (Note that this will, for example, not return functions similar to abs(x).)
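A minimal sketch of that suggestion for the one-dimensional case (the degree and coefficient distribution are arbitrary choices; note the output is not automatically confined to [0,1], so a rescaling step would still be needed):

    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(0)
    K = 5                                                # degree / differentiability class
    p = Polynomial(rng.uniform(-1.0, 1.0, size=K + 1))   # K+1 random coefficients

    print(p(0.3))    # evaluate the random polynomial somewhere in [0, 1]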
Is there an algorithm to check if a given (possibly nonlinear) function f is always positive?
The idea that I currently have is to find the roots of the function (using the Newton-Raphson algorithm or similar techniques, see http://en.wikipedia.org/wiki/Root-finding_algorithm) and check the derivatives, or to find the minimum of f, but these don't seem to be the best solutions to this problem; there are also a lot of convergence issues with root-finding algorithms.
For example, in Maple the function verify can do this, but I need to implement it in my own program.
Maple Help on verify: http://www.maplesoft.com/support/help/Maple/view.aspx?path=verify/function_shells
Maple example:
assume(x,'real');
verify(x^2+1,0,'greater_than' ); --> returns true, since for every x we have x^2+1 > 0
[edit] Some background on the question:
The function $f$ is the right-hand side of a nonlinear differential model for a circuit. A nonlinear circuit can be modeled as a set of ordinary differential equations by applying modified nodal analysis (MNA). For the sake of simplicity, let's consider only systems with 1 dimension, so $x' = f(x)$ where $f$ describes the circuit; for example, $f$ can be $f(x) = 10x - 100x^2 + 200x^3 - 300x^4 + 100x^5$ (a model for a nonlinear tunnel diode) or $f(x) = 10 - 2\sin(4x) + 3x$ (a model for a Josephson junction).
$x$ is bounded and $f$ is only defined on an interval $[a,b] \subset \mathbb{R}$. $f$ is continuous.
I can also make the assumption that $f$ is Lipschitz with Lipschitz constant $L > 0$, but I don't want to unless I have to.
If I understand your problem correctly, it boils down to counting the number of (real) roots in an interval without necessarily identifying them. In fact, you don't even need to get the exact number, just whether or not it's equal to zero.
If your function is a polynomial, I think that Sturm's theorem may be applicable. The Wikipedia article claims two other procedures are preferred, so you might want to check those out, too. I'm not sure if Descartes' rule of signs works on an interval, but Budan's theorem does appear to.
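For the polynomial case, here is a rough sketch of the Sturm-sequence root count (names and tolerances are mine; a serious implementation needs more care with near-zero coefficients and multiple roots):

    import numpy as np
    from numpy.polynomial import Polynomial

    def sturm_chain(p):
        """Sturm chain: p, p', then negated remainders of successive polynomial divisions."""
        chain = [p, p.deriv()]
        while chain[-1].degree() > 0:
            rem = -divmod(chain[-2], chain[-1])[1]
            if np.allclose(rem.coef, 0.0):     # common factor => p has a multiple root
                break
            chain.append(rem)
        return chain

    def sign_changes(chain, x, tol=1e-12):
        vals = [q(x) for q in chain]
        signs = [np.sign(v) for v in vals if abs(v) > tol]
        return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

    def count_real_roots(p, a, b):
        """Number of distinct real roots of p in (a, b], assuming p is squarefree."""
        chain = sturm_chain(p)
        return sign_changes(chain, a) - sign_changes(chain, b)

    # Example from the question: x^2 + 1 has no real roots and is positive at a,
    # hence it is positive on the whole interval.
    p = Polynomial([1.0, 0.0, 1.0])
    a, b = -10.0, 10.0
    print(count_real_roots(p, a, b) == 0 and p(a) > 0)   # True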
Let P(x) denote the polynomial in question. The least fixed point (LFP) of P is the lowest value of x such that x = P(x). The polynomial has real coefficients. There is no guarantee in general that an LFP will exist, although one is guaranteed to exist if the degree is odd and ≥ 3. I know of an efficient solution if the degree is 3: x = P(x), thus 0 = P(x) - x; there is a closed-form cubic formula, so solving for x is somewhat trivial and can be hardcoded. Degrees 2 and 1 are similarly easy. It's the more complicated cases that I'm having trouble with, since I can't seem to come up with a good algorithm for arbitrary degree.
EDIT:
I'm only considering real fixed points and taking the least among them, not necessarily the fixed point with the least absolute value.
Just solve f(x) = P(x) - x using your favorite numerical method. For example, you could iterate
x_{n + 1} = x_n - (P(x_n) - x_n) / (P'(x_n) - 1).
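A quick sketch of that iteration, assuming P is available as a NumPy Polynomial (the starting point and stopping rule are arbitrary choices, and the step assumes P'(x) != 1 along the way):

    from numpy.polynomial import Polynomial

    def fixed_point_newton(P, x0, tol=1e-12, max_iter=100):
        """Newton iteration for a root of f(x) = P(x) - x."""
        dP = P.deriv()
        x = x0
        for _ in range(max_iter):
            step = (P(x) - x) / (dP(x) - 1.0)
            x -= step
            if abs(step) < tol:
                return x
        return x    # may not have converged; caller should check

    # Example: P(x) = x^2 has fixed points 0 and 1; starting from 2 this converges to 1,
    # which is *not* the least fixed point -- hence the remark below about needing all real roots.
    P = Polynomial([0.0, 0.0, 1.0])
    print(fixed_point_newton(P, 2.0))   # ~1.0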
You won't find a closed-form formula in general because there is no closed-form formula for quintic and higher-degree polynomials. Thus, for quintic and higher degrees you have to use a numerical method of some sort.
Since you want the least fixed point, you can't get away without finding all real roots of P(x) - x and selecting the smallest.
Finding all the roots of a polynomial is a tricky subject. If you have a black box routine, then by all means use it. Otherwise, consider the following trick:
Form M the companion matrix of P(x) - x
Find all eigenvalues of M
but this requires that you have access to a routine for finding eigenvalues (which is another tricky problem, but there are plenty of good libraries).
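For instance, NumPy's polynomial root finder takes exactly this companion-matrix eigenvalue route internally, so a sketch of the whole least-fixed-point computation can be quite short (the tolerance for deciding what counts as a real root is my own choice):

    import numpy as np
    from numpy.polynomial import Polynomial

    def least_fixed_point(P, imag_tol=1e-9):
        """Smallest real root of P(x) - x, or None if there is no real fixed point."""
        f = P - Polynomial([0.0, 1.0])     # f(x) = P(x) - x
        roots = f.roots()                  # eigenvalues of the companion matrix of f
        real = roots.real[np.abs(roots.imag) < imag_tol]
        return real.min() if real.size else None

    P = Polynomial([0.0, 0.0, 1.0])        # P(x) = x^2, fixed points 0 and 1
    print(least_fixed_point(P))            # ~0.0, the least one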
Otherwise, you can implement the Jenkins-Traub algorithm, which is a highly non-trivial piece of code.
I don't really recommend finding a zero (with e.g. Newton's method) and deflating until you reach degree one: it is very unstable if not done properly, and you'll lose a lot of accuracy (and it is very difficult to handle multiple roots with it). The proper way to do it is in fact the above-mentioned Jenkins-Traub algorithm.
This problem is trying to find the "least" (here I'm not sure if you mean in magnitude or actually the smallest, which could be the most negative) root of a polynomial. There is no closed-form solution for polynomials of high degree, but there are myriad numerical approaches to finding roots.
As is often the case, Wikipedia is a good place to begin your search.
If you want to find the smallest root, then you can use the rule of signs to pin down the interval where it exists and then use some numerical method to find roots in that interval.