Looking for a particular algorithm for numerical integration

Consider the following differential equation
f(x) = g'(x)
I have built code that spits out values of the function f(x) for the variable x, where x runs from 0 to very large values.
Now I'm looking for a scheme that will analyse these values of f(x) in order to determine g(x). Does anybody have any suggestions? The main problem is that if I simply calculate g(x) = Integral(f(x) dx) numerically, I end up with just a number (i.e. the area under the graph), but I need to know the actual function g(x).
I've cross-posted this question here: https://math.stackexchange.com/questions/1326854/looking-for-a-particular-algorithm-for-numerical-integration

Numerical integration always returns just a number.
If you do not want a number but a function instead,
then you cannot use numerical integration for this task directly.
Polynomial approach
You can use any approximation/interpolation technique to obtain a polynomial representing f(x),
then integrate it as a standard polynomial (just change the exponent and multiplication constant of each term), as in the sketch below.
This is not suited for transcendental, periodic, or complex-shaped functions.
The most common approaches are the use of Lagrange interpolation or Taylor series;
for both you need a routine capable of returning the value of f(x) for any given x.
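A minimal sketch of the term-by-term integration step, assuming f(x) has already been approximated by a polynomial stored as ascending coefficients (the names and the example polynomial are illustrative only):

#include <vector>
#include <cstdio>

// Antiderivative of a polynomial given by ascending coefficients:
// if f(x) = sum c[k]*x^k, then g(x) = sum (c[k]/(k+1))*x^(k+1),
// with the constant of integration chosen so that g(0) = 0.
std::vector<double> integrate_poly(const std::vector<double>& c)
{
    std::vector<double> g(c.size() + 1, 0.0);
    for (size_t k = 0; k < c.size(); ++k)
        g[k + 1] = c[k] / double(k + 1);
    return g;
}

// Horner evaluation of a polynomial in ascending-coefficient form.
double eval_poly(const std::vector<double>& p, double x)
{
    double y = 0.0;
    for (size_t k = p.size(); k-- > 0; )
        y = y * x + p[k];
    return y;
}

int main()
{
    std::vector<double> f = { 1.0, 0.0, 3.0 };        // f(x) = 1 + 3x^2
    std::vector<double> g = integrate_poly(f);        // g(x) = x + x^3
    std::printf("g(2) = %g\n", eval_poly(g, 2.0));    // prints g(2) = 10
}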
Algebraic integration
This is not solvable for arbitrary f(x), because we do not know how to integrate everything,
so you would need to program all the rules of integration,
like integration by parts, substitutions, Z or Laplace transforms,
and write a solver within a string/symbol paradigm.
That is a huge amount of work;
maybe there are libs or DLLs that can do that,
from programs like Derive or Matlab ...
[edit1] As the function f(x) is just a table of the form
double f[][2] = { {x1,f(x1)}, {x2,f(x2)}, ..., {xn,f(xn)} };
you can create the same kind of table for g(x) = Integral(f(x)) over the interval <0,x>
so:
g(x1)=f(x1)*(x1-0)
g(x2)=f(x1)*(x1-0)+f(x2)*(x2-x1)
g(x3)=f(x1)*(x1-0)+f(x2)*(x2-x1)+f(x3)*(x3-x2)
...
this is just a table, so if you want the actual function you need to convert it to a polynomial via Lagrange or any other interpolation...
you can also use DFT/FFT and represent the function as a set of sine waves
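A sketch of the table-based integration from [edit1], assuming the samples are stored in two parallel arrays x[i], f[i] (an illustrative layout; the 2D array above would work the same way):

#include <vector>

// Build a table g[i] ~ Integral of f over <0, x[i]> from samples (x[i], f[i]),
// using the same rectangle sums as above: g[i] = g[i-1] + f[i]*(x[i] - x[i-1]).
// Averaging f[i-1] and f[i] instead would give the slightly more accurate trapezoid rule.
std::vector<double> integrate_table(const std::vector<double>& x,
                                    const std::vector<double>& f)
{
    std::vector<double> g(x.size(), 0.0);
    double prev_x = 0.0, sum = 0.0;
    for (size_t i = 0; i < x.size(); ++i)
    {
        sum += f[i] * (x[i] - prev_x);
        prev_x = x[i];
        g[i] = sum;
    }
    return g;
}

The resulting table (x[i], g[i]) can then be fed to the same interpolation step (Lagrange, splines, DFT) to obtain an explicit expression for g(x).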

Related

optimize integral f(x)exp(-x) from x=0,infinity

I need a robust integration algorithm for f(x)exp(-x) between x=0 and infinity, with f(x) a positive, differentiable function.
I do not know the array x a priori (it's an intermediate output of my routine). The x array is typically ~log-equispaced, but highly irregular.
Currently I'm using the Simpson algorithm, but my problem is that the domain is often highly undersampled by the x array, which produces unrealistic values for the integral.
On each run of my code I need to do this integration thousands of times (each with a different set of x values), so I need to find an efficient and robust way to integrate this function.
More details:
The x array can have between 2 and N points (N known). The first value is always x[0] = 0.0. The last point is always a value greater than a tunable threshold x_max (such that exp(-x_max) approx 0). I only know the values of f at the points x[i] (though f is a smooth function).
My first idea was to do a Laguerre-Gauss quadrature integration. However, this algorithm seems to be highly unreliable when one does not use the optimal quadrature points.
My current idea is to add a set of auxiliary points, interpolating f, such that the Simpson algorithm becomes more stable. If I do this, is there an optimal selection of auxiliary points?
I'd appreciate any advice,
Thanks.
Set t=1-exp(-x), then dt = exp(-x) dx and the integral value is equal to
integral[ f(-log(1-t)) , t=0..1 ]
which you can evaluate with the standard Simpson formula and hopefully get good results.
Note that piecewise linear interpolation will always result in an order 2 error for the integral, as the result amounts to a trapezoid formula even if the method was Simpson. For better errors in the Simpson method you will need higher interpolation degrees, ideally cubic splines. Cubic Bezier polynomials with estimated derivatives to compute the control points could be a fast compromise.
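A sketch of that substitution combined with a composite Simpson rule, assuming f is available as a callable (in the question's setting this would be an interpolant, e.g. a cubic spline, built from the sampled points) and that the t = 1 endpoint is handled with a large cutoff x_max; the cutoff, the default point count and the names are assumptions of this sketch:

#include <cmath>
#include <functional>

// Approximates Integral_0^inf f(x)*exp(-x) dx via the substitution t = 1 - exp(-x),
// which turns it into Integral_0^1 f(-log(1-t)) dt, then applies composite Simpson
// with n (even) subintervals on [0,1].
double integrate_exp_weighted(const std::function<double(double)>& f,
                              int n = 200, double x_max = 700.0)
{
    auto g = [&](double t) {
        double x = (t < 1.0) ? -std::log(1.0 - t) : x_max;  // map t back to x, clamp at t = 1
        if (x > x_max) x = x_max;                            // stay inside the sampled range
        return f(x);
    };
    double h = 1.0 / n;
    double sum = g(0.0) + g(1.0);
    for (int i = 1; i < n; ++i)
        sum += g(i * h) * ((i % 2) ? 4.0 : 2.0);
    return sum * h / 3.0;
}

Because the substitution absorbs the exp(-x) weight, equally spaced t points automatically concentrate the x evaluations near 0, where the weight is largest.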

Maximizing a function by minimizing its inverse

Let's say I have a function f(x) defined over a given range [a,b] for which f(x) > 0. I want to maximize f, but my algorithm can only minimize a given function.
Given the premises, is there any difference in minimizing -f(x) or 1/f(x) ?
To me, given that f(x) is always positive, there is no difference at all, as the global maximum of f becomes the global minimum on [a,b] in either case.
Clarification: I use a standard genetic algorithm for the process. My principal concern lies in how it explores the space depending on the function used, but so far there seems to be no difference at all.
As mentioned in the comments, you could run into numerical issues. For example, if f(x) takes on very large values over [a,b], you could end up with rounding errors if you use 1/f(x) as opposed to -f(x). I would stick with minimizing -f(x).

What algorithm is used to implement HermiteH function (mathematica)

I need to port a numerical simulation written in Wolfram Mathematica to another language. The part that is giving me trouble is that the code is calling the HermiteH function with a non-integral order (the parameter n is a fractional number, not an integer), which I'm guessing is some extension to Hermite polynomials. What algorithm can be used to implement this function and what does it actually calculate when given a non-integral order?
(I do know how to implement Hermite polynomials for integral orders.)
http://www.maplesoft.com/support/help/maple/view.aspx?path=HermiteH
For n other than a non-negative integer, the analytic extension of the Hermite polynomial is given by
HermiteH(n, x) = 2^n * sqrt(Pi) * ( KummerM(-n/2, 1/2, x^2) / GAMMA((1-n)/2) - 2*x * KummerM((1-n)/2, 3/2, x^2) / GAMMA(-n/2) )
where KummerM is Kummer's function (of the first kind) M and GAMMA is the gamma function Γ.
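A sketch of that formula in code, assuming a plain power-series evaluation of KummerM; this is adequate for moderate |x| and for non-integral n (the question's case), while integer orders hit gamma-function poles and large arguments would need a more careful evaluation:

#include <cmath>

// Kummer's confluent hypergeometric function M(a, b, z) = sum_k (a)_k/(b)_k * z^k / k!,
// evaluated by its power series (converges for all z, but cancels badly for large |z|).
double kummerM(double a, double b, double z)
{
    double term = 1.0, sum = 1.0;
    for (int k = 0; k < 500; ++k)
    {
        term *= (a + k) * z / ((b + k) * (k + 1));
        sum += term;
        if (std::fabs(term) < 1e-16 * std::fabs(sum)) break;
    }
    return sum;
}

// Analytic extension of the Hermite polynomial H_n(x) to non-integral n,
// following the formula quoted above.
double hermiteH(double n, double x)
{
    const double sqrt_pi = std::sqrt(std::acos(-1.0));
    double t1 = kummerM(-n / 2.0,        0.5, x * x) / std::tgamma((1.0 - n) / 2.0);
    double t2 = kummerM((1.0 - n) / 2.0, 1.5, x * x) / std::tgamma(-n / 2.0);
    return std::pow(2.0, n) * sqrt_pi * (t1 - 2.0 * x * t2);
}

As a sanity check, for n close to an integer the result should approach the classical polynomial values (e.g. hermiteH(1.0 + 1e-9, x) should be close to 2x).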

algorithm for the inverse of a 2d bijective function

I want to write a function f_1(a,b) = (x,y) that approximates the inverse of f, where f(x,y) = (a,b) is a bijective function (over a specific range)
Any suggestions on how to get an efficient numerical approximation?
The programming language used is not important.
Solving f(x,y) = (a,b) for x,y is equivalent to finding the root or minimum of f(x,y) - (a,b) ( = 0), so you can use any of the standard root-finding or optimization algorithms. If you are implementing this yourself, I recommend Coordinate descent because it is probably the simplest algorithm. You could also try Adaptive coordinate descent, although that may be a bit harder to analyze.
If you want to find the inverse over a range, you can either compute the inverse at various points and interpolate with something like a Cubic Spline or solve the above equation whenever you want to evaluate the inverse function. Even if you solve the equation for each evaluation, it may still be helpful to precompute some values so they can be used as initial values for a solver such as Coordinate descent.
Also see Newton's method and the Bisection method
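A minimal sketch of the "solve the equation each time you evaluate the inverse" route, using Newton's method (linked above) with a finite-difference Jacobian; f, the step h, the tolerances and the starting guess are placeholders you would supply:

#include <cmath>
#include <utility>
#include <functional>

using Vec2 = std::pair<double, double>;
using Fun2 = std::function<Vec2(double, double)>;

// Solve f(x, y) = (a, b) by Newton iteration on the residual r = f(x, y) - (a, b),
// with the 2x2 Jacobian estimated by forward differences of step h.
Vec2 invert_at(const Fun2& f, double a, double b,
               double x, double y,            // initial guess
               double h = 1e-6, int max_iter = 50)
{
    for (int it = 0; it < max_iter; ++it)
    {
        Vec2 f0 = f(x, y);
        double rx = f0.first - a, ry = f0.second - b;
        if (std::hypot(rx, ry) < 1e-12) break;           // converged

        Vec2 fx = f(x + h, y), fy = f(x, y + h);
        double j11 = (fx.first  - f0.first ) / h, j12 = (fy.first  - f0.first ) / h;
        double j21 = (fx.second - f0.second) / h, j22 = (fy.second - f0.second) / h;

        double det = j11 * j22 - j12 * j21;
        if (std::fabs(det) < 1e-30) break;               // Jacobian (nearly) singular, give up

        // Newton step: solve J * (dx, dy) = (rx, ry) with the 2x2 inverse, then subtract it.
        x -= ( j22 * rx - j12 * ry) / det;
        y -= (-j21 * rx + j11 * ry) / det;
    }
    return { x, y };
}

Precomputed forward evaluations of f, as suggested above, make good starting guesses and keep the iteration count low.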
There is no 'automatic' solution that will work for any general function. Even in the simpler case of y = f(x) it can be hard to find a suitable starting point. As an example:
y = x^2
has a nice algebraic inverse
x = sqrt(y)
but trying to approximate the sqrt function in the range [0..1] with a polynomial (for instance) sucks badly.
If your range is small enough, and your function well behaved enough, then you might get a fit using 2D splines. If this is going to work, then you should try using independent functions for x and y, i.e. use
y = Y_1(a,b) and x = X_1(a,b)
rather than the more complicated
(x,y) = F_1(a,b)

An algorithm for checking if a nonlinear function f is always positive

Is there an algorithm to check if a given (possibly nonlinear) function f is always positive?
The idea that I currently have is to find the roots of the function (using the Newton-Raphson algorithm or similar techniques, see http://en.wikipedia.org/wiki/Root-finding_algorithm) and check the derivatives, or to find the minimum of f, but these don't seem to be the best solutions to this problem; root-finding algorithms also have a lot of convergence issues.
For example, in Maple the function verify can do this, but I need to implement it in my own program.
Maple Help on verify: http://www.maplesoft.com/support/help/Maple/view.aspx?path=verify/function_shells
Maple example:
assume(x,'real');
verify(x^2+1,0,'greater_than' ); --> returns true, since for every x we have x^2+1 > 0
[edit] Some background on the question:
The function $f$ is the right-hand side of a nonlinear differential model for a circuit. A nonlinear circuit can be modeled as a set of ordinary differential equations by applying modified nodal analysis (MNA); for the sake of simplicity, let's consider only systems with 1 dimension, so $x' = f(x)$ where $f$ describes the circuit. For example, $f$ can be $f(x) = 10x - 100x^2 + 200x^3 - 300x^4 + 100x^5$ (a model for a nonlinear tunnel diode) or $f(x) = 10 - 2\sin(4x) + 3x$ (a model for a Josephson junction).
$x$ is bounded and $f$ is only defined on an interval $[a,b] \subset \mathbb{R}$. $f$ is continuous.
I can also make an assumption that $f$ is Lipschitz with Lipschitz constant L>0, but I don't want to unless I have to.
If I understand your problem correctly, it boils down to counting the number of (real) roots in an interval without necessarily identifying them. In fact, you don't even need to get the exact number, just whether or not it's equal to zero.
If your function is a polynomial, I think that Sturm's theorem may be applicable. The Wikipedia article claims two other procedures are preferred, so you might want to check those out, too. I'm not sure if Descartes' rule of signs works on an interval, but Budan's theorem does appear to.
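For the polynomial circuit models (the sine term in the Josephson-junction model rules this out), a sketch of Sturm's theorem follows: build the chain p, p', -rem(p, p'), ... and count the sign changes of the chain at the two endpoints; the difference is the number of distinct real roots in (a, b]. The ascending-coefficient layout and the tolerances are assumptions of this sketch, and p is assumed square-free with p(a) != 0:

#include <cmath>
#include <vector>

using Poly = std::vector<double>;   // ascending coefficients: p[k] is the coefficient of x^k

static void trim(Poly& p) { while (p.size() > 1 && std::fabs(p.back()) < 1e-12) p.pop_back(); }

static Poly derivative(const Poly& p)
{
    if (p.size() <= 1) return Poly{ 0.0 };
    Poly d(p.size() - 1);
    for (size_t k = 1; k < p.size(); ++k) d[k - 1] = k * p[k];
    return d;
}

static Poly poly_rem(Poly r, const Poly& q)   // remainder of r divided by q
{
    while (r.size() >= q.size())
    {
        double c = r.back() / q.back();
        size_t shift = r.size() - q.size();
        for (size_t k = 0; k < q.size(); ++k) r[k + shift] -= c * q[k];
        r.pop_back();                          // leading term cancels, discard it
        trim(r);
        if (r.empty()) break;
    }
    if (r.empty()) r.push_back(0.0);
    return r;
}

static double eval(const Poly& p, double x)
{
    double y = 0.0;
    for (size_t k = p.size(); k-- > 0; ) y = y * x + p[k];   // Horner scheme
    return y;
}

static int sign_changes(const std::vector<Poly>& chain, double x)
{
    int changes = 0, prev = 0;
    for (const Poly& p : chain)
    {
        double v = eval(p, x);
        int s = (v > 0) - (v < 0);
        if (s != 0 && prev != 0 && s != prev) ++changes;
        if (s != 0) prev = s;
    }
    return changes;
}

// Number of distinct real roots of p in (a, b], by Sturm's theorem.
int count_roots(Poly p, double a, double b)
{
    trim(p);
    std::vector<Poly> chain = { p, derivative(p) };
    while (chain.back().size() > 1 || std::fabs(chain.back()[0]) > 1e-12)
    {
        Poly r = poly_rem(chain[chain.size() - 2], chain.back());
        for (double& c : r) c = -c;            // the Sturm chain uses the negated remainder
        chain.push_back(r);
    }
    chain.pop_back();                          // drop the trailing zero polynomial
    return sign_changes(chain, a) - sign_changes(chain, b);
}

Positivity on [a, b] then amounts to checking p(a) > 0 and count_roots(p, a, b) == 0; for example p = {1.0, 0.0, 1.0} (i.e. x^2 + 1) yields zero roots on any interval.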
