Integration of an oscillating function in Mathematica

I need help with an integral in Mathematica:
I need to calculate the integral of x^(1/4)*BesselJ[-1/4, a*x]*Cos[b*x] with respect to x (a and b are parameters) between 0 and Infinity.
The function is complicated and no analytic antiderivative exists, but when I tried to compute it numerically with NIntegrate it did not converge. However, x^(1/4)*BesselJ[-1/4, a*x] does converge (and can in fact be integrated analytically), so the other one should converge too, and the problem in Mathematica must be some numerical error.
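One standard workaround for slowly converging oscillatory integrals is Longman's method: split the range at break points spaced by the oscillation's half-period and accelerate the alternating partial sums. Below is a minimal Python/SciPy sketch (an illustration, not the questioner's code; the value of a, the number of break points, and the number of averaging passes are arbitrary choices), checked against the simpler integral mentioned in the question, whose closed form is known:

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import jv

a = 1.0  # example parameter value (assumption)

def f(x):
    # Simpler integrand from the question; its exact value is known:
    # integral_0^inf x^(1/4) J_{-1/4}(a x) dx = 2^(1/4) sqrt(pi) / (a^(5/4) Gamma(1/4))
    return x ** 0.25 * jv(-0.25, a * x)

# Longman's method: integrate piece by piece between break points spaced by the
# half-period of the oscillation, then accelerate the (eventually alternating)
# sequence of partial sums by repeated averaging (an Euler-type transformation).
nodes = np.pi * np.arange(1, 81) / a
pieces = [quad(f, 0.0, nodes[0])[0]]
pieces += [quad(f, nodes[k], nodes[k + 1])[0] for k in range(len(nodes) - 1)]
est = np.cumsum(pieces)

for _ in range(20):                      # each pass damps the oscillation further
    est = 0.5 * (est[:-1] + est[1:])

exact = 2 ** 0.25 * math.sqrt(math.pi) / (a ** 1.25 * math.gamma(0.25))
print(est[-1], exact)
```

Within Mathematica itself, NIntegrate's oscillatory strategies (e.g. Method -> "ExtrapolatingOscillatory") implement a similar idea and may be worth trying on the original integrand.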

Related

optimize integral f(x)exp(-x) from x=0,infinity

I need a robust integration algorithm for f(x)exp(-x) between x=0 and infinity, with f(x) a positive, differentiable function.
I do not know the array x a priori (it's an intermediate output of my routine). The x array is typically ~log-equispaced, but highly irregular.
Currently, I'm using the Simpson algorithm, but my problem is that the domain is often highly undersampled by the x array, which produces unrealistic values for the integral.
On each run of my code I need to do this integration thousands of times (each with a different set of x values), so I need to find an efficient and robust way to integrate this function.
More details:
The x array can have between 2 and N points (N known). The first value is always x[0] = 0.0. The last point is always a value greater than a tunable threshold x_max (such that exp(-x_max) ≈ 0). I only know the values of f at the points x[i] (though f itself is a smooth function).
My first idea was to do a Laguerre-Gauss quadrature integration. However, this algorithm seems to be highly unreliable when one does not use the optimal quadrature points.
My current idea is to add a set of auxiliary points, interpolating f, such that the Simpson algorithm becomes more stable. If I do this, is there an optimal selection of auxiliary points?
I'd appreciate any advice,
Thanks.
Set t=1-exp(-x), then dt = exp(-x) dx and the integral value is equal to
integral[ f(-log(1-t)) , t=0..1 ]
which you can evaluate with the standard Simpson formula and hopefully get good results.
Note that piecewise linear interpolation will always result in an order 2 error for the integral, as the result amounts to a trapezoid formula even if the method was Simpson. For better errors in the Simpson method you will need higher interpolation degrees, ideally cubic splines. Cubic Bezier polynomials with estimated derivatives to compute the control points could be a fast compromise.
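The substitution above can be sketched in Python (assuming a recent NumPy/SciPy; the sample points and the choice f(x) = x are illustrative, picked so the exact answer is known):

```python
import numpy as np
from scipy.integrate import simpson
from scipy.interpolate import CubicSpline

# Irregular, roughly log-spaced samples; f(x) = x is chosen so that the
# exact value of integral_0^xmax x*exp(-x) dx = 1 - (xmax+1)*exp(-xmax) is known.
x = np.concatenate(([0.0], np.logspace(-3, np.log10(12.0), 40)))
fvals = x.copy()

spline = CubicSpline(x, fvals)          # smooth interpolant of the samples
t_max = -np.expm1(-x[-1])               # t = 1 - exp(-x) maps [0, xmax] -> [0, t_max]
t = np.linspace(0.0, t_max, 20001)
integrand = spline(-np.log1p(-t))       # f(-log(1 - t)); dt absorbs the exp(-x) factor
result = simpson(integrand, x=t)
print(result)                           # close to 1 - 13*exp(-12)
```

The cubic spline supplies the higher-degree interpolation the answer recommends, so Simpson's rule keeps its full order on the transformed, uniform grid.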

Looking for a particular algorithm for numerical integration

Consider the following differential equation
f(x) = g'(x)
I have built a code that spits out values of the function f(x) for the variable x, where x goes from 0 to very large values.
Now, I'm looking for a scheme that will analyse these values of f(x) in order to determine g(x). Does anybody have any suggestions? The main problem is that if I would calculate g(x) = Integral (f(x) * dx), then I'll end up with just a number (i.e. the area under the graph), but I need to know the actual function of g(x).
I've cross-posted this question here: https://math.stackexchange.com/questions/1326854/looking-for-a-particular-algorithm-for-numerical-integration
Numerical integration always returns just a number.
If you do not want a number but a function instead,
then you cannot use numerical integration for this task directly.
Polynomial approach
You can use any approximation/interpolation technique to obtain a polynomial representing f(x),
then integrate it as a standard polynomial (just change the exponent and the multiplicative constant of each term).
This is not suited for transcendental, periodic or complex-shaped functions.
The most common approaches are Lagrange interpolation or Taylor series.
For both you need a routine capable of returning the value of f(x) for any given x.
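With NumPy's polynomial module, the fit-then-integrate-term-by-term idea can be sketched like this (the choice f = exp(x) and the degree are arbitrary; any smooth sampled function works):

```python
import numpy as np

# Fit a polynomial to samples of f, then integrate it term by term:
# the antiderivative of c*x^k is c/(k+1) * x^(k+1).
x = np.linspace(0.0, 2.0, 50)
f = np.exp(x)                        # example samples; any smooth f works

p = np.polynomial.Polynomial.fit(x, f, deg=6)   # least-squares polynomial fit
g = p.integ()                                   # term-by-term antiderivative

print(g(2.0) - g(0.0))               # close to e^2 - 1
```

The result g is itself a polynomial object, i.e. an actual function, not just a number.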
Algebraic integration
This is not solvable for arbitrary f(x), because we do not know how to integrate everything.
So you would need to program all the rules of integration,
like integration by parts, substitutions, Z or Laplace transforms,
and write a solver within a string/symbol paradigm.
That is a huge amount of work.
Maybe there are libraries or DLLs that can do that,
from programs like Derive or Matlab ...
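Such libraries do exist; for example, in Python the SymPy library does this kind of symbolic integration (a sketch, assuming SymPy is installed; the integrand is an arbitrary example):

```python
import sympy as sp

x = sp.symbols('x')
f = x * sp.exp(-x)        # example integrand (assumption)

g = sp.integrate(f, x)    # antiderivative as a symbolic expression, not a number
print(g)
```

The returned g is a symbolic expression you can evaluate, differentiate, or print as a formula.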
[edit1] As the function f(x) is just a table in form
double f[][2]={ x1,f(x1),x2,f(x2),...xn,f(xn) };
you can create the same table for g(x) = Integral(f(x)) on the interval [0, x]
so:
g(x1)=f(x1)*(x1-0)
g(x2)=f(x1)*(x1-0)+f(x2)*(x2-x1)
g(x3)=f(x1)*(x1-0)+f(x2)*(x2-x1)+f(x3)*(x3-x2)
...
This is just a table, so if you want an actual function you need to convert it to a polynomial via Lagrange or any other interpolation...
You can also use a DFT and represent the function as a set of sine waves.
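The cumulative-table idea above can be sketched in Python with SciPy (f = cos is an arbitrary check, since its antiderivative is sin):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 10.0, 1001)
f = np.cos(x)                      # tabulated f(x); cos is a check, since g = sin

# g(x_i) = integral from 0 to x_i of f, as a table matching the x grid
g_table = cumulative_trapezoid(f, x, initial=0.0)

# Interpolate the table to get a callable approximation of g(x)
g = CubicSpline(x, g_table)
print(g(np.pi / 2))                # close to sin(pi/2) = 1
```

Using the trapezoidal rule between consecutive sample points (rather than the rectangle sums in the table above) gains one order of accuracy at no extra cost.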

What algorithm is used to implement the HermiteH function (Mathematica)

I need to port a numerical simulation written in Wolfram Mathematica to another language. The part that is giving me trouble is that the code is calling the HermiteH function with a non-integral order (the parameter n is a fractional number, not an integer), which I'm guessing is some extension to Hermite polynomials. What algorithm can be used to implement this function and what does it actually calculate when given a non-integral order?
(I do know how to implement hermite polynomials for integral orders)
http://www.maplesoft.com/support/help/maple/view.aspx?path=HermiteH
For n different from a non-negative integer, the analytic extension of the Hermite polynomial is given by
HermiteH(n, x) = 2^n * sqrt(Pi) * ( KummerM(-n/2, 1/2, x^2) / GAMMA((1-n)/2) - 2*x * KummerM((1-n)/2, 3/2, x^2) / GAMMA(-n/2) )
where KummerM is Kummer's function (of the first kind) M and GAMMA is the gamma function.
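A sketch of this formula in Python with SciPy (hyp1f1 is Kummer's M; rgamma is the reciprocal gamma function, which vanishes at the poles of Γ at non-positive integers, so integer orders fall out correctly):

```python
import numpy as np
from scipy.special import hyp1f1, rgamma

def hermite_general(n, x):
    """Hermite function H_n(x) for arbitrary (also non-integer) order n,
    via Kummer's M = hyp1f1. Using rgamma = 1/Gamma avoids dividing by
    the infinities of Gamma at non-positive integers."""
    return 2.0 ** n * np.sqrt(np.pi) * (
        hyp1f1(-n / 2.0, 0.5, x * x) * rgamma((1.0 - n) / 2.0)
        - 2.0 * x * hyp1f1((1.0 - n) / 2.0, 1.5, x * x) * rgamma(-n / 2.0)
    )

print(hermite_general(2.0, 0.7))   # H_2(0.7) = 4*0.7^2 - 2 = -0.04
print(hermite_general(0.5, 1.2))   # fractional order, a finite value
```

For integer n this reproduces the ordinary Hermite polynomials, which gives an easy way to validate a port.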

Interpolation of function with accurate values for given points

I have a series of points representing values of a function, an example is below:
The values for X and Y can be real (non-integers). The function is monotonic, non-decreasing.
I want to be able to interpolate / assess the value of the function for any X (e.g. 1.5), so that a continuous function line would look like the following:
This is a standard interpolation problem, so I used Lagrange interpolation so far. It's quite simple and gives good enough results.
The problem with interpolation is that it also interpolates the values that are given as input, so the end results for the input data (e.g. x=1, x=2) may differ from the given values.
Is there an algorithm that can guarantee that all the input points keep their exact values after the interpolation? Linear interpolation is one solution, but it is piecewise linear, and since the distances between the X values don't have to be even, the graph looks ugly then.
Please forgive my english / math language, I am not a native speaker.
The Lagrange interpolating polynomial in fact passes through all the n points, see http://mathworld.wolfram.com/LagrangeInterpolatingPolynomial.html. That said, for a 1-D problem a cubic spline is usually the preferred interpolator.
If you would rather fit a model, e.g. a linear, quadratic, or cubic polynomial, or another function, to your data, then I think you could still put constraints on the coefficients to ensure the approximating function passes through some selected points. Begin by choosing the model, and then solve the constrained least-squares fitting problem.
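Since the question says the function is monotone non-decreasing, a shape-preserving interpolant such as PCHIP is worth mentioning: it passes through every input point exactly and never overshoots between them. A sketch with SciPy (the sample data is illustrative):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 4.0, 7.0])   # uneven spacing (example data)
y = np.array([0.0, 0.5, 0.6, 2.0, 2.1])   # monotone non-decreasing values

# Passes through every (x_i, y_i) exactly and stays monotone in between
f = PchipInterpolator(x, y)
print(f(1.5))                              # lies between 0.5 and 0.6
```

Unlike a single high-degree Lagrange polynomial, PCHIP cannot oscillate between unevenly spaced knots, which addresses the "ugly graph" concern.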

Single Perceptron - Non-linear Evaluating function

In the case of a single perceptron, the literature states that it cannot be used for separating non-linear discriminant cases like the XOR function. This is understandable, since the VC dimension of a line (in 2-D) is 3, and so a single 2-D line cannot discriminate outputs like XOR.
However, my question is why should the evaluating function in the single perceptron be a linear-step function? Clearly if we have a non-linear evaluating function like a sigmoid, this perceptron can discriminate between the 1s and 0s of XOR. So, am I missing something here?
if we have a non-linear evaluating function like a sigmoid, this perceptron can discriminate between the 1s and 0s of XOR
That's not true at all. The criterion for discrimination is not the shape of the line (or hyperplane in higher dimensions), but rather whether the function allows linear separability.
There is no single function that produces a hyperplane capable of separating the points of the XOR function. The curve in the image separates the points, but it is not a function.
To separate the points of XOR, you'll have to use at least two lines (or any other shaped functions). This will require two separate perceptrons. Then, you could use a third perceptron to separate the intermediate results on the basis of sign.
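The two-line construction described above can be sketched as follows (the specific weights are illustrative; any OR-like and NAND-like pair works):

```python
def step(s):
    # Perceptron activation: linear-step function
    return 1 if s > 0 else 0

def xor(x1, x2):
    h1 = step(x1 + x2 - 0.5)      # first perceptron:  OR-like line
    h2 = step(-x1 - x2 + 1.5)     # second perceptron: NAND-like line
    return step(h1 + h2 - 1.5)    # third perceptron combines the two results

print([xor(*p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 0]
```

The two hidden perceptrons each draw one line; the third separates the intermediate results, exactly as described.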
I assume by sigmoid you don't actually mean a sigmoid, but something with a local maximum. Whereas the normal perceptron binary classifier is of the form:
f(x) = (1 if w.x+b>0 else 0)
you could have a function:
f(x) = (1 if |w.x+b|<0.5 else 0)
This certainly would work, but would be fairly artificial, in that you effectively are tailoring your model to your dataset, which is bad.
The normal perceptron learning algorithm would almost certainly fail to converge for such a model, though I may be mistaken. http://en.wikipedia.org/wiki/Perceptron#Separability_and_convergence You might need to come up with a whole new way to fit the function, which sort of defeats the purpose.
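A quick check that the band-shaped decision rule above really does separate XOR (the weights w = (1, 1), b = -1 are an illustrative choice):

```python
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]

w, b = (1.0, 1.0), -1.0           # assumed weights: w.x + b = x1 + x2 - 1

def band_classifier(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    # fire only when the point lies inside a band around the line w.x + b = 0
    return 1 if abs(s) < 0.5 else 0

print([band_classifier(p) for p in X])   # [0, 1, 1, 0] -- matches XOR
```

This works precisely because the band is tailored to this dataset, which is the objection raised above.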
Or you could just use a support vector machine, which is like perceptron, but is able to handle more complicated cases by using the kernel trick.
Old question, but I want to leave my thoughts (anyone correct me if I'm wrong).
I think you're mixing up the concepts of linear model and loss or error function.
The perceptron is by definition a linear model, so it defines a line/plane/hyperplane which you can use to separate your classes.
The standard perceptron algorithm takes the sign of your output, giving -1 or 1:
yhat = sign(w * X + w0)
This is fine and will eventually converge if your data is linearly separable.
To improve this you can use a sigmoid to smooth the output into the range [-1, 1]:
yhat = -1 + 2*sigmoid(w * X + w0)
mean_squared_error = (Y - yhat)^2
Then use a numerical optimizer like Gradient Descent to minimize the error over your entire dataset. Here w0, w1, w2, ..., wn are your variables.
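A minimal sketch of this sigmoid-smoothed setup, trained by gradient descent on a linearly separable 1-D toy set (the data, learning rate, and iteration count are arbitrary choices):

```python
import numpy as np

# Linearly separable 1-D toy data: label -1 left of the gap, +1 right of it
X = np.array([-2.0, -1.0, 0.0, 2.0, 3.0, 4.0])
Y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, w0 = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    z = w * X + w0
    yhat = -1.0 + 2.0 * sigmoid(z)               # smooth output in [-1, 1]
    # chain rule: d(yhat - Y)^2/dz = 2*(yhat - Y) * 2*sigmoid(z)*(1 - sigmoid(z))
    grad_z = 2.0 * (yhat - Y) * 2.0 * sigmoid(z) * (1.0 - sigmoid(z))
    w -= lr * np.mean(grad_z * X)
    w0 -= lr * np.mean(grad_z)

print(np.sign(w * X + w0), Y)      # compare the learned signs with the labels
```

The decision rule sign(w*x + w0) stays linear throughout; only the loss being minimized is smooth.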
Now, if your original data is not linearly separable, you can transform it in a way which makes it linearly separable and then apply any linear model. This is true because the model is linear on the weights.
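For XOR, one such transformation is to append the product feature x1*x2, after which a single linear rule suffices (the particular weights below are an illustrative choice):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Feature map: append the product x1*x2; in this 3-D space XOR is linearly separable
Phi = np.column_stack([X, X[:, 0] * X[:, 1]])

# One linear rule now suffices, e.g. w = (1, 1, -2), b = -0.5
w, b = np.array([1.0, 1.0, -2.0]), -0.5
pred = (Phi @ w + b > 0).astype(float)
print(pred)    # [0. 1. 1. 0.]
```

The model is still linear in the weights; only the inputs were lifted to a higher-dimensional space.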
This is basically what models like SVMs do under the hood to classify your non-linear data.
PS: I'm learning this stuff too, so experts, don't be mad at me if I said something wrong.
