What algorithm is used to implement the HermiteH function (Mathematica)?

I need to port a numerical simulation written in Wolfram Mathematica to another language. The part that is giving me trouble is that the code calls the HermiteH function with a non-integral order (the parameter n is a fractional number, not an integer), which I'm guessing is some extension of the Hermite polynomials. What algorithm can be used to implement this function, and what does it actually calculate when given a non-integral order?
(I do know how to implement Hermite polynomials for integral orders.)

http://www.maplesoft.com/support/help/maple/view.aspx?path=HermiteH
For n different from a non-negative integer, the analytic extension of the Hermite polynomial is given by

H(n, x) = 2^n * sqrt(pi) * ( M(-n/2, 1/2, x^2) / Gamma((1-n)/2)  -  2*x * M((1-n)/2, 3/2, x^2) / Gamma(-n/2) )

where M is Kummer's confluent hypergeometric function of the first kind (KummerM) and Gamma is the gamma function. For non-integer order this is the same Hermite function that Mathematica's HermiteH evaluates.
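A minimal Python sketch of that formula, assuming SciPy is available (scipy.special.hyp1f1 is Kummer's M); the name hermite_general and the variable nu are just illustrative:

    import numpy as np
    from scipy.special import gamma, hyp1f1   # hyp1f1(a, b, z) is Kummer's M(a, b, z)

    def hermite_general(nu, x):
        # Hermite function H_nu(x) for non-integer order nu, via the Kummer-M
        # continuation quoted above.  For non-negative integer nu the gamma
        # factors hit poles, so use the ordinary polynomial recurrence there.
        term1 = hyp1f1(-nu / 2.0, 0.5, x * x) / gamma((1.0 - nu) / 2.0)
        term2 = 2.0 * x * hyp1f1((1.0 - nu) / 2.0, 1.5, x * x) / gamma(-nu / 2.0)
        return 2.0**nu * np.sqrt(np.pi) * (term1 - term2)

This can be spot-checked against HermiteH[n, x] in Mathematica for a few fractional values of n; note that the Kummer functions grow quickly for large |x|, so an asymptotic expansion may be needed in that regime.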

Related

Maximizing a function by minimizing its inverse

Let's say I have a function f(x) defined over a given range [a,b] for which f(x) > 0. I want to maximize f, but my algorithm can only minimize a given function.
Given these premises, is there any difference between minimizing -f(x) and minimizing 1/f(x)?
To me, given that f(x) is always positive, there is no difference at all, as the global maximum of f becomes the global minimum on [a,b] in either case.
Clarification: I use a standard genetic algorithm for the optimization. My main concern is how it explores the search space depending on which function is used, but so far there seems to be no difference at all.
As mentioned in the comments, you could run into numerical issues. For example, if f(x) takes very large values on [a,b], you can end up with rounding errors when using 1/f(x) that you would not have with -f(x). I would stick with minimizing -f(x); a quick sanity check is sketched below.
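Not the genetic algorithm itself, just a quick Python check (using scipy.optimize.minimize_scalar and a made-up positive f) that -f and 1/f share the same minimizer, which is the point being debated above:

    import numpy as np
    from scipy.optimize import minimize_scalar

    f = lambda x: np.exp(-(x - 0.7)**2) + 0.1       # made-up example, f > 0 on [0, 2]

    res_neg = minimize_scalar(lambda x: -f(x),      bounds=(0.0, 2.0), method='bounded')
    res_inv = minimize_scalar(lambda x: 1.0 / f(x), bounds=(0.0, 2.0), method='bounded')

    print(res_neg.x, res_inv.x)   # both report a minimizer near x = 0.7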

Looking for a particular algorithm for numerical integration

Consider the following differential equation:
f(x) = g'(x)
I have built code that spits out values of the function f(x) for values of x going from 0 up to very large.
Now I'm looking for a scheme that will analyse these values of f(x) in order to determine g(x). Does anybody have any suggestions? The main problem is that if I just compute g(x) = Integral(f(x) dx), I end up with a single number (i.e. the area under the graph), but I need the actual function g(x).
I've cross-posted this question here: https://math.stackexchange.com/questions/1326854/looking-for-a-particular-algorithm-for-numerical-integration
numerical integration always returns just a number
if you do not want a number but a function instead
then you cannot use numerical integration for this task directly
Polynomial approach
you can use any approximation/interpolation technique to obtain a polynomial representing f(x)
then integrate it as a standard polynomial (just a change in the exponent and multiplication constant of each term)
this is not suited for transcendental, periodic or complex-shaped functions
the most common approaches are Lagrange interpolation or Taylor series
for both you need an evaluator (parser) capable of returning the value of f(x) for any given x (a small sketch follows below)
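A minimal Python sketch of the polynomial approach, assuming f can be sampled at chosen points (the sample function below is just a stand-in):

    import numpy as np

    xs = np.linspace(0.0, 2.0, 50)
    fs = xs * np.exp(-xs)                 # stand-in for the sampled f(x)

    coef  = np.polyfit(xs, fs, deg=8)     # approximate f by a degree-8 polynomial
    gcoef = np.polyint(coef)              # integrate that polynomial exactly (constant term 0)
    g     = np.poly1d(gcoef)              # g(x) is now an explicit polynomial antiderivative

    print(g(1.0) - g(0.0))                # ~ integral of f from 0 to 1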
Algebraic integration
this is not solvable for arbitrary f(x) because we do not know how to integrate everything
so you would need to program all the rules for integration
like integration by parts (per partes), substitutions, Z or Laplace transforms
and write a solver within a string/symbol paradigm
that is a huge amount of work
maybe there are libs or DLLs that can do that
from programs like Derive or Matlab ...
[edit1] As the function f(x) is just a table in the form
double f[][2]={ x1,f(x1),x2,f(x2),...xn,f(xn) };
you can create the same kind of table for g(x)=Integral(f(x)) on the interval <0,x>
so:
g(x1)=f(x1)*(x1-0)
g(x2)=f(x1)*(x1-0)+f(x2)*(x2-x1)
g(x3)=f(x1)*(x1-0)+f(x2)*(x2-x1)+f(x3)*(x3-x2)
...
this is just a table, so if you want an actual function you need to convert it to a polynomial via Lagrange or any other interpolation...
you can also use DFT/FFT and represent the function as a set of sine waves (a numerical sketch of the cumulative table is below)
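A minimal Python sketch of building such a table numerically; it uses SciPy's cumulative_trapezoid (trapezoids instead of the rectangle sums above), with a stand-in for the tabulated f:

    import numpy as np
    from scipy.integrate import cumulative_trapezoid   # in older SciPy this is cumtrapz

    x  = np.linspace(0.0, 10.0, 1001)    # the x samples from the questioner's table
    fx = np.cos(x)                       # stand-in for the tabulated f(x) values

    gx = cumulative_trapezoid(fx, x, initial=0.0)   # g(x_i) ~ integral of f from 0 to x_i
    # gx is again just a table; interpolate it (Lagrange, splines, ...) to get a usable g(x)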

Algorithm to generate a (pseudo-) random high-dimensional function

I don't mean a function that generates random numbers, but an algorithm to generate a random function
"High dimension" means the function is multi-variable, e.g. a 100-dim function has 100 different variables.
Let's say the domain is [0,1]; we need to generate a function f:[0,1]^n -> [0,1]. This function is chosen from a certain class of functions, so that the probability of choosing any of these functions is the same.
(This class of functions can be either all continuous functions, or all functions with K-th order derivatives, whichever is convenient for the algorithm.)
Since the functions on a closed interval domain are uncountably infinite, we only require the algorithm to be pseudo-random.
Is there a polynomial time algorithm to solve this problem?
I just want to add a possible algorithm to the question (not feasible, though, due to its exponential time complexity). The algorithm was proposed by the friend who actually brought up this question in the first place:
The algorithm can be described as follows, assuming dimension d = 1 for simplicity. Consider smooth functions on the interval I = [a, b]. We split the domain [a, b] into N small intervals. For each interval Ii we generate a random number fi drawn from some specific distribution (Gaussian or uniform). Finally, we interpolate the series (ai, fi), where ai is a characteristic point of Ii (e.g. we can choose ai as the midpoint of Ii). After interpolation we obtain a smooth curve, which can be regarded as a one-dimensional random function living in the function space C^m[a, b] (where m depends on the interpolation algorithm we choose). A one-dimensional sketch of this is given right below.
This is just to say that the algorithm does not need to be that formal and rigorous, but simply to provide something that works.
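A one-dimensional Python sketch of the scheme just described, using a cubic spline for the interpolation step (the interval, N and the uniform distribution are arbitrary choices):

    import numpy as np
    from scipy.interpolate import CubicSpline

    rng = np.random.default_rng()

    a, b, N = 0.0, 1.0, 20
    mids = a + (np.arange(N) + 0.5) * (b - a) / N   # midpoint of each of the N sub-intervals
    vals = rng.random(N)                            # one random value per sub-interval
    f = CubicSpline(mids, vals)                     # smooth (C^2) random function; extrapolates near the edges

    print(f(0.37))                                  # evaluate it anywhere in [a, b]

Doing the same on a full grid in d dimensions needs N^d samples, which is the exponential blow-up mentioned above.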
So if I get it right you need a function returning a scalar from a vector.
The easiest way I see is the use of a dot product
for example let n be the dimensionality you need
so create a random vector a[n] containing random coefficients in the range <0,1>
such that the sum of all coefficients is 1
create float a[n]
fill it with positive random numbers (no zeros)
compute the sum of a[i]
divide a[n] by this sum
now the function y=f(x[n]) is simply
y=dot(a[n],x[n])=a[0]*x[0]+a[1]*x[1]+...+a[n-1]*x[n-1]
if I didn't miss something the target range should be <0,1>
if x==(0,0,0,..0) then y=0;
if x==(1,1,1,..1) then y=1;
If you need something more complex use higher order of polynomial
something like y=dot(a0[n],x[n])*dot(a1[n],x[n]^2)*dot(a2[n],x[n]^3)...
where x[n]^2 means (x[0]*x[0],x[1]*x[1],...)
Both approaches result in a function with the same "direction":
if any x[i] rises then y rises too
if you want to change that then you have to allow negative values in a[] as well
but to make that work you need to add some offset to y to shift it away from negative values ...
and the a[] normalization process will be a bit more complex
because you need to find the min and max values ...
an easier option is to add a random flag vector m[n] to the process
m[i] flags whether 1-x[i] should be used instead of x[i]
this way all of the above stays as is ...
you can create more types of mapping to make it even more varied (a minimal sketch of the basic construction is below)
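A minimal Python sketch of the basic dot-product construction above (n = 100 and the uniform draw are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng()

    def random_linear_function(n):
        a = rng.random(n) + 1e-12               # positive random coefficients (avoid exact zeros)
        a /= a.sum()                            # normalise so the coefficients sum to 1
        return lambda x: float(np.dot(a, x))    # y = dot(a, x), maps [0,1]^n into [0,1]

    f = random_linear_function(100)
    print(f(np.zeros(100)))   # 0.0
    print(f(np.ones(100)))    # 1.0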
This might not only be hard, but impossible if you actually want to be able to generate every continuous function.
For the one-dimensional case you might be able to create a useful approximation by looking into the Faber-Schauder system (also see the wiki). This gives you a Schauder basis for the continuous functions on an interval. This kind of basis only covers the whole vector space if you include infinite linear combinations of basis vectors. Thus you can create some random functions by building random linear combinations from this basis, but in general you won't be able to create functions that are actually represented by an infinite number of basis vectors this way.
Edit in response to your update:
It seems like choosing a random polynomial function of order K (for the class of K-times differentiable functions) might be sufficient for you, since any of these functions can be approximated (around a given point) by such a polynomial (see Taylor's theorem). Choosing a random polynomial function is easy, since you can just pick K random real numbers as coefficients for your polynomial; a tiny sketch is below. (Note that this will, for example, not return functions similar to abs(x).)
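A one-dimensional sketch of that idea in Python (drawing the coefficients from a normal distribution is an arbitrary choice, and mapping the values back into [0,1] is a separate concern):

    import numpy as np

    rng = np.random.default_rng()
    K = 5                                    # polynomial order (illustrative)
    coeffs = rng.standard_normal(K + 1)      # K+1 random real coefficients for an order-K polynomial

    x = np.linspace(0.0, 1.0, 11)
    y = np.polynomial.polynomial.polyval(x, coeffs)   # evaluate the random polynomial
    print(y)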

Mathematica. Integration of an oscillating function

I need help with an integral in Mathematica:
I need to calculate the integral of x^(1/4)*BesselJ[-1/4, a*x]*Cos[b*x] over the variable x (a and b are parameters) from 0 to Infinity.
The function is complicated and no analytic primitive exists, but when I tried to do it numerically with NIntegrate it did not converge. However, the integral of x^(1/4)*BesselJ[-1/4, a*x] alone does converge (and can in fact be calculated analytically), so the one above should converge too, and the problem in Mathematica must be some numerical issue.
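Not a Mathematica fix, but a sketch of how the same Fourier-type integral can be attempted in Python with SciPy's QUADPACK wrapper (quad with weight='cos' uses a dedicated routine for oscillatory integrals on semi-infinite intervals); the values of a and b are arbitrary, and since the integral converges only conditionally this can still fail or converge slowly for some parameters:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import jv, gamma

    a, b = 1.0, 0.5   # example parameter values

    def f(x):
        # x^(1/4) * J_{-1/4}(a*x); at x = 0 use the finite limit to avoid 0 * inf
        if x == 0.0:
            return (a / 2.0)**(-0.25) / gamma(0.75)
        return x**0.25 * jv(-0.25, a * x)

    # integral of f(x) * cos(b*x) from 0 to infinity (Fourier integral)
    val, err = quad(f, 0.0, np.inf, weight='cos', wvar=b)
    print(val, err)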

calculate logarithm for very large numbers

I would like to calculate the following function for very large numbers (for example e^e^e^e^10) and would like to know the sign of the following term in general. I tried it for some numbers, and there it is negative. Is there any m0 such that for all n > m > m0 the following function is positive?
where n is greater than m.
I tried this with Mathematica, but it cannot compute it for numbers of this size. Should I use a special package?
Thanks
