I am sorry if I sound stupid or vague, but I have a question about a code involving Bessel functions. I was tasked with the homework of representing a simple function (f(x) = 1 - x, to be exact) using the first five 0th-order Bessel functions whose zeros are scaled to one. Plus I have to find their coefficients.
First off, I know I have to show that I've worked on this problem before asking for help, but I really don't know where to begin. I know about first-order Bessel functions and I know about second-order Bessel functions, but I have no idea what a 0th order Bessel function is. Plus, I didn't even know you could represent functions with Bessel functions. I know that you can use Taylor's expansion or Fourier representation to approximate a function, but I have no idea how to do that with the Bessel function. I searched this website and it seems a classmate of mine, rather rudely, just copy and pasted our assignment and thus that thread was closed.
So if some saintly person could at least point me in the right direction here, that would be wonderful. Seeing as this is a simple function and Matlab has built-in Bessel functions, coding it shouldn't be too hard. I just don't know how to use a method of solving a differential equation to represent another function. Oh, and the coefficients? What coefficients? Forgive my ignorance and help please!
Alright! Through much investigation and work, I have determined the answer to this problem. Now, this is actually the first time I'm answering a question on this site, so once more, please forgive any faux pas I might commit.
First off, it appears that I was right about the Fourier-Bessel series. You cannot approximate a function with a single Bessel function by itself. Through a lengthy process that starts with the Bessel function, scales its argument, and uses a whole host of tricks borrowed from Fourier analysis, we can determine that the Fourier-Bessel representation of a function takes the form
f(x) = A1*J_0(z1*x) + A2*J_0(z2*x) + ...
where zn are the zeros of the zeroth-order Bessel function of the first kind, J_0, and An are the coefficients of the Fourier-Bessel series. The zeros are easily acquired by just plotting the Bessel function; the coefficients are the tricky part. They are given by the equation
An = (2/(J_1(zn)^2))*integral(x*f(x)*J_0(zn*x), 0, 1)
Needless to say, the difficult part of this is getting the integral of the Bessel functions. But by using this lovely list, the process can be made simple. Messy... but simple. I should also note that the integral runs from 0 to 1 because that was the nature of my assignment. If the question asked to scale from 0 to 2 instead, the equation would be
An = (2/(2^2 * J_1(zn)^2)) * integral(x*f(x)*J_0(zn*x/2), 0, 2)

or, in general, for an interval [0, b]: An = (2/(b^2 * J_1(zn)^2)) * integral(x*f(x)*J_0(zn*x/b), 0, b).
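As a quick aside (my own sketch, not from the textbook): you can sanity-check the coefficient formula numerically with MATLAB's built-in integral and besselj before doing any analytic work. For f(x) = 1 - x on [0, 1]:

zn = 2.40483;  % first zero of J_0
An = 2/besselj(1,zn)^2 * integral(@(x) x.*(1-x).*besselj(0,zn*x), 0, 1)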
But I digress. Since my assignment also had me graph the first five terms individually, then add them together and compare the result with the actual function, that's what I did. And thus here is the code.
%% Plotting the Bessel functions
a = [2.40483, 5.52008, 8.65373, 11.7915, 14.9309]; % the first 5 zeros of the zeroth-order Bessel function J_0
A = zeros(5,1);
F = zeros(1,100);
X = linspace(0,1); % 100 points on [0,1]
for i = 1:5
    % the closed-form coefficients of the Fourier-Bessel series for f(x) = 1-x,
    % obtained from the integral formula via Struve-function identities
    A(i,1) = (2*(besselj(1,a(i))*struve(0,a(i)) - besselj(0,a(i))*struve(1,a(i)))) ...
             / ((a(i)^2)*(besselj(1,a(i)))^2) * (pi/2);
end
for i = 1:5
    figure(i);
    plot(X, A(i,1)*besselj(0, a(i)*X)); % plot each of the first 5 terms
end
for i = 1:5
    F = F + A(i,1)*besselj(0, a(i)*X); % add all five terms together
end
figure(6);
plot(X, F); % plot the partial sum against 1-x
hold on
plot(X, 1-X);
%% Struve function
function f = struve(v, x, n)
% Calculates the Struve function H_v(x)
%
%   struve(v,x)
%   struve(v,x,n)
%
% H_v(x) is the Struve function and n is the length of
% the series calculation (n = 100 if unspecified)
%
% From: Abramowitz and Stegun, Handbook of Mathematical Functions
% http://www.math.sfu.ca/~cbm/aands/page_496.htm
if nargin < 3
    n = 100;
end
k = 0:n;
x = x(:)';                      % force x into a row vector
k = k(:);                       % and k into a column vector
xx = repmat(x, length(k), 1);
kk = repmat(k, 1, length(x));
TOP = (-1).^kk;                 % alternating sign of each series term
BOT = gamma(kk+1.5).*gamma(kk+v+1.5);
RIGHT = (xx./2).^(2.*kk+v+1);
FULL = TOP./BOT.*RIGHT;
f = sum(FULL);                  % sum the series down each column
end
And there we go. The Struve function code came from this place.
I hope this helped, and if I made any egregious errors, please tell me, but to be honest, I can't explain the equations up there any further as I just got them from a textbook.
Best wishes!
While I was reading Colah's blog, in the GRU diagram we can clearly see that z_t goes into ~h_t and not r_t, but the equations say otherwise. Isn't this supposed to be z_t * h_{t-1} and not r_t * h_{t-1}? Please correct me if I'm wrong.
I see this is somewhat old; however, if you still haven't figured it out and care, or for any other person who ends up here: the answer is that the figure and the equations are consistent. Note that the operator (x) in the diagram (the pink circle with an X in it) is the Hadamard product, an element-wise multiplication between two tensors of the same size. In the equations this operator is written as * (it is also often represented by a circle with a dot at its center). ~h_t is the output of the tanh operator. The tanh operator receives a linear combination of the input at time t, x_t, and the result of the Hadamard product between r_t and h_{t-1}. Note that r_t has already been updated by passing a linear combination of x_t and h_{t-1} through a sigmoid. I hope the reset gate is clear now.
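For reference, here are the GRU equations as the blog post writes them (from memory, so double-check against the post; * is the Hadamard product and [.,.] is concatenation):

z_t = sigmoid(W_z . [h_{t-1}, x_t])
r_t = sigmoid(W_r . [h_{t-1}, x_t])
~h_t = tanh(W . [r_t * h_{t-1}, x_t])
h_t = (1 - z_t) * h_{t-1} + z_t * ~h_t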
I'm writing an algorithm for sclera detection on grayscale images and I found a formula that I cannot explain how it works. Here is the paper segment I'm trying to use:
Here it says that I should use the HSL information of the image and calculate 3 thresholds for the 3 components, which I later use for thresholding. The problem is that I cannot make any sense of the notation arg{t|min| ...}, as it is not explained anywhere in the paper.
I deduced how the sum works and that the computation of the sum should leave me with a constant, but I cannot find anywhere what this operator does with that constant.
I tried to search for the meaning of the arg notation, but this wikipedia page doesn't seem to give me any answers: https://en.wikipedia.org/wiki/Argument_(complex_analysis)
That page says the result of the operation is the angle of a complex number. However, I don't have any complex numbers, and if I treat a real number as complex, the angle will always be 0.
Can anyone explain what this operation is supposed to do?
arg in this case means the argument of the function that gives the minimum value:

e.g.

m = arg min f(x)

is the x value for which the function f achieves its minimum value.
It's standard notation in image classification and similar fields. You will see it used here, for example: https://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation
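In code, an arg min over a discrete set of candidates is just a min plus an index lookup. A toy MATLAB sketch (the cost function here is invented for illustration and is not the paper's criterion):

candidates = 0:0.01:1;       % candidate threshold values t
J = (candidates - 0.4).^2;   % toy cost J(t); substitute the paper's sum here
[~, idx] = min(J);           % min gives the minimum value and its index
t_star = candidates(idx)     % t_star = arg min_t J(t), here 0.4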
I have recently started working on a project. One of the problems I ran into was converting changing accelerations into velocity. Accelerations at different points in time are provided through sensors. If you get the equation of these data points, the derivative of a certain time (x) on that equation will be the velocity.
I know how to do this on the computer, but how would I get the equation to start with? I have searched around but I have not found any existing programs that can form an equation given a set of points. In the past, I have created a neural net algorithm to form an equation, but it takes an incredibly long time to run.
If someone can link me a program or explain the process of doing this, that would be fantastic.
Sorry if this is in the wrong forum. I would post it in math, but a programming background is needed to judge what a computer can do quickly.
This started out as a comment but ended up being too big.
Just to make sure you're familiar with the terminology...
Differentiation takes a function f(t) and spits out a new function f'(t) that tells you how f(t) changes with time (i.e. f'(t) gives the slope of f(t) at time t). This takes you from displacement to velocity or from velocity to acceleration.
Integration takes a function f(t) and spits out a new function F(t) which measures the area under f(t) from the beginning of time up until a given point t. What's not obvious at first is that integration is actually the reverse of differentiation, a fact called the Fundamental Theorem of Calculus. So integration takes you from acceleration to velocity, or from velocity to displacement.
You don't need to understand the rules of calculus to do numerical integration. The simplest (and most naive) method for integrating a function numerically is just to approximate the area by dividing it up into small slices between time points and summing the areas of rectangles. This approximating sum is called a Riemann sum.
This tends to really overshoot and undershoot certain parts of the function. A more accurate but still very simple method is the trapezoid rule, which also approximates the function with a series of slices, except the tops of the slices are straight lines between the function values rather than constant values.
Still more complicated, yet a better approximation, is Simpson's rule, which approximates the function with parabolas between time points.
[figure illustrating Simpson's rule; source: tutorvista.com]
You can think of each of these methods as getting a better approximation of the integral because they each use more information about the function. The first method uses just one data point per area (a constant flat line), the second method uses two data points per area (a straight line), and the third method uses three data points per area (a parabola).
You could read up on the math behind these methods here or in the first page of this pdf.
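If what you ultimately want is velocity from sampled accelerations, MATLAB's cumtrapz applies the trapezoid rule cumulatively; here is a minimal sketch with toy data standing in for your sensor readings:

t = linspace(0, 10, 101);         % sample times in seconds (toy data)
a = sin(t);                       % sampled accelerations (toy data)
v = cumtrapz(t, a);               % velocity at each sample, assuming v(1) = 0
plot(t, v, t, 1 - cos(t), '--')   % compare with the exact integral of sin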
I agree with the comments that numerical integration is probably what you want. In case you still want a function going through your data, let me further argue against doing that.
It's usually a bad idea to find a curve that goes exactly through some given points. In almost any applied math context you have to accept that there is a little noise in the inputs, and a curve going exactly through the points may be very sensitive to that noise, producing garbage outputs. Demanding an exact fit is asking for overfitting: you get a function that memorizes the data rather than capturing its trend, and it does not generalize.
For example, take the points (0,0), (1,1), (2,4), (3,9), (4,16), (5,25), (6,36). These are seven points on y=x^2, which is fine. The value of x^2 at x=-1 is 1. Now what happens if you replace (3,9) with (2.9,9.1)? There is a sixth order polynomial passing through all 7 points,
4.66329x - 8.87063x^2 + 7.2281x^3 - 2.35108x^4 + 0.349747x^5 - 0.0194304x^6.
The value of this at x=-1 is -23.4823, very far from 1. While the curve looks ok between 0 and 2, in other examples you can see large oscillations between the data points.
Once you accept that you want an approximation, not a curve going exactly through the points, you have what is known as a regression problem. There are many types of regression. Typically, you choose a set of functions and a way to measure how well a function approximates the data. If you use a simple set of functions like lines (linear regression), you just find the best fit. If you use a more complicated family of functions, you should use regularization to penalize overly complicated functions such as high degree polynomials with large coefficients that memorize the data. If you either use a simple family or regularization, the function tends not to change much when you add or withhold a few data points, which indicates that it is a meaningful trend in the data.
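To see the sensitivity concretely, here is a small MATLAB sketch reproducing the example above (polyfit may warn that the degree-6 fit is badly conditioned, which is rather the point):

x = [0 1 2 2.9 4 5 6];
y = [0 1 4 9.1 16 25 36];
p6 = polyfit(x, y, 6);    % interpolates all 7 points exactly
p2 = polyfit(x, y, 2);    % simple low-degree regression fit
polyval(p6, -1)           % swings far away from 1
polyval(p2, -1)           % stays close to 1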
Unfortunately, integrating accelerometer data to get velocity is a numerically unstable problem. For most applications, your error will diverge far too soon to get results of any practical value.
Recall that:

v(t) = v(0) + integral from 0 to t of a(tau) dtau

So any measurement error in the acceleration gets integrated straight into the velocity. However well you fit a function to your accelerometer data, you will still essentially be doing a piecewise interpolation of the underlying acceleration function:

v(t) ~= v(0) + sum over intervals [t_i, t_{i+1}] of integral of (a_fit(tau) + e_i(tau)) dtau

where the error terms e_i from each integration will add!
Typically you will see wildly inaccurate results after just a few seconds.
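You can watch this happen in a toy MATLAB simulation (the noise level here is made up, and real accelerometers also have bias, which is even worse):

dt = 0.01; t = 0:dt:10;
a_true = zeros(size(t));                 % the device is actually at rest
a_meas = a_true + 0.05*randn(size(t));   % add white measurement noise
v = cumtrapz(t, a_meas);                 % integrated "velocity" wanders off
plot(t, v)                               % a random walk, growing like sqrt(t)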
I've experimented with the two ways of implementing a least-squares fit (LSF) algorithm shown here.
The first code is simply the textbook approach, as described by Wolfram's page on LSF. The second code re-arranges the equation to minimize machine errors. Both codes produce similar results for my data. I compared these results with Matlab's p=polyfit(x,y,1) function, using correlation coefficients to measure the "goodness" of fit and compare each of the 3 routines. I observed that while all 3 methods produced good results, at least for my data, Matlab's routine had the best fit (the other 2 routines had similar results to each other).
Matlab's p=polyfit(x,y,1) function uses a Vandermonde matrix, V (n x 2 matrix) and QR factorization to solve the least-squares problem. In Matlab code, it looks like:
V = [x(:), ones(numel(x),1)];  % Vandermonde matrix for a degree-1 fit
[Q,R] = qr(V,0);
p = R\(Q'*y);                  % performs the same as p = V\y
I'm not a mathematician, so I don't understand why it would be more accurate. Although the difference is slight, in my case I need to obtain the slope from the LSF and multiply it by a large number, so any improvement in accuracy shows up in my results.
For reasons I can't get into, I cannot use Matlab's routine in my work. So, I'm wondering if anyone has a more accurate equation-based approach recommendation I could use that is an improvement over the above two approaches, in terms of rounding errors/machine accuracy/etc.
Any comments appreciated! Thanks in advance.
For a polynomial fit, you can create a Vandermonde matrix and solve the linear system, as you have already done.
Another option is to use a method like Gauss-Newton to fit the data (since the system is linear, one iteration should do). There are differences between the methods; one possible reason for the differences you see is Runge's phenomenon.
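As for why the QR route is more accurate: forming the normal equations V'*V squares the condition number of the problem, while QR factorization works with V directly. A quick MATLAB sketch with made-up, deliberately ill-conditioned data:

x = (0:0.001:1)' + 1000;      % large offset makes the columns nearly collinear
y = 3*x + 2;                  % exact line, so the true answer is [3; 2]
V = [x, ones(size(x))];
p_normal = (V'*V) \ (V'*y);   % normal equations: cond(V)^2 in play
[Q,R] = qr(V,0);
p_qr = R \ (Q'*y);            % QR: only cond(V) in play
[p_normal, p_qr]              % compare the two slopes and intercepts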
Does anyone have experience with algorithms for evaluating hypergeometric functions? I would be interested in general references, but I'll describe my particular problem in case someone has dealt with it.
My specific problem is evaluating a function of the form 3F2(a, b, 1; c, d; 1) where a, b, c, and d are all positive reals and c+d > a+b+1. There are many special cases that have a closed-form formula, but as far as I know there are no such formulas in general. The power series centered at zero converges at 1, but very slowly; the ratio of consecutive coefficients goes to 1 in the limit. Maybe something like Aitken acceleration would help?
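For concreteness, the direct series summation looks like this, as a minimal MATLAB sketch (the third upper parameter is 1, so its Pochhammer factor (1)_k cancels the k! in the denominator):

function s = hyp3f2(a, b, c, d, z, nmax)
% Truncated series for 3F2(a,b,1; c,d; z).
% Term ratio: t(k+1)/t(k) = (a+k)*(b+k)/((c+k)*(d+k))*z
term = 1; s = 1;
for k = 0:nmax-1
    term = term*(a+k)*(b+k)/((c+k)*(d+k))*z;
    s = s + term;
end
end
% e.g. hyp3f2(0.5, 0.5, 1.5, 1.5, 1, 100000)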
I tested Aitken acceleration and it does not seem to help for this problem (nor does Richardson extrapolation). This probably means Pade approximation doesn't work either. I might have done something wrong though, so by all means try it for yourself.
I can think of two approaches.
One is to evaluate the series at some point such as z = 0.5 where convergence is rapid to get an initial value and then step forward to z = 1 by plugging the hypergeometric differential equation into an ODE solver. I don't know how well this works in practice; it might not, due to z = 1 being a singularity (if I recall correctly).
The second is to use the definition of 3F2 in terms of the Meijer G-function. The contour integral defining the Meijer G-function can be evaluated numerically by applying Gaussian or doubly-exponential quadrature to segments of the contour. This is not terribly efficient, but it should work, and it should scale to relatively high precision.
Is it correct that you want to sum a series where you know the ratio of successive terms and it is a rational function?
I think Gosper's algorithm and the rest of the tools for proving (and finding) hypergeometric identities do exactly this, right? (See Wilf and Zeilberger's A=B book online.)