Fixed-Point Iteration Mathematica - wolfram-mathematica

This is Mathematica code for fixed-point iteration.
expr = {1, 0, 9999};  (* {iteration count, current x, percent error} *)
f[{i_, xi_, err_}] := Module[{xipp},  (* Module keeps xipp local *)
  xipp = 0.2062129*(20 + 2*xi)^(2/5);
  {i + 1, xipp, Abs[((xipp - xi)/xipp)*100]}];
NestWhileList[f, expr, #[[3]] >= .05 &]
If I were to prove this converges for all initial guesses, would I use the same code and replace the function with its derivative?
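A note on the derivative idea: by the Banach fixed-point theorem, the iteration x -> g(x) converges near a fixed point p whenever |g'(p)| < 1, so inspecting the derivative is the right instinct. A minimal sketch (the plot range is an arbitrary illustrative choice):

g[x_] := 0.2062129*(20 + 2 x)^(2/5);
dg[x_] = D[g[x], x];  (* roughly 0.165*(20 + 2 x)^(-3/5) *)
Plot[Abs[dg[x]], {x, 0, 100}, PlotRange -> {0, 1}]  (* stays far below 1 for x >= 0 *)

Since |g'(x)| < 1 everywhere on the region of interest, the map is a contraction there and the iteration converges from any starting guess in that region.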

Related

Fourier-Motzkin elimination

I have implemented FM elimination in C using a matrix.
I am wondering whether the following modification to the original algorithm is allowed.
In the original version of the algorithm, one takes a row with a positive coefficient in front of x_r and combines it with each row with a negative coefficient to create the new inequalities, so the matrix can grow in size.
See pages 32-33 of http://fileadmin.cs.lth.se/cs/Education/EDAF15/F07.pdf
But is it allowed to choose one equation with a negative coefficient and use Gaussian elimination to eliminate x_r?
I tried solving some small systems and it seems to give me the correct answer, but I don't know whether this method is correct in general.
With this method my matrix won't grow in size; I would just be doing ordinary Gaussian elimination.
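For reference, the standard (unmodified) step pairs every row with a positive coefficient against every row with a negative one, using nonnegative multipliers so the <= direction is preserved. A minimal Mathematica sketch of one elimination step, storing each inequality a.x <= b as an augmented row {a1, ..., an, b} (an illustration, not the asker's C code):

fmStep[rows_, r_] := Module[{pos, neg, zero},
  pos  = Select[rows, #[[r]] > 0 &];
  neg  = Select[rows, #[[r]] < 0 &];
  zero = Select[rows, #[[r]] == 0 &];
  (* combining each positive/negative pair with multipliers -neg_r and pos_r
     zeroes out column r and keeps the inequality direction *)
  Join[zero, Flatten[Outer[(-#2[[r]]) #1 + #1[[r]] #2 &, pos, neg, 1], 1]]]

fmStep[{{1, 2, 3}, {-1, 1, 0}}, 1]  (* x + 2y <= 3, -x + y <= 0  ->  {{0, 3, 3}}, i.e. 3y <= 3 *)

This pairing is what makes the matrix grow: with p positive rows and n negative rows, one step produces p*n new rows.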

Mathematica. Integration of an oscillating function

I need help with an integral in Mathematica:
I need to calculate the integral of x^(1/4)*BesselJ[-1/4, a*x]*Cos[b*x] in the variable x (a and b are parameters), from 0 to Infinity.
The function is complicated and no analytic antiderivative exists, but when I tried to compute it numerically with NIntegrate it did not converge. However, the integral of x^(1/4)*BesselJ[-1/4, a*x] does converge (in fact it can be calculated analytically), so the other one should converge too, and the problem in Mathematica must be some numerical error.
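One thing worth trying (a sketch, with made-up values for a and b): NIntegrate has dedicated strategies for oscillatory integrands on infinite ranges, such as "LevinRule" and "ExtrapolatingOscillatory", which often behave much better here than the default method:

a = 1; b = 2;  (* illustrative parameter values *)
NIntegrate[x^(1/4) BesselJ[-1/4, a x] Cos[b x], {x, 0, Infinity},
 Method -> "LevinRule"]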

Interpolation of function with accurate values for given points

I have a series of points representing values of a function; an example is below.
The values for X and Y can be real (non-integer). The function is monotonic, non-decreasing.
I want to be able to interpolate / assess the value of the function for any X (e.g. 1.5), so that a continuous function line would look like the following.
This is a standard interpolation problem, and so far I have used Lagrange interpolation. It's quite simple and gives good enough results.
The problem is that the end results at the input points themselves come out different from the given values (e.g. at x=1, x=2).
Is there an algorithm that can guarantee that all the input points keep their exact values after the interpolation? Linear interpolation is one solution, but it's piecewise linear, and since the distances between the X's don't have to be even, the graph looks ugly.
Please forgive my English / math language, I am not a native speaker.
The Lagrange interpolating polynomial in fact passes through all n points: http://mathworld.wolfram.com/LagrangeInterpolatingPolynomial.html. For a 1-D problem, though, cubic splines are the preferred interpolant.
If you would rather fit a model, e.g. a linear, quadratic, or cubic polynomial, or another function, to your data, then I think you could still put constraints on the coefficients to ensure the approximating function passes through selected points. Begin by choosing the model, then solve the least-squares fitting problem.
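In Mathematica terms (a sketch with made-up sample points), a spline interpolation reproduces every input exactly while staying smooth between them:

data = {{1, 1.}, {2, 2.5}, {3, 2.7}, {5, 6.}};  (* made-up monotone samples *)
f = Interpolation[data, Method -> "Spline"];    (* cubic spline through all points *)
f[1.5]              (* a value between the first two samples *)
f /@ {1, 2, 3, 5}   (* reproduces {1., 2.5, 2.7, 6.} exactly *)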

Representing a simple function with 0th Bessel Function

I am sorry if I sound stupid or vague, but I have a question regarding code involving Bessel functions. I was given the homework of representing a simple function (f(x) = 1 - x, to be exact) using the first five zeroth-order Bessel functions whose zeros are scaled to one. Plus I have to find their coefficients.
First off, I know I have to show that I've worked on this problem before asking for help, but I really don't know where to begin. I know about first-order and second-order Bessel functions, but I have no idea what a zeroth-order Bessel function is. Plus, I didn't even know you could represent functions with Bessel functions. I know that you can use a Taylor expansion or a Fourier representation to approximate a function, but I have no idea how to do that with Bessel functions. I searched this website and it seems a classmate of mine, rather rudely, just copy-pasted our assignment, and thus that thread was closed.
So if some saintly person could at least point me in the right direction here, that would be wonderful. Seeing as this is a simple function, and Matlab has built-in Bessel functions, coding it shouldn't be too hard. I just don't know how to use the solutions of a differential equation to represent another function. Oh, and the coefficients? What coefficients? Forgive my ignorance and help please!
Alright! Through much investigation and work, I have determined the answer to this problem. Now, this is actually the first time that I'm answering a question on this site, so once more, please forgive any faux pas that I might commit.
First off, it appears that I was right about the Fourier-Bessel series. It is impossible to use the Bessel functions alone to get a function approximation. Through a lengthy process starting with the Bessel function, scaling the x, and a whole host of tricks that amount to a Fourier-style expansion, one can determine that the Fourier-Bessel representation of a function is given in the form
f(x) = A1*J_0(z1*x) + A2*J_0(z2*x) + ...
where the zn are the zeros of J_0, the zeroth-order Bessel function of the first kind, and the An are the coefficients of the Fourier-Bessel series. The zeros are easily acquired by just plotting the Bessel function, but the coefficients are the tricky part. They are given by the equation
An = (2/(J_1(zn)^2)) * integral(x*f(x)*J_0(zn*x), x = 0..1)
Needless to say, the difficult part of this is getting the integrals of the Bessel functions. But by using this lovely list, the process can be made simple. Messy... but simple. I should also note that the integral runs from 0 to 1 because that was the nature of my assignment. If the interval were instead 0 to 2, the equation would be
An = (2/((2^2)*(J_1(zn)^2))) * integral(x*f(x)*J_0(zn*x/2), x = 0..2)
(in general, on [0, R] you divide by R^2 and scale the argument to zn*x/R).
But I digress. Since my assignment also had me graph the first five terms individually, then add them together and compare the result with the actual function, that's what I did. And thus here is the code.
%% Plotting the Bessel functions
a = [2.40483, 5.52008, 8.65373, 11.7915, 14.9309]; % first 5 zeros of the zeroth-order Bessel function J_0
A = zeros(5,1);
F = zeros(1,100);
X = linspace(0,1);
for i = 1:5
    % coefficient of the Fourier-Bessel series, via the Struve-function identity
    A(i,1) = (2*(besselj(1,a(i))*struve(0,a(i)) - besselj(0,a(i))*struve(1,a(i)))) ...
             / ((a(i)^2)*(besselj(1,a(i)))^2) * (pi/2);
end
for i = 1:5
    figure(i);
    plot(X, A(i,1)*besselj(0, a(i)*X)); % plot each of the first 5 terms
end
for i = 1:5
    F = F + A(i,1)*besselj(0, a(i)*X); % add all five terms together
end
figure(6);
plot(X, F); % plot the Bessel-series approximation against 1-x
hold on
plot(X, 1-X);
%%Struve Function
function f=struve(v,x,n)
% Calculates the Struve Function
%
% struve(v,x)
% struve(v,x,n)
%
% H_v(x) is the struve function and n is the length of
% the series calculation (n=100 if unspecified)
%
% from: Abramowitz and Stegun: Handbook of Mathematical Functions
% http://www.math.sfu.ca/~cbm/aands/page_496.htm
if nargin<3
n=100;
end
k=0:n;
x=x(:)';
k=k(:);
xx=repmat(x,length(k),1);
kk=repmat(k,1,length(x));
TOP=(-1).^kk;
BOT=gamma(kk+1.5).*gamma(kk+v+1.5);
RIGHT=(xx./2).^(2.*kk+v+1);
FULL=TOP./BOT.*RIGHT;
f=sum(FULL);
And there we go. The Struve function code was from this place.
I hope this helped, and if I made any egregious errors, please tell me; to be honest, I can't explain the equations above any further, as I just got them from a textbook.
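As a quick sanity check, the same coefficients can also be computed numerically, without the Struve identities at all; here is a sketch in Mathematica (my own cross-check, not part of the assignment):

zn = Table[N[BesselJZero[0, k]], {k, 5}];  (* first five zeros of J_0 *)
Table[2/BesselJ[1, z]^2 *
  NIntegrate[x (1 - x) BesselJ[0, z x], {x, 0, 1}], {z, zn}]

The values should match the A(i) produced by the Matlab loop above.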
Best wishes!

Why does average damping magically speed up the convergence of fixed-point calculators?

I'm reading through SICP, and the authors brush over the technique of average damping for computing the fixed points of functions. I understand that it's necessary in certain cases, e.g. square roots, in order to damp out the oscillation of the map y -> x/y; however, I don't understand why it magically aids the convergence of the fixed-point procedure. Help?
edit
Obviously, I've thought this through somewhat. I can't seem to wrap my head around why averaging f(x) with x would speed up convergence when the map is applied repeatedly.
It only speeds up those functions whose repeated application "hops around" the fixed point. Intuitively, it's like adding a brake to a pendulum: it stops sooner with the brake.
But not every function has this property. Consider f(x) = x/2. This function converges sooner without average damping (log base 2 steps versus log base 4/3 steps), because it approaches the fixed point from one side only.
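To make that quantitative (a step left implicit above): if f has slope s at the fixed point, the damped map x -> (x + f(x))/2 has slope (1 + s)/2 there, so damping shrinks the per-step error factor exactly when |(1 + s)/2| < |s|, i.e. when s < -1/3. For y -> x/y the slope at the fixed point sqrt(x) is exactly -1, which is why the raw iteration oscillates forever while the damped one converges fast. A sketch in Mathematica (SICP uses Scheme; the tolerance is an arbitrary choice):

averageDamp[f_] := Function[y, (y + f[y])/2];
fixedPoint[f_, x0_] := FixedPoint[f, x0, SameTest -> (Abs[#1 - #2] < 10.^-8 &)];

fixedPoint[averageDamp[Function[y, 2/y]], 1.0]  (* -> 1.41421..., sqrt of 2 *)
(* the undamped FixedPoint[2/# &, 1.0] would just oscillate 1., 2., 1., ... forever *)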
While I can't answer your question on a mathematical basis, I'll try on an intuitive one:
Fixed-point techniques need a "flat" function graph around their, well, fixed point. If you picture your fixed-point function on an X-Y chart, you'll see that the function crosses the diagonal y = x exactly at the true result. In one step of the fixed-point algorithm you take an X value within the interval around the intersection where the first derivative lies between -1 and +1, and read off the Y value. That Y is closer to the intersection point than the X you started from: from the intersection, Y is reached along the function's graph, whose slope is smaller than 1 in magnitude, whereas X sits on the diagonal, whose slope is exactly 1. It is immediately clear now that the smaller the slope, the more progress you make toward the intersection point (the true value) when you feed Y back in as the new X. The best iteration function is trivially a constant, which has slope 0 and gives you the true value in the first step.
Sorry to all mathematicians.
