Non-linear fitting (curve fitting)

I have some experimental data that I would like to fit to obtain my parameters, using the least-squares method (Levenberg-Marquardt).
I am fitting with two non-linear equations, using some computational programs (Origin and Matlab).
The first is:
y=A+B*(((2*pi*x)^2+Alfa4^2)*((2*pi*x)^2+Alfa5^2))/(((2*pi*x)^2+Alfa1^2)*((2*pi*x)^2+Alfa2^2)*((2*pi*x)^2+Alfa3^2));
This is a non-linear equation with the parameters (Alfa1, Alfa2, Alfa3, Alfa4, Alfa5).
And the second fitting equation is:
y=((T2^2+Lc^2*(2*pi*x)^2)/(((2*pi*x)^2*(Lc^2*(2*pi*x)^2+T8))+T6^2))*A1+G;
This is a rational function, i.e. a quadratic in the numerator and a fourth-order polynomial in the denominator.
I want to fit using these two equations, but I don't know how to do it. If someone wants the experimental data, I can post it here.
Thank you very much,
Eduardo
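For what it's worth, a minimal Matlab sketch of how the first fit could be set up with lsqcurvefit (Optimization Toolbox) and its Levenberg-Marquardt algorithm; the data vectors and the starting guess p0 below are made-up stand-ins for the real experimental values:

model = @(p,x) p(1) + p(2) .* (((2*pi*x).^2 + p(6)^2) .* ((2*pi*x).^2 + p(7)^2)) ./ ...
    (((2*pi*x).^2 + p(3)^2) .* ((2*pi*x).^2 + p(4)^2) .* ((2*pi*x).^2 + p(5)^2));
% parameter vector p = [A, B, Alfa1, Alfa2, Alfa3, Alfa4, Alfa5]
xdata = linspace(0.1, 10, 200)';             % placeholder for the measured x values
ydata = model([0.5 2 1 3 5 2 4], xdata) + 0.01*randn(size(xdata));  % placeholder y
p0 = [0 1 1 1 1 1 1];                        % starting guess (made up)
opts = optimoptions('lsqcurvefit', 'Algorithm', 'levenberg-marquardt');
pFit = lsqcurvefit(model, p0, xdata, ydata, [], [], opts);

The second equation would be fitted the same way with its own anonymous function and parameter vector.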

Related

Obtaining the functional form of a curve

I have the plot of a curve f(r), where r is the radial coordinate, plotted for different values of a parameter.
However, I don't know the functional form of the curve and I am interested in finding it. Are there any numerical methods which can be used to find the functional form of f(r) in terms of the radial coordinate and the parameter?
I found a solution to the problem based on the suggestion by ja72 to use the Eureqa software, which churns through the data to create accurate predictive models using an evolutionary search algorithm.
In the question, the different curves correspond to different values of the parameter. So, initially I obtained the best-fit equation for several values of the parameter and found a model equation, with four parameter-dependent coefficient functions, that was suitable for my purpose.
Then I repeated the process for a large number of values of the parameter, calculated the values of the four coefficient functions at each value, and fitted these four functions individually.
N.B.: Eureqa gave several other formulas that fit better than those mentioned in this answer, but the formulas I chose are sufficiently accurate for my purpose and have minimal complexity.
A blind curve fit without an underlying model is a dangerous thing.
You need an understanding of the physical model behind the data to create a successful fit. The reason is that if r is a distance and the best-fit curve uses, say, r^0.4072, then that dimensioned quantity raised to a fractional power has no meaning, and it hides underlying assumptions, such as some other length scale l not included in the model; only the dimensionless quantity (r/l) would make sense to raise to a fractional power.
From a function analysis standpoint
These curves do not look like the result of any standard math function. Granted, I am not that familiar with Bessel functions, gamma functions, and Legendre polynomials, but none of the standard functions you find on a scientific calculator jumps out here.
If r is assumed to be dimensionless, then you try to match the asymptotic behavior when r -> 0 and when r -> ∞. That would be the baseline curve. To me it does not look hyperbolic, but rather close to 1/ln(1+r).
So change variables: let g(r) = 1/ln(1+r), plot f(r) against g(r), and see what that looks like. Then try another round of curve fitting on the new curves, and so on.
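A minimal sketch of that change of variables in Matlab, with made-up stand-ins for the tabulated radii r and curve values f:

r = linspace(0.5, 10, 50)';        % hypothetical radial grid
f = 2 ./ log(1 + r) + 0.3;         % hypothetical data resembling the baseline
g = 1 ./ log(1 + r);               % candidate baseline g(r) = 1/ln(1+r)
plot(g, f, 'o');                   % if this looks close to a line, fit f against g
c = polyfit(g, f, 1);              % first-round linear fit: f ~ c(1)*g + c(2)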
Nobody can answer this question
Nobody else could effectively answer this question but you, because a) you have the data, and b) you need to make assumptions about what region is important or not, and what is acceptable deviation.

What is the exact difference between a model and an algorithm?

Let us take logistic regression as an example. Is logistic regression a model or an algorithm, and why?
An algorithm is the general approach you will take. The model is what you get when you run the algorithm over your training data and what you use to make predictions on new data.
You can generate a new model with the same algorithm but with different data, or you can get a new model from the same data but with a different algorithm.
Do you like Ferrari? They have a very nice 812 Superfast model, but they also have other models. Every model is different and leads to a different behavior and experience.
Think of a model more as a mathematical description of a system, an equation that gives you a general way to achieve your vision or idea. For example,
y = mx + a
is a model function that yields a straight line (see least-squares linear regression).
Whereas an algorithm is a set of actions (or rules) that you need to perform in order to implement your vision. For example, the famous minimax algorithm often used in AI game players that have to choose the next move.
To finish my idea from above: imagine that a Ferrari model is an already existing idea on paper, and an algorithm is a robot in a factory that performs its set of programmed actions, i.e. a sequence of actions. This is naively speaking, of course, but hopefully you get the idea.
Take linear regression as an example of an algorithm built around a mathematical formula. Linear regression (with one variable) defines a line in 2-D space, but the slope and position of the line cannot be determined unless some sample values are available to solve the equation.
This regression line can be represented mathematically as y = mx + a.
Once sample values (or training data) are applied to solve this equation, the line can be drawn in 2-D space.
This line now becomes the model, with known slope (m) and intercept (a). Using this model, the value of y (the label) can be determined for a given value of x (the feature).
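A small Matlab illustration of that distinction (the numbers are made up): polyfit is the algorithm, and the coefficients it returns are the model.

x = (1:5)';                        % features (made-up training data)
y = [2.1; 3.9; 6.2; 8.1; 9.8];     % labels
p = polyfit(x, y, 1);              % run the algorithm: p(1) = m, p(2) = a
yNew = polyval(p, 6);              % use the model to predict y for a new x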

Theory on how to find the equation of a curve given a variable number of data points

I have recently started working on a project. One of the problems I ran into was converting changing accelerations into velocity. Accelerations at different points in time are provided by sensors. If you get the equation of these data points, the derivative of that equation at a certain time (x) would give the velocity.
I know how to do this on the computer, but how would I get the equation to start with? I have searched around but I have not found any existing programs that can form an equation given a set of points. In the past, I have created a neural net algorithm to form an equation, but it takes an incredibly long time to run.
If someone can link me a program or explain the process of doing this, that would be fantastic.
Sorry if this is in the wrong forum. I would post in math, but a programming background is needed to judge what a computer can realistically do quickly.
This started out as a comment but ended up being too big.
Just to make sure you're familiar with the terminology...
Differentiation takes a function f(t) and spits out a new function f'(t) that tells you how f(t) changes with time (i.e. f'(t) gives the slope of f(t) at time t). This takes you from displacement to velocity or from velocity to acceleration.
Integration takes a function f(t) and spits out a new function F(t) which measures the area under f(t) from the beginning of time up until a given point t. What's not obvious at first is that integration is actually the reverse of differentiation, a fact known as the Fundamental Theorem of Calculus. So integration takes you from acceleration to velocity or from velocity to displacement.
You don't need to understand the rules of calculus to do numerical integration. The simplest (and most naive) method for integrating a function numerically is just to approximate the area by dividing it into small slices between time points and summing the areas of rectangles. This approximating sum is called a Riemann sum.
This tends to really overshoot and undershoot certain parts of the function. A more accurate but still very simple method is the trapezoid rule, which also approximates the function with a series of slices, except the tops of the slices are straight lines between the function values rather than constant values.
Still more complicated, but a better approximation yet, is Simpson's rule, which approximates the function with parabolas between time points.
You can think of each of these methods as giving a better approximation of the integral because each uses more information about the function: the first method uses just one data point per slice (a constant flat line), the second uses two data points per slice (a straight line), and the third uses three data points per slice (a parabola).
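As a sketch, here is how the trapezoid rule applies to the accelerometer problem in Matlab, using the built-in cumtrapz and a made-up constant-acceleration signal as a sanity check:

t = (0:0.1:2)';                    % time stamps, seconds
a = 9.81 * ones(size(t));          % constant acceleration, for checking
v = cumtrapz(t, a);                % v(k) approximates the integral of a up to t(k)
% for constant a the result should match 9.81*t to within rounding error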
You could read up on the math behind these methods here or in the first page of this pdf.
I agree with the comments that numerical integration is probably what you want. In case you still want a function going through your data, let me further argue against doing that.
It's usually a bad idea to find a curve that goes exactly through some given points. In almost any applied math context you have to accept that there is a little noise in the inputs, and a curve going exactly through the points may be very sensitive to that noise, producing garbage outputs. Finding a curve going exactly through a set of points is asking for overfitting: you get a function that memorizes the data rather than understanding it, and it does not generalize.
For example, take the points (0,0), (1,1), (2,4), (3,9), (4,16), (5,25), (6,36). These are seven points on y=x^2, which is fine; the value of x^2 at x=-1 is 1. Now what happens if you replace (3,9) with (2.9,9.1)? There is a sixth-order polynomial passing through all 7 points:
4.66329x - 8.87063x^2 + 7.2281x^3 - 2.35108x^4 + 0.349747x^5 - 0.0194304x^6.
The value of this at x=-1 is -23.4823, very far from 1. While the curve looks ok between 0 and 2, in other examples you can see large oscillations between the data points.
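You can reproduce this numerically in Matlab (polyfit may warn that a degree-6 fit through 7 points is badly conditioned, which is rather the point):

x = [0 1 2 2.9 4 5 6];
y = [0 1 4 9.1 16 25 36];
p = polyfit(x, y, 6);              % 7 points, degree 6: exact interpolation
polyval(p, -1)                     % roughly -23.5, nowhere near 1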
Once you accept that you want an approximation, not a curve going exactly through the points, you have what is known as a regression problem. There are many types of regression. Typically, you choose a set of functions and a way to measure how well a function approximates the data. If you use a simple set of functions like lines (linear regression), you just find the best fit. If you use a more complicated family of functions, you should use regularization to penalize overly complicated functions such as high degree polynomials with large coefficients that memorize the data. If you either use a simple family or regularization, the function tends not to change much when you add or withhold a few data points, which indicates that it is a meaningful trend in the data.
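As one concrete (hypothetical) version of that advice, here is a ridge-penalized polynomial fit in Matlab; the penalty lambda shrinks the coefficients and discourages the wild interpolation seen above (the design-matrix line needs R2016b+ for implicit expansion):

x = [0 1 2 2.9 4 5 6]'; y = [0 1 4 9.1 16 25 36]';
d = 6; lambda = 1e-3;                           % degree and penalty, to be tuned
A = x .^ (0:d);                                 % Vandermonde design matrix
p = (A'*A + lambda*eye(d+1)) \ (A'*y);          % penalized least squares
yHat = A * p;                                   % smoothed fit at the data points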
Unfortunately, integrating accelerometer data to get velocity is a numerically unstable problem. For most applications, your error will diverge far too soon to get results of any practical value.
Recall that velocity is the integral of acceleration:
v(t) = v(0) + ∫ a(τ) dτ, taken from 0 to t
So any error in the measured acceleration accumulates in the computed velocity.
However well you fit a function to your accelerometer data, you will still essentially be doing a piecewise interpolation of the underlying acceleration function, summing an integral over each sample interval, where the error terms from each integration will add!
Typically you will see wildly inaccurate results after just a few seconds.
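A quick (made-up) Matlab illustration of the drift: integrating pure sensor noise, with the true acceleration equal to zero, still produces a velocity estimate that wanders away from zero like a random walk.

dt = 0.01; t = (0:dt:10)';
a = 0.05 * randn(size(t));          % noise-only accelerometer signal, true a = 0
v = cumtrapz(t, a);                 % the integrated velocity drifts away from 0
plot(t, v);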

Excel Polynomial Curve-Fitting Algorithm

What is the algorithm that Excel uses to calculate a 2nd-order polynomial regression (curve fitting)? Is there sample code or pseudo-code available?
I found a solution that returns the same formula that Excel gives:
Put together an augmented matrix of values used in a Least-Squares Parabola. See the sum equations in http://www.efunda.com/math/leastsquares/lstsqr2dcurve.cfm
Use Gaussian elimination to solve the matrix. Here is C# code that will do that http://www.codeproject.com/Tips/388179/Linear-Equation-Solver-Gaussian-Elimination-Csharp
After running that, the left-over values in the matrix (M) will equal the coefficients given in Excel.
Maybe I can find the R^2 somehow, but I don't need it for my purposes.
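For reference, a Matlab transcription of those steps with made-up data; backslash stands in for the linked C# Gaussian-elimination code:

x = [1 2 3 4 5]'; y = [1.2 4.1 8.9 16.3 24.8]';   % made-up sample data
n = numel(x);
S = [n          sum(x)     sum(x.^2);
     sum(x)     sum(x.^2)  sum(x.^3);
     sum(x.^2)  sum(x.^3)  sum(x.^4)];            % sum equations from the efunda page
r = [sum(y); sum(x .* y); sum(x.^2 .* y)];
c = S \ r;                % coefficients of y ~ c(1) + c(2)*x + c(3)*x^2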
The polynomial trendlines in charts use least squares based on a QR decomposition method like the LINEST worksheet function ( http://support.microsoft.com/kb/828533 ). A second order or quadratic trend for given (x,y) data could be calculated using =LINEST(y,x^{1,2}).
You can call worksheet formulas from C# using the Worksheet.Evaluate method.
It depends: there are a lot of ways to do such a thing, and the right one depends on the data you supply and how important it is for the curve to pass through those points.
I'm guessing that you have many more points than coefficients in the polynomial (e.g. more than three points for a 2nd-order curve).
If that's true, then the best you can do is a least-squares fit, which calculates the coefficients that minimize the mean square error between all the points and the resulting curve.
Since this is second order, my recommendation would be to just create the damn second-order terms and do a linear regression.
E.g. if you are fitting z ~ second_order(x, y), it is equivalent to fitting z ~ first_order(x, y, x^2, y^2, x*y).
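A hedged Matlab sketch of that expansion, with made-up data vectors x, y, z:

x = randn(50,1); y = randn(50,1);
z = 1 + 2*x - y + 0.5*x.^2 + x.*y + 0.1*randn(50,1);   % hypothetical surface
A = [ones(size(x)) x y x.^2 y.^2 x.*y];                % second-order terms
c = A \ z;                                             % plain linear regression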

Accurate least-squares fit algorithm needed

I've experimented with the two ways of implementing a least-squares fit (LSF) algorithm shown here.
The first code is simply the textbook approach, as described on Wolfram's page on LSF. The second code rearranges the equation to minimize machine errors. Both codes produce similar results for my data. I compared these results with Matlab's p=polyfit(x,y,1) function, using correlation coefficients to measure the goodness of fit and compare the 3 routines. I observed that while all 3 methods produced good results, at least for my data, Matlab's routine had the best fit (the other 2 routines gave results similar to each other).
Matlab's p=polyfit(x,y,1) function uses a Vandermonde matrix, V (n x 2 matrix) and QR factorization to solve the least-squares problem. In Matlab code, it looks like:
V = [x(:), ones(numel(x), 1)];  % Vandermonde matrix for a degree-1 fit
[Q,R] = qr(V,0);                % economy-size QR factorization
p = R\(Q'*y);                   % performs same as p = V\y
I'm not a mathematician, so I don't understand why it would be more accurate. Although the difference is slight, in my case I need to obtain the slope from the LSF and multiply it by a large number, so any improvement in accuracy shows up in my results.
For reasons I can't get into, I cannot use Matlab's routine in my work. So, I'm wondering if anyone has a more accurate equation-based approach recommendation I could use that is an improvement over the above two approaches, in terms of rounding errors/machine accuracy/etc.
Any comments appreciated! Thanks in advance.
For a polynomial fit, you can create a Vandermonde matrix and solve the linear system, as you have already done.
Another option is to use a method like Gauss-Newton to fit the data (since the system is linear, one iteration should do fine). There are differences between the methods; one possible source of the accuracy differences is Runge's phenomenon.
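One equation-based improvement worth trying (a sketch, not a guarantee for your data): the centered form of the degree-1 fit, which subtracts the means before forming the sums and so avoids much of the cancellation in the raw textbook formulas.

xm = mean(x); ym = mean(y);        % x and y are your data vectors
m = sum((x - xm) .* (y - ym)) / sum((x - xm).^2);   % slope
b = ym - m * xm;                                    % intercept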
