Get equation for 3d shape - curve-fitting

I have two arrays, say X and Y, each with 5 elements. For each possible combination of (X, Y) I have a Z value, so Z is a 5x5 matrix.
I am looking to find a formula, e.g. z = f(x,y). Any idea how that can be done?
I tried the MS Excel surface chart, but it doesn't offer any equation or curve fitting for surface charts.

In general I would suggest using other software such as Scilab or MATLAB for this task; those products are geared more towards computational mathematics than Excel is.
But Excel has some built-in features that may help you.
First note:
You will need to use the add-in called "Solver". This add-in ships with Excel, but may not be installed by default on your installation.
One description (there are thousands available on the web) of how to install that add-in can be found here:
Solver Add-in
Once you are done with this, the next step is to create a sheet with the data.
I set up an example, shown in the picture below.
The range C5:G9 holds the matrix you want to approximate by a function, i.e. the z = f(x,y) matrix.
The chart beside it is just the 3D plot of your (in this case, my) original data.
Now it gets a little mathematical...
You need a general type of function to use for the approximation. The quality of the result depends on how well this function is able to come close to your data.
In the example I used a 2nd-order approximation (quadratic terms at most).
My example function is z=a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.
If you need more, try a third-order form (also including x^3, y^3, ...). I didn't do this in the example, because I hate typing long formulas in Excel.
Typing a long formula is the next step:
Now we have to fill the range C15:G19 with the values of the fitted formula. But before that, we have to define the polynomial coefficients in range J14:J19. As starting values you can simply use 1 for all coefficients (the picture shows the solution after running the Solver).
The formula in cell C15 is =$J$14*C$14^2+$J$15*$B15^2+$J$16*C$14*$B15+$J$17*C$14+$J$18*$B15+$J$19
It should be easy to copy it to the other cells of the matrix.
The plot beside this shows the result of our approximation function.
Now we have to prepare the Solver. The Solver needs something to optimize, so we must define a function that indicates the quality of our approximation. I used the least-squares approach; have a look on the web for explanations.
In the range C24:G28 I calculated the squared differences between our approximation function and the original data. Cell C24 has the formula =(C15-C5)^2
Now we are nearly finished. Copy this formula to the rest of the range, and then add one very important cell:
put the sum of the range C24:G28 in cell H29.
This value is the sum of the squared errors, i.e. the total deviation of our approximation function from the original data points.
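In formula terms, H29 holds the quantity the Solver will minimize: the sum of squared errors over all 25 grid cells,

    \mathrm{SSE} = \sum_{i=1}^{5} \sum_{j=1}^{5} \bigl( f(x_j, y_i) - z_{ij} \bigr)^2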
Now the most important part!
Select cell H29 and start the Solver add-in. A window will pop up (sorry, I have a German Excel installation on my PC).
Fill in the target cell $H$29, a target value of 0, and (important) the variable cells $J$14:$J$19.
Press "Solve" and... tada: the polynomial coefficients have changed so that the function fits your data.
Is this what you have been searching for?
Kind regards,
Axel

You may google for and try ThreeDify Excel Grapher v4.5, an Excel add-in that includes a 3D equation fitter with an auto-equation finder.

Related

MATLAB - exponential (exp2) curve fitting function not giving the same output as the plotted graph when using the fitted values in the original equation

My brain is pickled with this one. Below are the two graphs I have plotted with the exp2 function. The points do not match the curve, and this is ultimately changing my entire answer, as it gives the wrong values out, and I cannot understand why.
Here is the code I am using; both graphs plot a concentration against time, yet give different results:
    CH4_fit = fit(Res_time, CH4_exp, 'exp2');
    % Coefficient values for the exponential fit
    CH4_coeff = coeffvalues(CH4_fit);
    CH4_pred = (CH4_coeff(1)*exp(CH4_coeff(2)*Res_time)) + ...
               (CH4_coeff(3)*exp(CH4_coeff(4)*Res_time));
    plot(Res_time, CH4_exp, Res_time, CH4_pred);
Can I just add that the exact same data was run on different computers, and it gave exactly the same equation coefficients (to 4 d.p.) and the same times, yet it still outputs different concentrations on my version? I have R2018b, and I have just used the default settings (I don't know how to change anything, so I definitely haven't).

What's the name of the algorithm concerning daily planner rendering?

Is there a well-known algorithm that is able to take as input a collection of time-bound items (defined by a start time and an end time) and produce a "graphical" layout? By graphical I mean a bi-dimensional projection of those events (2d matrix, 2d space boundaries, whatever).
The output has to be bi-dimensional because the input may contain overlapping events (events beginning at the same time etc.). One dimension would be the time, of course, and the other one is an artificial one.
If we associate a vertical axis y with the time dimension and a horizontal one, x, with the artificial dimension, then I am thinking of an algorithm that plays with X and Y tokens, with token requirements and token availability.
E.g. the algorithm used by Outlook to render the daily view of the calendar etc.
Thank you!
PS: I believe the term "projection" is not correct, because we are adding an artificial dimension :)
PPS: Maybe what I want is one of these?
These slides: http://www.cs.illinois.edu/class/fa07/cs473ug/Lectures/lecture2.pdf call this "interval partitioning" (second part of the slides; I haven't found another reference to the term elsewhere) and give a proof that a greedy algorithm works: sort the items by start time; when processing an item, if it fits into one of the "bins" already open, put it there; otherwise start a new bin and put the item there.
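To make that concrete, here is a minimal Python sketch of the greedy scheme (the function name and the representation of events as (start, end) pairs are my own choices); each bin maps directly onto one column of a daily-planner view:

    import heapq

    def interval_partition(events):
        """Assign each (start, end) event to a column ("bin") so that
        events within one column never overlap."""
        columns = []   # columns[i] is the list of events placed in column i
        free = []      # min-heap of (end time of column's last event, column index)
        for start, end in sorted(events):
            if free and free[0][0] <= start:
                # Some column is already free by this start time: reuse it.
                _, idx = heapq.heappop(free)
            else:
                # All columns are busy: open a new one.
                idx = len(columns)
                columns.append([])
            columns[idx].append((start, end))
            heapq.heappush(free, (end, idx))
        return columns

    # Three meetings, two of which overlap, need two columns:
    print(interval_partition([(9, 11), (10, 12), (11, 13)]))
    # [[(9, 11), (11, 13)], [(10, 12)]]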

Excel chart smoothing algorithm

I need to smooth my "chart builder" programmatically in JavaScript. Excel's way of doing it is pretty good, but I have no idea what algorithm it uses. I tried to get the VBA code by recording a macro, but all I got was the (expected) ActiveChart.SeriesCollection(1).Smooth = True
Does anybody know what algorithm Microsoft Excel uses to smooth a chart, or a way of looking at its code?
UPD: for those who find this question and have the same problem, I can recommend this SVG solution with JavaScript source code
Splines are used to do this:
http://en.wikipedia.org/wiki/Spline_interpolation
I'm not sure if Excel uses exactly this, but cubic splines are often used for this kind of 'join the dots' problem. Essentially you fit a set of cubic polynomials (i.e. a set of cubic equations) through the points, with one cubic covering each region bounded by two adjacent points. The cubics are defined by preserving not only the value but also the gradient at each point where one cubic finishes and the next one starts. Quite often the second derivative at the two end points is set to zero as well, which gives you the remaining boundary conditions and better smoothness.
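If you want to experiment with this, cubic spline interpolation is readily available; the asker wants JavaScript, so treat this Python/SciPy sketch as pseudocode for the algorithm (and again, it is not confirmed that Excel does exactly this):

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Chart points to "join the dots" through (illustrative data only).
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.0, 2.0, 1.0, 3.0, 2.5])

    # bc_type='natural' sets the second derivative to zero at the two
    # end points, the boundary condition mentioned above.
    spline = CubicSpline(x, y, bc_type='natural')

    # Evaluate on a dense grid to get the smooth curve between the points.
    x_dense = np.linspace(x[0], x[-1], 200)
    y_dense = spline(x_dense)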

What does RiBasis which is described in RenderMan mean?

I'm working on a plugin for 3ds Max. In this plugin, I export the geometry information into a .rib file which can be rendered by a RenderMan renderer. I export a NURBS curve's data into the .rib file, described by RiBasis and RiCurve. I use the B-spline basis (RiBSplineBasis) in RiBasis, but I get the wrong result: the rendered curve is shorter than the result of 3ds Max's renderer. When I repeat the first and the last control vertex, the curve is long enough, but its shape is a little different. Can anyone tell me why I get the wrong result, or what RiBasis means? How can I get the correct RiBasis? Thank you very much!
RiCurve draws a cubic spline. The control points do not uniquely determine the curve; you also need the basis, which is expressed as a 4x4 matrix -- one matrix gives the coefficients you need for a B-spline, another for Bezier, Catmull-Rom, and so on, and of course you can also supply the matrix yourself for some kind of hybrid interpolant that isn't quite one of the standard three or four. The basis determines the character of the spline -- whether the curve is guaranteed to go through the control points or is merely approximating, the degree of continuity, the "tension", and so on.
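To make that concrete, here is a small Python sketch that evaluates one cubic segment against a basis matrix. The two matrices are the standard power-basis forms of the B-spline and Bezier bases (they should match the RiBSplineBasis and RiBezierBasis constants, though double-check against the spec):

    import numpy as np

    # P(t) = [t^3 t^2 t 1] . B . G, where G holds four control points.
    B_SPLINE = np.array([[-1,  3, -3, 1],
                         [ 3, -6,  3, 0],
                         [-3,  0,  3, 0],
                         [ 1,  4,  1, 0]]) / 6.0
    BEZIER   = np.array([[-1,  3, -3, 1],
                         [ 3, -6,  3, 0],
                         [-3,  3,  0, 0],
                         [ 1,  0,  0, 0]], dtype=float)

    def eval_segment(basis, G, t):
        """Evaluate one cubic segment at parameter t in [0, 1]."""
        T = np.array([t**3, t**2, t, 1.0])
        return T @ basis @ G

    # With the B-spline basis the segment does NOT start at the first
    # control point; it starts at (G0 + 4*G1 + G2)/6, which is why the
    # rendered curve looks "shorter" than the control polygon.
    G = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
    print(eval_segment(B_SPLINE, G, 0.0))   # not equal to G[0]
    print(eval_segment(BEZIER,   G, 0.0))   # exactly G[0]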
There is a great discussion in one of the appendices of "The RenderMan Companion," including numeric examples of how different basis matrices affect the interpolation.
It sounds like you requested a B-spline basis, which is approximating (not interpolating) and continuous in both 1st and 2nd derivatives. Maybe that's not what you had in mind. It's hard to tell, since you didn't describe the properties of the spline that you were hoping for.
As an aside, approximating an arbitrary NURBS curve with a nonrational cubic is not always going to give you an exact match. Something else to keep in mind.

What type of smoothing to use?

Not sure if this is valid here on SO, but I was hoping someone could advise on the correct algorithm to use.
I have the following RAW data.
In the image you can see "steps". Essentially I wish to keep these steps, but apply a moving average to all the data between them. In the following image, you can see the moving average:
However, you will notice that at the "steps" the moving average reduces the gradient, whereas I wish to keep the steep vertical transition.
Is there any smoothing technique that will take into account a large vertical "offset", but smooth the other data?
Yup, I had to do something similar with images from a spacecraft.
Simple technique #1: use a median filter with a modest width - say about 5 samples, or 7. This provides an output value that is the median of the corresponding input value and several of its immediate neighbors on either side. It will get rid of those spikes and do a good job of preserving the step edges.
The median filter is provided in all the number-crunching toolkits I know of, such as Matlab, Python/Numpy and IDL, and in libraries for compiled languages such as C++ and Java (though specific names don't come to mind right now...).
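A minimal SciPy sketch of technique #1 (the signal here is synthetic, just for illustration):

    import numpy as np
    from scipy.signal import medfilt

    # Synthetic signal: two flat steps with noise and a couple of spikes.
    rng = np.random.default_rng(0)
    data = np.concatenate([np.zeros(50), np.full(50, 10.0)])
    data += rng.normal(0.0, 0.2, 100)
    data[20] += 5.0   # spike
    data[70] -= 5.0   # spike

    # Width-5 median filter: kills the spikes, keeps the step edge sharp.
    smoothed = medfilt(data, kernel_size=5)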
Technique #2, perhaps not quite as good: use a Savitzky-Golay smoothing filter. This works by effectively making a least-squares polynomial fit to the data at each output sample, using the corresponding input sample and a neighborhood of points (much like the median filter). The SG smoother is known for being fairly good at preserving peaks and sharp transitions.
The SG filter is usually provided by most signal-processing and number-crunching packages, but might not be as common as the median filter.
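The corresponding sketch for technique #2 (again with made-up data; window length and polynomial order are parameters to tune):

    import numpy as np
    from scipy.signal import savgol_filter

    # The same kind of stepped, noisy signal as above.
    rng = np.random.default_rng(1)
    data = np.concatenate([np.zeros(50), np.full(50, 10.0)])
    data += rng.normal(0.0, 0.2, 100)

    # Fit a cubic over a sliding 11-sample window at each point;
    # window_length must be odd and greater than polyorder.
    smoothed = savgol_filter(data, window_length=11, polyorder=3)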
Technique #3, the most work and requiring the most experience and judgement: go ahead and use a smoother - moving box average, Gaussian, whatever - but then create an output that blends the original with the smoothed data. The blend, controlled by a new data series you create, varies from all-original (blending in 0% of the smoothed data) to all-smoothed (100%).
To control the blending, start with an edge detector to detect the jumps. You may want to median-filter the data first to get rid of the spikes. Then broaden (dilation, in image-processing jargon) or smooth and renormalize the edge detector's output, and flip it around so it gives 0.0 at and near the jumps and 1.0 everywhere else. Perhaps you want a smooth transition joining them. It is an art to get this right, and it depends on how the data will be used - for me, it's usually images to be viewed by humans. An automated embedded control system might work best if tweaked differently.
The main advantage of this technique is that you can plug in whatever kind of smoothing filter you like; it won't have any effect where the blend control value is zero. The main disadvantage is that the jumps - the small neighborhood defined by the manipulated edge-detector output - will contain noise.
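A rough Python sketch of technique #3, using a hard 0/1 blend instead of a smooth transition (all thresholds and widths are placeholders you would tune for your data):

    import numpy as np
    from scipy.signal import medfilt
    from scipy.ndimage import binary_dilation, uniform_filter1d

    def blend_smooth(data, jump_threshold, smooth_width=9, guard=5):
        """Smooth everywhere, then blend the original signal back in
        near detected jumps so the step edges stay sharp."""
        data = np.asarray(data, dtype=float)
        despiked = medfilt(data, kernel_size=5)               # kill spikes first
        smoothed = uniform_filter1d(data, size=smooth_width)  # moving box average

        # Edge detector: large first differences mark the jumps.
        edges = np.abs(np.diff(despiked, prepend=despiked[0])) > jump_threshold
        # Broaden (dilate) the detections so the blend covers a neighborhood.
        near_jump = binary_dilation(edges, iterations=guard)

        # Blend control: 0.0 at and near jumps (keep original), 1.0 elsewhere.
        w = np.where(near_jump, 0.0, 1.0)
        return w * smoothed + (1.0 - w) * data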
I recommend first detecting the steps and then smoothing each step individually.
You know how to do the smoothing, and edge/step detection is pretty easy too (see here, for example). A typical edge-detection scheme is to smooth your data and then multiply/convolve/cross-correlate it with some filter (for example the array [-1, 1], which will show you where the steps are). In a mathematical context this can be viewed as studying the derivative of your plot to find inflection points (for some of the filters).
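A minimal sketch of that idea in Python (the jump threshold and window width are placeholders to tune):

    import numpy as np
    from scipy.ndimage import uniform_filter1d

    def smooth_per_segment(data, jump_threshold, width=9):
        """Detect the steps, then smooth each flat segment on its own
        so the vertical edges are never averaged across."""
        data = np.asarray(data, dtype=float)
        # Cross-correlating with [-1, 1] is just the first difference:
        # a large |difference| marks where one step ends and the next begins.
        step_starts = np.where(np.abs(np.diff(data)) > jump_threshold)[0] + 1
        bounds = [0, *step_starts.tolist(), len(data)]
        out = data.copy()
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            out[lo:hi] = uniform_filter1d(data[lo:hi], size=min(width, hi - lo))
        return out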
An alternative "hackish" solution would be to do a moving average but exclude outliers from the smoothing. You can decide what an outlier is by using some threshold t: for each point p with value v, take the x points surrounding it, find the subset of those points whose values lie between v - t and v + t, and take the average of that subset as the new value of p.
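A sketch of this "hackish" variant (window size and threshold t are up to you):

    import numpy as np

    def robust_moving_average(data, window=5, t=2.0):
        """Moving average that excludes outliers: for each point p with
        value v, average only the neighbors whose values lie within
        [v - t, v + t]."""
        data = np.asarray(data, dtype=float)
        half = window // 2
        out = np.empty_like(data)
        for i, v in enumerate(data):
            lo, hi = max(0, i - half), min(len(data), i + half + 1)
            neighbors = data[lo:hi]
            keep = neighbors[np.abs(neighbors - v) <= t]
            out[i] = keep.mean()   # never empty: v itself always qualifies
        return out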
