Suppose that I have
fu3 = g.^5.*qQd./(exp(g.^2/T(i)));
I_value = (1/T^3)*trapz(g,fu3);
So the result depends on T and on s (which enters through qQd). This calculation takes several minutes, so I want to fit this expression with a simple polynomial expression. I think the name for this is fitting or interpolation. I want to find a polynomial expression that reproduces my integral I_value. Any help?
What's your integration range? Let me assume it is pretty large. Then your integrand varies by many orders of magnitude (did you plot it? You probably need a logarithmic scale to see the full extent of the problem). This is why trapezoidal integration takes minutes. Replacing the integrand by a polynomial is not an option here.
Solution 1: Solve the problem analytically (integration by parts, and the substitution u = g^2).
Solution 2: If you nevertheless want to do it numerically, use a better integration scheme than equidistant trapezoids. You need an integrator that adapts its step size to the magnitude variations of the integrand.
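For Solution 2, here is a hedged sketch using MATLAB's built-in adaptive quadrature, integral. The limits 0 and Inf, the value of T, and the qQd stand-in are placeholders, since your actual range and qQd were not given:
T = 300;                                          % placeholder value of T
qQd_fun = @(g) ones(size(g));                     % hypothetical stand-in for your qQd(g, s)
fu3 = @(g) g.^5 .* qQd_fun(g) .* exp(-g.^2/T);    % same integrand, written with exp(-x)
I_value = (1/T^3) * integral(fu3, 0, Inf);        % adaptive quadrature chooses the step size itself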
I solved an analytically unsolvable problem with numerical methods. I am searching for X, based on a desired Y value. f(x)=y is possible, x=f^-1(y) is not.
Currently the algorithm does a binary search. It starts at X = 50%, calculates Y, and returns Y_err = Y - Y_demand. It keeps stepping in 5% intervals in the direction that shrinks Y_err until Y_err changes sign, then it reduces the step size and steps in the opposite direction. This works, but it's embarrassingly slow and inefficient.
Below, an example chart of x=f^-1(y). I chose one with high coefficients for the nonlinear part.
[Example chart of x = f^-1(y)]
It varies depending on the coefficients, but it always has this pseudo-parabolic shape. It is of course nonlinear, and even 9th-order polynomial approximations don't offer satisfactory precision.
For simplicity's sake, let's say the inflection point is at X = 50%, and I am looking only for solutions where X > 50%.
How should I proceed? I'm looking to optimise as much as possible. What are some good algorithms? Thanks.
EDIT: Thank you for pointing out that this is not in fact a binary search. I've updated the code and now have much better results by comparison.
I'm not sure if Newton's method applies here, or at least I don't know how to apply it. One-way trial and error is all I can do. When I have some more time I will try to learn and implement regula falsi.
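For reference, a minimal regula falsi sketch; the function and bracket below are placeholders (for this problem, f would be Y_err(X) and the bracket something like [0.5, 1.0]):
f = @(x) x.^3 - 0.6;                % placeholder; replace with Y_err(X) = f(X) - Y_demand
a = 0.5; b = 1.0;                   % bracket where f(a) and f(b) have opposite signs
fa = f(a); fb = f(b);
for k = 1:100
    x = b - fb*(b - a)/(fb - fa);   % secant through the two bracket endpoints
    fx = f(x);
    if abs(fx) < 1e-10, break; end
    if sign(fx) == sign(fa)
        a = x; fa = fx;             % keep the sub-interval that still brackets the sign change
    else
        b = x; fb = fx;
    end
end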
I am trying to find how well two images (let's say "image1" and "image2") match each other.
Several parameters are calculated for each possible position of "image2" relative to "image1", and I have an empirical formula which gives a "score" to each position depending on those parameters.
I tried to match image pairs with the help of neural networks, but failed: the empirical formula works much better. This got me thinking about improving the formula with the help of a genetic algorithm.
So, the question is: I have a bunch of image pairs, and for each pair I know the "right" match position. Can a genetic algorithm be used for this kind of thing? Any examples?
Suggestions and links are appreciated.
Thanks.
Basically, yes! The parameters of your score function could be the parameters that your GA is going to evolve. You may want to use a real-coded genetic algorithm or an evolution strategy (CMA-ES) if your parameters are in the real domain.
There exist several possible choices for crossover:
Average / Intermediate
Blend-Alpha (BLX-a)
Blend-Alpha-Beta (BLX-a-b)
Discrete
Heuristic
Local
Random Convex
Simulated Binary (SBX)
Single Point
And also some mutation operators:
Normally distributed, N(0, sigma) -> e.g. with adaptation to reduce sigma over time
Uniformly distributed (at some positions)
Polynomial mutation
Another metaheuristic suitable for real coded problems is particle swarm optimization (PSO).
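Here is a rough sketch of a real-coded GA with BLX-alpha crossover and Gaussian mutation. score_error is a hypothetical stand-in for your fitness function: it should run your matcher over the known image pairs and return how far the predicted positions are from the known correct ones.
score_error = @(w) sum((w - [1 2 3]).^2);              % placeholder fitness, lower is better
nvars = 3; pop_size = 30; n_gen = 100; alpha = 0.5; sigma = 0.1;
pop = rand(pop_size, nvars);                           % random initial parameter vectors
for gen = 1:n_gen
    fit = arrayfun(@(i) score_error(pop(i,:)), 1:pop_size)';
    [~, order] = sort(fit);
    parents = pop(order(1:pop_size/2), :);             % truncation selection: keep the better half
    children = zeros(size(parents));
    for i = 1:size(parents, 1)
        p = parents(randperm(size(parents, 1), 2), :);          % pick two distinct parents
        lo = min(p) - alpha*abs(diff(p));                       % BLX-alpha interval per gene
        hi = max(p) + alpha*abs(diff(p));
        children(i,:) = lo + rand(1, nvars).*(hi - lo);         % sample the child inside the blended box
        children(i,:) = children(i,:) + sigma*randn(1, nvars);  % Gaussian mutation N(0, sigma)
    end
    pop = [parents; children];
end
fit = arrayfun(@(i) score_error(pop(i,:)), 1:pop_size)';
[~, best] = min(fit);
best_params = pop(best, :);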
With genetic programming you're going to evolve a formula (e.g. a tree). I'm not so sure why you mention it, maybe I still misunderstand something. Clarify your problem, just in case.
EDIT:
Okay it seems it's not the weights that you want to optimize, but the whole formula. Still, genetic algorithms can be used for this representation as well. I want to mention HeuristicLab due to its good support for genetic programming.
I assume you have a more complex problem, since you want to optimize the scoring function and still have another algorithm for optimizing the placement according to that scoring function. You could try an easy approach: generate a dataset with several predefined positions and the corresponding calculated features. Then you could formulate a classification problem and find a model that allows you to identify the positionings that are optimal.
I was wondering if anyone knows what kind of algorithm could be used in my case. I have already run the optimizer on my multivariate function and found a solution to my problem, assuming that my function is regular enough. I now perturb the problem slightly and would like to find the optimal solution, which is close to my previous solution. Is there a very fast algorithm for this case, or should I just fall back to a regular one?
We probably need a bit more information about your problem; but since you know you're near the right solution, Newton-Raphson is a sensible choice if derivatives are easy to calculate, and conjugate gradient may make sense if not.
If you already have an iterative optimizer (for example, based on Powell's direction set method, or CG), why don't you use your initial solution as a starting point for the next run of your optimizer?
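As a concrete sketch of that warm start (fminsearch is base MATLAB's derivative-free Nelder-Mead optimizer; the objective below is only a placeholder for the perturbed cost function):
objective = @(x) (x(1) - 1.05)^2 + 10*(x(2) - 2.02)^2;   % hypothetical perturbed problem
x_prev = [1.0, 2.0];                                     % minimizer of the unperturbed problem
opts = optimset('TolX', 1e-8, 'TolFun', 1e-8);
x_new = fminsearch(objective, x_prev, opts);             % warm start: begins at the old optimum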
EDIT: Regarding your comment: if calculating the Jacobian or the Hessian matrix gives you performance problems, try BFGS (http://en.wikipedia.org/wiki/BFGS_method), which avoids calculating the Hessian completely. At
http://www.alglib.net/optimization/lbfgs.php you will find a (free-for-non-commercial) implementation of BFGS. A good description of the details can be found there as well.
And don't expect to gain anything by computing your initial solution with a less sophisticated algorithm.
So this is all about unconstrained optimization. If you need information about constrained optimization, I suggest you google for "SQP".
There are a bunch of algorithms for finding the roots of equations. If you know approximately where the root is, there are algorithms that will get you arbitrarily close very quickly, in logarithmic time or better.
One is Newton's method
another is the Bisection Method
Note that these algorithms are for single-variable functions, but they can be extended to multivariate functions.
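A minimal Newton's-method sketch; f, df, and the starting point below are placeholders:
f  = @(x) x.^2 - 2;            % hypothetical equation f(x) = 0
df = @(x) 2*x;                 % its derivative
x = 1.5;                       % rough initial guess near the root
for k = 1:50
    step = f(x)/df(x);
    x = x - step;              % Newton update: x <- x - f(x)/f'(x)
    if abs(step) < 1e-12, break; end
end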
Every minimization algorithm performs better (read: performs at all) if you have a good initial guess. In your case, the initial guess for the perturbed problem will be the minimum point of the unperturbed problem.
Then you have to specify your requirements: you want speed. What accuracy do you want? Does space efficiency matter? Most importantly: what information do you have? Only the value of the function, or do you also have the derivatives (possibly second derivatives)?
Some background on the problem would help too. Looking for a smooth function that has been discretized will be very different from looking for hundreds of unrelated parameters.
Global information (i.e. is the function convex, is there a guaranteed global minimum or many local ones, etc.) can be left aside for now. If you have trouble finding the minimum point of the perturbed problem, though, this is something you will have to investigate.
Answering these questions will allow us to select a particular algorithm. There are many choices (and trade-offs) for multivariate optimization.
Also, which is quicker will very much depend on the problem (rather than on the algorithm), and should be determined by experimentation.
Though I don't know much about using computers in this capacity, I remember an article that used neuroevolutionary techniques to find "best-fit" equations relatively efficiently, given a known functional complexity (linear, Nth-order polynomial, exponential, logarithmic, etc.) and a set of plotted points. As I recall, it was one of the earliest uses of what we now know as computational neuroevolution; because the functional complexity (and thus the number of terms) of the equation is known and fixed, a static neural net can be used and seeded with your closest values, then "mutated" and tested for fitness, with heuristics that make new nets closer to existing nets with high fitness. Using multithreading, many nets can be created, tested, and evaluated in parallel.
I've experimented with the two ways of implementing a least-squares fit (LSF) algorithm shown here.
The first code is simply the textbook approach, as described by Wolfram's page on LSF. The second code rearranges the equation to minimize machine errors. Both codes produce similar results for my data. I compared these results with Matlab's p = polyfit(x,y,1) function, using correlation coefficients to measure the goodness of fit and compare the three routines. I observed that while all three methods produced good results, at least for my data, Matlab's routine had the best fit (the other two routines gave results similar to each other).
Matlab's p=polyfit(x,y,1) function uses a Vandermonde matrix, V (n x 2 matrix) and QR factorization to solve the least-squares problem. In Matlab code, it looks like:
V = [x(:) ones(numel(x),1)];   % Vandermonde matrix for a linear fit: columns are x and 1
[Q,R] = qr(V,0);
p = R\(Q'*y); % performs same as p = V\y
I'm not a mathematician, so I don't understand why it would be more accurate. Although the difference is slight, in my case I need to obtain the slope from the LSF and multiply it by a large number, so any improvement in accuracy shows up in my results.
For reasons I can't get into, I cannot use Matlab's routine in my work. So I'm wondering if anyone can recommend a more accurate equation-based approach that improves on the two approaches above in terms of rounding errors / machine accuracy, etc.
Any comments appreciated! thanks in advance.
For a polynomial fit, you can create a Vandermonde matrix and solve the linear system, as you have already done.
Another option is to use a method like Gauss-Newton to fit the data (since the system is linear, one iteration should do fine). There are differences between the methods; one possible cause is Runge's phenomenon.
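One portable, equation-level reformulation worth trying (this may already be what your second code does) is to compute the slope from centered data, which avoids most of the cancellation in the textbook formula:
xm = mean(x); ym = mean(y);                            % x, y are the data vectors from the question
slope = sum((x - xm).*(y - ym)) / sum((x - xm).^2);    % least-squares slope from centered sums
intercept = ym - slope*xm;                             % recover the intercept afterwards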
Does anyone have experience with algorithms for evaluating hypergeometric functions? I would be interested in general references, but I'll describe my particular problem in case someone has dealt with it.
My specific problem is evaluating a function of the form 3F2(a, b, 1; c, d; 1) where a, b, c, and d are all positive reals and c+d > a+b+1. There are many special cases that have a closed-form formula, but as far as I know there are no such formulas in general. The power series centered at zero converges at 1, but very slowly; the ratio of consecutive coefficients goes to 1 in the limit. Maybe something like Aitken acceleration would help?
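For reference, a direct-summation sketch using the term recurrence at z = 1; the parameter values below are arbitrary examples satisfying c + d > a + b + 1:
a = 1.2; b = 0.7; c = 2.5; d = 1.3;     % example values with c + d > a + b + 1
term = 1; s = 1;                        % the n = 0 term of the series is 1
for n = 0:1e6
    term = term * (a + n)*(b + n) / ((c + n)*(d + n));   % ratio of consecutive terms at z = 1
    s = s + term;
    if term < 1e-12 * s, break; end     % this tolerance is reached only after very many terms
end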
I tested Aitken acceleration and it does not seem to help for this problem (nor does Richardson extrapolation). This probably means Pade approximation doesn't work either. I might have done something wrong though, so by all means try it for yourself.
I can think of two approaches.
One is to evaluate the series at some point such as z = 0.5, where convergence is rapid, to get an initial value, and then step forward to z = 1 by plugging the hypergeometric differential equation into an ODE solver. I don't know how well this works in practice; it might not, because z = 1 is a singularity (if I recall correctly).
The second is to use the definition of 3F2 in terms of the Meijer G-function. The contour integral defining the Meijer G-function can be evaluated numerically by applying Gaussian or doubly-exponential quadrature to segments of the contour. This is not terribly efficient, but it should work, and it should scale to relatively high precision.
Is it correct that you want to sum a series where you know the ratio of successive terms and it is a rational function?
I think Gosper's algorithm and the rest of the tools for proving hypergeometric identities (and finding them) do exactly this, right? (See Wilf and Zeilberger's A=B book online.)