I need to implement an exponential term "exp((-x²*0.008)-0.5)" in VHDL. Here 'x' is an 18-bit fixed-point signal ranging from -90.0 to 90.0. Searching the internet, one solution I found is a Taylor series expansion. Is there an alternative solution that better suits my needs? If a Taylor series is the only way to evaluate the 'exp' function, what should the summation limit be? Thank you.
The summation limit should be whatever is needed to meet your required accuracy. It is usually best to find that out in other tools (even a spreadsheet!) before worrying about VHDL.
For sufficiently smooth curves, a quadratic interpolation using coefficients from a small lookup table may be accurate enough (again: evaluate in MatLab, Octave or a spreadsheet) and is fairly easy to implement.
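For illustration, here is a minimal C model of that lookup-table-plus-quadratic-interpolation idea for f(x) = exp(-0.008*x² - 0.5) (the function is even in x, so the table only covers 0..90); the segment count is an assumption to be tuned against your accuracy target:

/* Quick accuracy check for a LUT + quadratic interpolation of
 * f(x) = exp(-0.008*x*x - 0.5) over |x| <= 90 (f is even in x).
 * The segment count below is an assumption to be tuned. */
#include <stdio.h>
#include <math.h>

#define NSEG 64                 /* number of table segments (assumption) */
#define XMAX 90.0

static double f(double x) { return exp(-0.008 * x * x - 0.5); }

/* per-segment quadratic coefficients: f ~ c0 + c1*t + c2*t*t,
 * where t in [0,1] is the position inside the segment */
static double c0[NSEG], c1[NSEG], c2[NSEG];

static void build_table(void) {
    double h = XMAX / NSEG;
    for (int i = 0; i < NSEG; i++) {
        double x0 = i * h;
        double y0 = f(x0), ym = f(x0 + 0.5 * h), y1 = f(x0 + h);
        /* parabola through (0,y0), (0.5,ym), (1,y1) */
        c0[i] = y0;
        c1[i] = -3.0 * y0 + 4.0 * ym - y1;
        c2[i] =  2.0 * y0 - 4.0 * ym + 2.0 * y1;
    }
}

static double f_approx(double x) {
    double ax = fabs(x);                    /* exploit symmetry */
    if (ax >= XMAX) ax = nextafter(XMAX, 0.0);
    double s = ax * NSEG / XMAX;            /* segment index + fraction */
    int i = (int)s;
    double t = s - i;
    return c0[i] + (c1[i] + c2[i] * t) * t;
}

int main(void) {
    build_table();
    double maxerr = 0.0;
    for (double x = -90.0; x <= 90.0; x += 0.001) {
        double e = fabs(f_approx(x) - f(x));
        if (e > maxerr) maxerr = e;
    }
    printf("max abs error with %d segments: %g\n", NSEG, maxerr);
    return 0;
}

In hardware the segment index comes straight from the upper bits of |x| and the three coefficients per segment are stored at the LUT word width; the point of the model is simply to find out how many segments and coefficient bits your accuracy target really needs. A variant of the same experiment (summing Taylor terms and comparing against exp()) is how you would answer the summation-limit question from the first paragraph.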
Do symbolic math calculations (especially for solving nonlinear polynomial systems) cause a huge performance (calculation-speed) disadvantage compared to numeric calculations? Are there any benchmarks/data about this?
Found a related question: Symbolic computation vs. numerical computation
Another one: Computational Efficiency of Forward Mode Automatic vs Numeric vs Symbolic Differentiation
I am the individual who answered the Scicomp question you reference in your question. I personally am not aware of any empirical metrics performed to compare run-time performance for symbolic versus numerical solutions to systems of polynomial equations.
However, it should be fairly intuitive that symbolic solutions will have a bit more overhead for most aspects of solving the problem, due to things such as manipulating terms in the equation symbolically, searching for ways to simplify/rearrange equations to make them easier to solve, searching through known closed-form solutions, etc. One major issue with symbolic solvers is that a closed-form solution may not exist (or may not be found), so you would have to fall back on solving the problem numerically anyway.
The only way I can see symbolic solvers outperforming numerical solutions in terms of run-time is if the symbolic solver can recognize quickly enough that your problem has a known analytical solution, or if it eventually arrives at a solution while the numerical solver never does (i.e., the numerical solver diverges).
Given you can find a numerical solver that converges, I think the numerical case will generally be much more efficient since there's just much less overhead to make progress in refining your solution. Since you mention solving systems of polynomial equations, I suspect there are also some tailored algorithms for your type of problem that may be superior to typical nonlinear equation solving schemes.
This is not a direct answer to the question but a suggested course correction.
While it is possible to evaluate math expressions in a purely numeric means or in a purely symbolic means, it is also possible to use a hybrid approach.
This is known as Symbolic-numeric computation
Maple is one software package that has this ability.
Note: I have never used Maple so I can't add more.
Searching for packages
I find I get better results when looking for math packages that use symbolic-numeric computation by searching for the name of the package combined with "symbolic-numeric computation", e.g.
wolfram symbolic-numeric computation
A specific example related to neural networks
In the world of neural networks one has to be able to calculate derivatives; however, if a derivative can be simplified before it is evaluated, then the cost of evaluating it goes down. Since simplifying the derivative is a one-time action while the evaluation occurs thousands to millions of times, the simplification is done symbolically and then the calculation is done numerically. Theano is a software package that does this specifically for use with neural networks.
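A hand-worked illustration of the principle in plain C (not Theano itself, just the idea): the derivative of the logistic function s(x) = 1/(1+exp(-x)) simplifies symbolically to s(x)*(1-s(x)), and only that simplified form is evaluated in the hot loop:

/* "Simplify once symbolically, evaluate many times numerically":
 * the derivative of the logistic function s(x) = 1/(1+exp(-x))
 * simplifies to s(x)*(1-s(x)), so the hot loop needs a single exp()
 * instead of a quotient-rule expression. */
#include <stdio.h>
#include <math.h>

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* the simplified symbolic result, evaluated numerically */
static double sigmoid_prime(double x) {
    double s = sigmoid(x);
    return s * (1.0 - s);
}

int main(void) {
    /* the "thousands to millions of times" part */
    double acc = 0.0;
    for (int i = 0; i < 1000000; i++) {
        double x = -5.0 + 10.0 * i / 999999.0;
        acc += sigmoid_prime(x);
    }
    printf("sum of derivatives over the grid: %f\n", acc);
    return 0;
}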
I need to calculate the function y = x/(1+x^2) on a small FPGA in fixed point. Can you help me find the best algorithm?
I thought of these possibilities:
- as the FPGA is small, I think I will use shift&add/subtract algebra
- a multiplier and a divider will use about the same number of cycles, right?
- this function is similar to a digital filter; can I calculate it division-free using a feedback loop?
- I don't think I will have much memory, so I'd prefer not to use a LUT
- Are there other options?
The timing requirement isn't very strict, so I thought of a simple shift&subtract, but if there is something simpler, that might be better.
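For reference, here is a minimal C model of the plain restoring (shift-and-subtract) division mentioned above, which translates naturally into a one-bit-per-cycle FSM in an FPGA; the Q12 format and the unsigned, positive-x-only handling are assumptions for the sketch, not a recommendation:

/* Bit-serial restoring division: computes (num << FRAC) / den one quotient
 * bit per iteration, i.e. num/den in unsigned Q(FRAC) fixed point.
 * Widths, FRAC and the lack of sign/overflow handling are assumptions. */
#include <stdio.h>
#include <stdint.h>

#define FRAC 12   /* fractional bits of the result (assumption) */

static uint64_t fxdiv(uint64_t num, uint64_t den) {
    uint64_t n = num << FRAC;                /* pre-scaled numerator */
    uint64_t rem = 0, q = 0;
    for (int i = 63; i >= 0; i--) {
        rem = (rem << 1) | ((n >> i) & 1);   /* shift in next numerator bit */
        q <<= 1;
        if (rem >= den) {                    /* subtract step; "restore" by not subtracting */
            rem -= den;
            q |= 1;
        }
    }
    return q;                                /* quotient in Q(FRAC) */
}

int main(void) {
    double xr = 1.5;                                      /* test input, x >= 0 assumed */
    uint64_t x   = (uint64_t)(xr * (1 << FRAC));          /* x in Q12 */
    uint64_t den = (1u << FRAC) + ((x * x) >> FRAC);      /* 1 + x^2 in Q12 */
    uint64_t y   = fxdiv(x, den);                         /* x/(1+x^2) in Q12 */
    printf("fixed point: %f   reference: %f\n",
           (double)y / (1 << FRAC), xr / (1.0 + xr * xr));
    return 0;
}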
I am implementing a SPICE solver and I have the following problem: say I put two diodes and a current source in series (standard diodes). I use MNA and Boost's LU decomposition. The problem is that the nodal matrix very quickly becomes near-singular. I think I have to scale the values, but I don't know how, and I couldn't find anything on the Internet. Any ideas on how to do this scaling?
From a numerical perspective, there is a scaling technique for this kind of near-singular matrix. Basically, the technique is to divide each row of A by the sum (or the maximum) of the absolute values in that row. You can look at KLU, a linear solver for circuit simulation, for more details.
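A minimal sketch of that row scaling (equilibration) in C, applied to both A and the right-hand side so the solution is unchanged; the 3x3 values are made up to mimic a matrix mixing very large and very small conductances:

/* Row equilibration: divide each row of A (and the matching entry of b)
 * by the largest absolute value in that row, so every row has max |a_ij| = 1.
 * Scaling a whole equation changes b but not the solution x. */
#include <stdio.h>
#include <math.h>

#define N 3

static void equilibrate_rows(double A[N][N], double b[N]) {
    for (int i = 0; i < N; i++) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            if (fabs(A[i][j]) > s) s = fabs(A[i][j]);
        if (s == 0.0) continue;              /* empty row: leave it alone */
        for (int j = 0; j < N; j++) A[i][j] /= s;
        b[i] /= s;
    }
}

int main(void) {
    /* wildly different row magnitudes, as in a nodal matrix mixing
     * huge and tiny conductances (values are made up) */
    double A[N][N] = { {  1e12, -1e12,          0.0   },
                       { -1e12,  1e12 + 1e-12, -1e-12 },
                       {  0.0,  -1e-12,         1e-12 } };
    double b[N] = { 1e-3, 0.0, -1e-3 };
    equilibrate_rows(A, b);
    for (int i = 0; i < N; i++)
        printf("%12.4e %12.4e %12.4e | %12.4e\n",
               A[i][0], A[i][1], A[i][2], b[i]);
    return 0;
}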
From the perspective of SPICE simulation, a so-called Gmin stepping technique is used to iteratively compute and approach the real answer. You can find this in the documentation of the SPICE project QUCS (Quite Universal Circuit Simulator).
Scaling does not help when the matrix has both very large and very small entries.
It is necessary to use some or all of the many tricks that were developed for circuit solver applications. A good start is clipping the range of the exponential and log function arguments to reasonable values -- in most circuits a diode forward voltage is never more than 1V and the diode reverse current not less than 1pA.
Actually, look at all library functions and wrap them in code that makes their arguments and results suitable for circuit-solving purposes. Simple clipping is sometimes good enough, but it is way better to make sure the functions stay (twice) differentiable and continuous.
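As one concrete example of such a wrapper, the exponential can be continued above a chosen threshold with its second-order Taylor polynomial, which keeps the value and the first and second derivatives continuous; the threshold below is an assumption:

/* "Limited" exponential for circuit solving: exact exp() below a threshold,
 * quadratic continuation above it.  Value, first and second derivative are
 * continuous at the threshold, so Newton iterations stay well behaved and
 * huge trial arguments can no longer overflow. */
#include <stdio.h>
#include <math.h>

#define EXP_LIMIT 40.0   /* threshold in units of the argument (assumption) */

static double limexp(double x) {
    if (x <= EXP_LIMIT)
        return exp(x);
    double d = x - EXP_LIMIT;
    /* exp(EXP_LIMIT) * (1 + d + d*d/2) matches exp, exp' and exp'' at the limit */
    return exp(EXP_LIMIT) * (1.0 + d + 0.5 * d * d);
}

int main(void) {
    /* a diode equation at an absurd trial voltage (20 V across the junction,
     * far outside the physical range) no longer produces infinity */
    double Is = 1e-14, Vt = 0.02585, v = 20.0;
    printf("raw exp : %g A\n", Is * (exp(v / Vt) - 1.0));
    printf("limited : %g A\n", Is * (limexp(v / Vt) - 1.0));
    return 0;
}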
I am working on my M.Sc. dissertation, and in the theoretical part of my thesis I have a big problem.
Suppose we want to use genetic algorithms. We have two kinds of functions:
a) functions with a relation like ||x1 - x2|| >> ||f(x1) - f(x2)||, for example y = (1/10)x^2
b) functions with a relation like ||x1 - x2|| << ||f(x1) - f(x2)||, for example y = x^2
My question is: which of the above kinds of functions is more difficult when we use genetic algorithms to find an optimum (never mind MINIMUM or MAXIMUM)?
Thank you a lot,
Armin
I don't believe you can answer this question in general without imposing additional constraints.
It's going to depend on the particular type of genetic algorithm you're dealing with. If you use fitness proportional (roulette-wheel) selection, then altering the range of fitness values can matter a great deal. With tournament selection or rank-biased selection, as long as the ordering relations hold between individuals, there will be no effects.
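A small numerical illustration of that point, with a made-up toy population: adding a constant to every fitness value flattens the roulette-wheel selection probabilities almost completely, while binary tournament selection, which only compares individuals, is unaffected:

/* Roulette-wheel selection probabilities before and after adding a constant
 * offset to every fitness value.  Tournament and rank selection are driven
 * purely by comparisons, so any order-preserving transformation leaves them
 * unchanged. */
#include <stdio.h>

#define N 4

static void roulette_probs(const double f[N], double p[N]) {
    double sum = 0.0;
    for (int i = 0; i < N; i++) sum += f[i];
    for (int i = 0; i < N; i++) p[i] = f[i] / sum;
}

int main(void) {
    double f[N] = { 1.0, 2.0, 3.0, 4.0 };   /* toy fitness values (assumption) */
    double g[N], p[N], q[N];
    for (int i = 0; i < N; i++) g[i] = f[i] + 100.0;   /* same ordering, shifted */

    roulette_probs(f, p);
    roulette_probs(g, q);
    for (int i = 0; i < N; i++)
        printf("individual %d: p=%.3f (original)   p=%.3f (shifted)\n",
               i, p[i], q[i]);
    /* prints 0.100/0.200/0.300/0.400 for the original fitnesses and roughly
     * 0.246/0.249/0.251/0.254 for the shifted ones: almost no selection
     * pressure is left after the shift. */
    return 0;
}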
Even if you can say that it does matter, it's still going to be difficult to say which version is harder for the GA. The main effect will be on selection pressure, which causes the algorithm to converge more or less quickly. Is that good or bad? It depends. For a function like f(x)=x^2, converging as fast as possible is probably great, because there's only one optimum, so find it as soon as possible. For a more complex function, slower convergence can be required to find good solutions. So for any given function, scaling and/or translating the fitness values may or may not make a difference, and if it does, the difference may or may not be helpful.
There's probably also a No Free Lunch argument that no single best choice exists over all problems and optimization algorithms.
I'd be happy to be corrected, but I don't believe you can say one way or the other without specifying much more precisely exactly what class of algorithms and problems you're focusing on.
I just recently came across the Kahan (or compensated) summation algorithm for minimizing roundoff, and I'd like to know if there are equivalent algorithms for division and/or multiplication, as well as subtraction (if there happens to be one, I know about associativity). Implementation examples in any language, pseudo-code or links would be great!
Thanks
Subtraction is usually handled via the Kahan method.
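Since the question asks for implementation examples, here is a plain compensated-summation sketch in C; subtraction is just the same loop fed with negated terms:

/* Kahan (compensated) summation: the running compensation c recovers the
 * low-order bits lost when each term is added to the large running sum.
 * Note: aggressive optimizations such as -ffast-math may legally delete
 * the compensation, so compile without them. */
#include <stdio.h>

static double kahan_sum(const double *x, int n) {
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; i++) {
        double y = x[i] - c;      /* corrected term */
        double t = sum + y;       /* low bits of y are lost here ... */
        c = (t - sum) - y;        /* ... and recovered here */
        sum = t;
    }
    return sum;
}

int main(void) {
    /* 1 followed by many tiny terms: naive summation loses them entirely */
    double x[1001];
    x[0] = 1.0;
    for (int i = 1; i <= 1000; i++) x[i] = 1e-17;
    double naive = 0.0;
    for (int i = 0; i <= 1000; i++) naive += x[i];
    printf("naive: %.17g   kahan: %.17g\n", naive, kahan_sum(x, 1001));
    return 0;
}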
For multiplication, there are algorithms to convert a product of two floating-point numbers into a sum of two floating-point numbers without rounding, at which point you can use Kahan summation or some other method, depending on what you need to do next with the product.
If you have FMA (fused multiply-add) available, this can easily be accomplished as follows:
p = a*b;           /* rounded product */
r = fma(a,b,-p);   /* rounding error of the product: a*b - p, computed exactly by the FMA */
After these two operations, if no overflow or underflow occurs, p + r is exactly equal to a * b without rounding. This can also be accomplished without FMA, but it is rather more difficult. If you're interested in these algorithms, you might start by downloading the crlibm documentation, which details several of them.
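For completeness, the classic FMA-free route (Veltkamp splitting followed by Dekker's product) looks roughly like this; it assumes strict IEEE-754 double arithmetic and no intermediate overflow:

/* Dekker's TwoProduct without FMA: split each double into two ~26-bit halves
 * (Veltkamp splitting with the constant 2^27 + 1), then reconstruct the
 * rounding error of the product from exact partial products.
 * Afterwards p + r == a*b exactly, barring overflow/underflow. */
#include <stdio.h>

static void split(double a, double *hi, double *lo) {
    double c = 134217729.0 * a;   /* 2^27 + 1 */
    *hi = c - (c - a);
    *lo = a - *hi;
}

static void two_product(double a, double b, double *p, double *r) {
    double ah, al, bh, bl;
    *p = a * b;
    split(a, &ah, &al);
    split(b, &bh, &bl);
    *r = ((ah * bh - *p) + ah * bl + al * bh) + al * bl;
}

int main(void) {
    double p, r;
    two_product(1.0 + 1e-10, 1.0 - 1e-10, &p, &r);
    printf("p = %.17g, r = %.17g (p + r is the exact product)\n", p, r);
    return 0;
}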
Division... well, division is best avoided. Division is slow, and compensated division is even slower. You can do it, but it's brutally hard without FMA, and non-trivial with it. Better to design your algorithms to avoid it as much as possible.
Note that all of this becomes a losing battle pretty quickly. There's a very narrow band of situations where these tricks are beneficial--for anything more complicated, it's much better to just use a wider-precision floating point library like mpfr. Unless you're an expert in the field (or want to become one), it's usually best to just learn to use such a library.
Designing algorithms to be numerically stable is an academic discipline and field of research in its own right. It's not something you can do (or learn) meaningfully via "cheat sheets" - it requires specific mathematical knowledge and needs to be done for each specific algorithm. If you want to learn how to do this, the reference in the Wikipedia article sounds pretty good: Nicholas J. Higham, Accuracy and Stability of Numerical Algorithms, Society of Industrial and Applied Mathematics, Philadelphia, 1996. ISBN 0-89871-355-2.
A relatively simple way to diagnose the stability of an algorithm is to use interval arithmetic.
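As a sketch of what that can look like, here is a crude interval type (outward rounding approximated by widening every result by one ulp per side; a real interval library uses directed rounding) used to compare a factored and an expanded evaluation of (x-1)^2 near x = 1; the expanded form's interval is enormously wider, exposing the cancellation:

/* Crude interval arithmetic for diagnosing instability: every operation's
 * bounds are widened by one ulp on each side, which over-covers the
 * round-to-nearest error.  A real library would use directed rounding. */
#include <stdio.h>
#include <math.h>

typedef struct { double lo, hi; } ival;

static ival widen(double lo, double hi) {
    ival r = { nextafter(lo, -INFINITY), nextafter(hi, INFINITY) };
    return r;
}
static ival iv(double x)         { ival r = { x, x }; return r; }
static ival iadd(ival a, ival b) { return widen(a.lo + b.lo, a.hi + b.hi); }
static ival isub(ival a, ival b) { return widen(a.lo - b.hi, a.hi - b.lo); }
static ival imul(ival a, ival b) {
    double c[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
    double lo = c[0], hi = c[0];
    for (int i = 1; i < 4; i++) { if (c[i] < lo) lo = c[i]; if (c[i] > hi) hi = c[i]; }
    return widen(lo, hi);
}

int main(void) {
    ival x = iv(1.0 + 1e-6);
    ival one = iv(1.0), two = iv(2.0);

    /* factored: (x-1)*(x-1), numerically benign */
    ival d = isub(x, one);
    ival fact = imul(d, d);

    /* expanded: x*x - 2*x + 1, catastrophic cancellation */
    ival expd = iadd(isub(imul(x, x), imul(two, x)), one);

    printf("factored : [%.17g, %.17g]  width %.3g\n", fact.lo, fact.hi, fact.hi - fact.lo);
    printf("expanded : [%.17g, %.17g]  width %.3g\n", expd.lo, expd.hi, expd.hi - expd.lo);
    return 0;
}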
You could use bignums and rational fractions rather than floating-point numbers, in which case you are limited only by the finite availability of memory to hold the required precision.
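A toy C sketch of the rational-fraction idea (machine-word numerator and denominator only; a real implementation would sit on top of a bignum library such as GMP so the components can grow without bound):

/* Exact rational arithmetic with a toy fraction type.  With machine integers
 * the components overflow quickly; backing them with bignums removes that
 * limit at the cost of memory and speed. */
#include <stdio.h>

typedef struct { long long num, den; } frac;

static long long gcd_ll(long long a, long long b) {
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

static frac reduce(frac f) {
    long long g = gcd_ll(f.num, f.den);
    if (g != 0) { f.num /= g; f.den /= g; }
    if (f.den < 0) { f.num = -f.num; f.den = -f.den; }
    return f;
}

static frac fadd(frac a, frac b) {
    frac r = { a.num * b.den + b.num * a.den, a.den * b.den };
    return reduce(r);
}

int main(void) {
    /* 0.1 + 0.2 is not exactly 0.3 in binary floating point,
     * but 1/10 + 2/10 is exactly 3/10 here */
    frac a = { 1, 10 }, b = { 2, 10 };
    frac c = fadd(a, b);
    printf("%lld/%lld\n", c.num, c.den);          /* prints 3/10 */
    printf("float: %.17g\n", 0.1 + 0.2);          /* prints 0.30000000000000004 */
    return 0;
}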