I would like to ask a question about matrix multiplication in HDL. I have been learning about FPGA and ASIC design for six months, but I still do not have enough experience to program FPGAs in Verilog/VHDL. After a quick search I found that Verilog is suitable for me. Anyway, please treat me as a beginner; so far I have only followed simple tutorials using the Xilinx Spartan 3E-XCS1600E MicroBlaze Starter Kit, since that is the board I have.
The most challenging part for me was creating matrices in Verilog. If I can create matrices and fill them with integers first, then I can move on to the next step: matrices with floating-point numbers. Eventually I also want to take the inverse of these matrices, which seems extremely hard to me.
My question is: what should I do in order to multiply matrices? Is there a trick or an easier way to do it, as in the C language? (I know Verilog is an HDL and we cannot think of it that way.) Also, how can I convert my floating-point numbers to a fixed-point or integer type? I think I can solve my problem that way. I looked through other questions but did not understand them well. Thanks for your response and help.
Bonus question: if I tried these operations in MATLAB or Simulink, would it be easier to convert them to HDL using HDL Coder? If so, could you guide me on how to do that?
Regards,
Leonardo
You can implement matrices with RAM in a hardware design. Actually, almost any storage can be described as RAM :)
Synthesizable Verilog directly supports only integer (bit-vector) arithmetic, but there are methods for representing and computing fractional numbers.
Define your own fixed-point format. Suppose we have reg [7:0] var; we can treat var[7:4] as the integer digit and var[3:0] as the decimal digit, so 8'b0101_1001 represents 5.9 in decimal. With this digit-based encoding you must limit var[3:0] to the range 0-9! (The more common alternative is binary fixed point, where var[3:0] holds fractional bits; see the sketch below.)
IEEE 754. http://grouper.ieee.org/groups/754/ This standard has been widely used in many areas, but I think it will be a little difficult for you.
Dealing with matrices is nothing special; just follow what you learned in math class. In hardware this usually becomes nested multiply-accumulate loops, as in the sketch below.
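To make both points concrete, here is a minimal C++ reference model — a sketch, not synthesizable code. It assumes a Q4.4 binary fixed-point format (4 integer bits, 4 fractional bits) and 3x3 matrices; the usual workflow is to check a Verilog implementation against a golden model like this:

    #include <cstdint>
    #include <cstdio>

    // Q4.4 fixed point: value = raw / 16.0 (4 integer bits, 4 fractional bits).
    typedef int8_t q44_t;

    // Float -> Q4.4 (truncates; real designs usually round and saturate).
    q44_t to_q44(float x)   { return (q44_t)(x * 16.0f); }
    float from_q44(q44_t x) { return x / 16.0f; }

    // Textbook N x N matrix multiply: C = A * B. In Verilog this maps to
    // nested multiply-accumulate loops, unrolled in space or sequenced by an FSM.
    #define N 3
    void matmul(const q44_t a[N][N], const q44_t b[N][N], q44_t c[N][N]) {
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                int16_t acc = 0;             // wide accumulator, 8 fractional bits
                for (int k = 0; k < N; ++k)
                    acc += (int16_t)a[i][k] * b[k][j];
                c[i][j] = (q44_t)(acc >> 4); // renormalize to Q4.4
            }
    }

    int main() {
        q44_t a[N][N], b[N][N], c[N][N];
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                a[i][j] = to_q44(i == j ? 1.5f : 0.0f); // 1.5 times the identity
                b[i][j] = to_q44(0.25f * (i + j));
            }
        matmul(a, b, c);
        printf("c[1][2] = %f\n", from_q44(c[1][2])); // expect 1.5 * 0.75 = 1.125
        return 0;
    }

The key detail is the final right shift: a product of two Q4.4 numbers has 8 fractional bits, so it must be shifted back by 4. In a Verilog datapath the same renormalization appears as a bit-slice of the wide product.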
I'm not good at English. Hope you can understand.
Related
I'm going to raise a very general question about the Eigen/Eigen3 matrix library's support for different scalar fields/representations in its operations.
I've analyzed the Eigen matrix template library a bit, and so far I've only seen support for floating-point real arithmetic (that is, IEEE 754 single-precision 32-bit and double-precision 64-bit floating-point numbers).
I would like to raise the following questions concerning fixed-precision real arithmetic support in Eigen/Eigen3:
Is there any support for fixed-precision vectorization in Eigen/Eigen3?
If not, what would be necessary to implement such support?
Can the standard decomposition routines and matrix operations be implemented immediately using fixed-precision scalars? If so, how?
If not, what are the prerequisites for such support (concepts, operator overloads, "real" functions that must be implemented, etc.) in order to implement such operations/decompositions without impairing Eigen's core? (See the sketch after this list.)
Are there any plans to implement such functionality in the core of Eigen/Eigen3?
If this kind of thing isn't foreseen in the near future:
Does any such functionality that you are aware of already exist and would be compatible with Eigen/Eigen3, in order to fully support vectorization/optimization?
If not, which approach would you recommend to someone interested in implementing it?
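For concreteness, here is a rough sketch of the kind of integration I have in mind. The FixedPoint type below is a hypothetical, deliberately simplified Q16.16 scalar; Eigen::NumTraits is, as far as I can tell, Eigen's documented customization point for custom scalar types, and decompositions would additionally need functions such as sqrt and abs defined for the type:

    #include <cstdint>
    #include <Eigen/Core>

    // Hypothetical Q16.16 fixed-point scalar (an assumption, not part of Eigen).
    struct FixedPoint {
        int32_t raw;  // value = raw / 2^16
        FixedPoint(double v = 0.0) : raw(static_cast<int32_t>(v * 65536.0)) {}
        static FixedPoint fromRaw(int32_t r) { FixedPoint f; f.raw = r; return f; }
    };

    // Operators Eigen needs for sums and products.
    inline FixedPoint operator+(FixedPoint a, FixedPoint b) { return FixedPoint::fromRaw(a.raw + b.raw); }
    inline FixedPoint operator-(FixedPoint a, FixedPoint b) { return FixedPoint::fromRaw(a.raw - b.raw); }
    inline FixedPoint operator*(FixedPoint a, FixedPoint b) {
        return FixedPoint::fromRaw(static_cast<int32_t>((static_cast<int64_t>(a.raw) * b.raw) >> 16));
    }
    inline FixedPoint& operator+=(FixedPoint& a, FixedPoint b) { a.raw += b.raw; return a; }
    inline bool operator<(FixedPoint a, FixedPoint b)  { return a.raw < b.raw; }
    inline bool operator==(FixedPoint a, FixedPoint b) { return a.raw == b.raw; }

    // Tell Eigen about the type's numeric properties.
    namespace Eigen {
    template<> struct NumTraits<FixedPoint> : GenericNumTraits<FixedPoint> {
        typedef FixedPoint Real;
        typedef FixedPoint NonInteger;
        typedef FixedPoint Nested;
        enum { IsComplex = 0, IsInteger = 0, IsSigned = 1,
               RequireInitialization = 1, ReadCost = 1, AddCost = 1, MulCost = 3 };
        static inline FixedPoint epsilon()         { return FixedPoint::fromRaw(1); }
        static inline FixedPoint dummy_precision() { return FixedPoint::fromRaw(16); }
        static inline FixedPoint highest()         { return FixedPoint::fromRaw(INT32_MAX); }
        static inline FixedPoint lowest()          { return FixedPoint::fromRaw(INT32_MIN); }
    };
    }

    int main() {
        Eigen::Matrix<FixedPoint, 3, 3> a, b;
        a.setConstant(FixedPoint(1.5));
        b.setConstant(FixedPoint(2.0));
        Eigen::Matrix<FixedPoint, 3, 3> c = a * b; // plain products already compile
        return c(0, 0) == FixedPoint(9.0) ? 0 : 1; // 3 * (1.5 * 2.0) = 9.0
    }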
I would like to know how feasible it would be to implement a few matrix computations on a 16- or 32-bit microcontroller. I'm not aware of anything of this kind released under a GPL licensing scheme, and I would be greatly interested if such a thing were usable. If not, I would like to assess the workload necessary to implement it.
Thanks in advance for any help.
I have a problem that requires me to do eigendecomposition and matrix multiplication of many (~4k) small (~3x3) square Hermitian matrices. In particular, I need each work item to perform eigendecomposition of one such matrix, and then perform two matrix multiplications. Thus, the work that each thread has to do is rather minimal, and the full job should be highly parallelizable.
Unfortunately, it seems all the available OpenCL LAPACKs are for delegating operations on large matrices to the GPU rather than for doing smaller linear algebra operations inside an OpenCL kernel. As I'd rather not implement matrix multiplication and eigendecomposition for arbitrarily sized matrices in OpenCL myself, I was hoping someone here might know of a suitable library for the job?
I'm aware that OpenCL might be getting built-in matrix operations at some point since the matrix type is reserved, but that is not really of much use right now. There is a similar question here from 2011, but it pretty much just says to roll your own, so I'm hoping the situation has improved since then.
In general, my experience with libraries like LAPACK, fftw, cuFFT, etc. is that when you want to do many really small problems like this, you are better off writing your own for performance. Those libraries are usually written for generality, so you can often beat their performance for specific small problems, especially if you can use unique properties of your particular problem.
I realize you don't want to hear "roll your own" but for this type of problem it is really the best thing to do IMO. You might find a library to do this, but considering the code that you really want (for performance) will not generalize, I doubt it exists. You'll be looking specifically for code to find the eigenvalues of 3x3 matrices. That's less of a library and more of a random code snippet with a suitable license that you can manipulate to take advantage of your specific problem.
In this specific case, you can find the eigenvalues of a 3x3 matrix with the textbook method using the characteristic polynomial. Remember that there is a relatively simple closed-form solution for cubic equations: http://en.wikipedia.org/wiki/Cubic_function#General_formula_for_roots.
While I think it is very likely that this approach would be much faster than iterative methods, it would be wise to verify that if performance is an issue.
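For the real symmetric case, here is a minimal sketch of that closed-form route, using the standard trigonometric solution of the characteristic cubic (a complex Hermitian version follows the same pattern, since its eigenvalues are also real):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <functional>

    // Eigenvalues of a real symmetric 3x3 matrix via the characteristic
    // polynomial, using the trigonometric closed form for the cubic's roots.
    // Writes the eigenvalues to eig in descending order.
    void eig3_sym(const double a[3][3], double eig[3]) {
        const double kPi = std::acos(-1.0);
        const double p1 = a[0][1]*a[0][1] + a[0][2]*a[0][2] + a[1][2]*a[1][2];
        if (p1 == 0.0) {                  // already diagonal
            eig[0] = a[0][0]; eig[1] = a[1][1]; eig[2] = a[2][2];
            std::sort(eig, eig + 3, std::greater<double>());
            return;
        }
        const double q  = (a[0][0] + a[1][1] + a[2][2]) / 3.0;  // trace / 3
        const double p2 = (a[0][0]-q)*(a[0][0]-q) + (a[1][1]-q)*(a[1][1]-q)
                        + (a[2][2]-q)*(a[2][2]-q) + 2.0*p1;
        const double p  = std::sqrt(p2 / 6.0);

        // B = (A - q*I) / p; r = det(B)/2 lies in [-1, 1] up to rounding error.
        double b[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                b[i][j] = (a[i][j] - (i == j ? q : 0.0)) / p;
        const double detb = b[0][0]*(b[1][1]*b[2][2] - b[1][2]*b[2][1])
                          - b[0][1]*(b[1][0]*b[2][2] - b[1][2]*b[2][0])
                          + b[0][2]*(b[1][0]*b[2][1] - b[1][1]*b[2][0]);
        const double r   = std::max(-1.0, std::min(1.0, detb / 2.0));
        const double phi = std::acos(r) / 3.0;

        eig[0] = q + 2.0*p*std::cos(phi);                // largest root
        eig[2] = q + 2.0*p*std::cos(phi + 2.0*kPi/3.0);  // smallest root
        eig[1] = 3.0*q - eig[0] - eig[2];                // from the trace identity
    }

    int main() {
        const double a[3][3] = {{2, 1, 0}, {1, 2, 0}, {0, 0, 5}};
        double eig[3];
        eig3_sym(a, eig);
        printf("%g %g %g\n", eig[0], eig[1], eig[2]);  // expect 5 3 1
        return 0;
    }

Since there is no branching beyond the diagonal check, this translates almost line for line into an OpenCL kernel with one matrix per work item.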
I am working with floating-point arithmetic that involves decision making through conditionals such as if...else. The algorithm works fine, but I suspect it has not been optimized for the best results. I want to know how I can improve numerical stability by reducing the error in floating-point numbers during comparison. I'm using the C language in my project. Any suggestions will be greatly appreciated. Thanks.
If you need better precision than one of the built-in floating-point standards, then a third-party library or creating your own number system are about the only options. GNU Multiple Precision (GMP) is one option.
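Short of adding precision, a standard way to make the comparisons themselves more robust is to test with a combined absolute and relative tolerance instead of ==. A minimal sketch (the tolerance values are illustrative and should be tuned to your data):

    #include <cmath>
    #include <cstdio>

    // Compare two doubles using a combined absolute and relative tolerance.
    // abs_tol guards values near zero; rel_tol scales with the magnitudes.
    bool nearly_equal(double a, double b,
                      double abs_tol = 1e-12, double rel_tol = 1e-9) {
        const double diff = std::fabs(a - b);
        if (diff <= abs_tol) return true;  // near-zero case
        return diff <= rel_tol * std::fmax(std::fabs(a), std::fabs(b));
    }

    int main() {
        const double x = 0.1 + 0.2;            // 0.30000000000000004 in binary
        printf("%d\n", x == 0.3);              // 0: exact comparison fails
        printf("%d\n", nearly_equal(x, 0.3));  // 1: tolerant comparison succeeds
        return 0;
    }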
I remember solving a lot of indefinite integration problems. There are certain standard methods of solving them, but nevertheless there are problems which take a combination of approaches to arrive at a solution.
But how can we achieve the solution programmatically?
For instance, look at Mathematica's online integrator app. How would we approach writing such a program, one that accepts a function as an argument and returns its indefinite integral?
P.S. The input function can be assumed to be continuous (i.e., it is not, for instance, sin(x)/x).
You have Risch's algorithm, which is subtly undecidable (since you must decide whether two expressions are equal, akin to the ubiquitous halting problem) and really long to implement.
If you're into complicated stuff, solving an ordinary differential equation is actually not harder (and computing an indefinite integral is equivalent to solving y' = f(x)). There is a differential Galois theory which mimics Galois theory for polynomial equations (but with Lie groups of symmetries of solutions instead of finite groups of permutations of roots). Risch's algorithm is based on it.
The algorithm you are looking for is Risch's algorithm:
http://en.wikipedia.org/wiki/Risch_algorithm
I believe it is a bit tricky to use. This book:
http://www.amazon.com/Algorithms-Computer-Algebra-Keith-Geddes/dp/0792392590
has a description of it — a 100-page description.
You keep a set of basic forms you know the integrals of (polynomials, elementary trigonometric functions, etc.) and try to match the input against those forms. This is doable if you don't need much generality: it's very easy to write a program that integrates polynomials, for example (see the sketch below).
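To illustrate the easy end of that spectrum, here is a minimal sketch that integrates a polynomial given by its coefficient list (the representation is chosen for brevity; a real system would work on an expression tree):

    #include <vector>
    #include <cstdio>

    // Integrate a polynomial given as coefficients c[0] + c[1]*x + c[2]*x^2 + ...
    // Returns coefficients of the antiderivative; the constant of integration is 0.
    std::vector<double> integrate(const std::vector<double>& c) {
        std::vector<double> out(c.size() + 1, 0.0);
        for (size_t k = 0; k < c.size(); ++k)
            out[k + 1] = c[k] / (k + 1);  // integral of c*x^k is c*x^(k+1)/(k+1)
        return out;
    }

    int main() {
        // d/dx (x^3 + x^2) = 3x^2 + 2x, so integrating {0, 2, 3} gives {0, 0, 1, 1}.
        std::vector<double> f = {0.0, 2.0, 3.0};
        std::vector<double> F = integrate(f);
        for (double c : F) printf("%g ", c);  // prints: 0 0 1 1
        printf("\n");
        return 0;
    }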
If you want to handle the most general case possible, you'll have to do much of the work that computer algebra systems do. It is a lifetime's work for some people: if you look at the Risch algorithm posted in other answers, or at symbolic integration generally, you can see that entire multi-volume books have been written on the topic (Manuel Bronstein, Symbolic Integration I, Springer), and very few existing computer algebra systems implement it in full generality.
If you really want to code it yourself, you can look at the source code of Sage or the several projects listed among its components. Of course, it's easier to use one of these programs, or, if you're writing something bigger, use one of these as libraries.
These expert systems usually have a huge collection of techniques and simply try one after another.
I'm not sure about WolframMath, but in Maple there's a command that enables displaying all intermediate steps. If you use it, the output shows every technique that was tried.
Edit:
Transforming the input should not be the really tricky part: you need to write a lexer and a parser that transform the textual input into an internal representation.
Good luck. Mathematica is a very complex piece of software, and symbolic manipulation is what it does best. If you are interested in the topic, take a look at these books:
http://www.amazon.com/Computer-Algebra-Symbolic-Computation-Elementary/dp/1568811586/ref=sr_1_3?ie=UTF8&s=books&qid=1279039619&sr=8-3-spell
Also, going to the source wouldn't hurt either. This book actually explains the inner workings of Mathematica:
http://www.amazon.com/Mathematica-Book-Fourth-Stephen-Wolfram/dp/0521643147/ref=sr_1_7?ie=UTF8&s=books&qid=1279039687&sr=1-7
Both Wolfram Alpha and Bing now provide the ability to solve complex algebraic logic problems (i.e., "solve for x, given this equation"), and not just evaluate simple arithmetic expressions (e.g., "what's 5+5?"). How is this done?
I can read most types of code that might get thrown at me, so it doesn't really make a difference what you use to explain and represent the algorithm. I find that bash makes really good pseudo-code, not to mention it's actually functional, so that would be ideal. Also, I'm fairly familiar with its ins and outs. Sorry to go ranting on a tangent, but it really irritates me to see people spend effort on churning out "pseudocode" when they could produce something 100% functional for just slightly more effort. Anyway, thanks so much in advance.
There are two main methods of solving such equations:
Numerical methods. The solver basically keeps adjusting the value of x until the equation is satisfied (see the sketch below). More info on numerical methods.
Symbolic math. The solver manipulates the equation as a string of symbols according to a number of formal rules. It's not that different from the algebra we learn in school; the solver just knows a lot more rules. More info on computer algebra.
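A minimal sketch of the numerical route, using Newton's method with a numerically estimated derivative (the equation solved here, x^2 = 5, is just an illustration):

    #include <cmath>
    #include <cstdio>

    // Solve f(x) = 0 with Newton's method, approximating f'(x) by a
    // central difference so the caller only has to supply f itself.
    double solve(double (*f)(double), double x0,
                 int max_iter = 50, double tol = 1e-12) {
        double x = x0;
        for (int i = 0; i < max_iter; ++i) {
            const double h  = 1e-6;
            const double fx = f(x);
            if (std::fabs(fx) < tol) break;                    // close enough to a root
            const double dfx = (f(x + h) - f(x - h)) / (2*h);  // central difference
            x -= fx / dfx;                                     // Newton update
        }
        return x;
    }

    int main() {
        // "Solve for x: x^2 = 5" becomes finding a root of f(x) = x^2 - 5.
        auto f = [](double x) { return x * x - 5.0; };
        printf("x = %.10f\n", solve(f, 1.0));  // expect sqrt(5) = 2.2360679775
        return 0;
    }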
Wolfram|Alpha (W|A) is based on the Mathematica kernel, combined with a natural language parser (which is also built primarily with Mathematica). They have a whole heap of curated data and associated formulas that can be used once the question has been interpreted.
There's a blog post describing some of this which came out at the same time as W|A.
Finally, Bing simply uses the (non-free) API to answer questions via W|A.