How can I use the Galois Field Multiply (GMPY) instruction featured in TI C64x+ DSPs to efficiently compute a CRC32?
Texas Instruments has a code sample.
See the last pages of this presentation: http://piraten.in/gy0
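The presentation covers the device-specific details; as a rough illustration of the underlying idea, here is a hedged plain-C sketch in which clmul32() is a software stand-in for the hardware Galois-field multiply (the actual GMPY intrinsic name and polynomial setup are device-specific and covered in the TI material). It folds a non-reflected CRC-32 two 32-bit words at a time and checks the result against a bit-by-bit reference:

```c
#include <stdint.h>
#include <stdio.h>

#define POLY 0x04C11DB7u   /* CRC-32 generator polynomial, low 32 bits */

/* Software stand-in for a hardware Galois-field multiply:
 * carry-less multiply of two 32-bit polynomials over GF(2). */
static uint64_t clmul32(uint32_t a, uint32_t b)
{
    uint64_t r = 0;
    for (int i = 0; i < 32; i++)
        if (b & (1u << i))
            r ^= (uint64_t)a << i;
    return r;
}

/* Reduce a 64-bit polynomial modulo x^32 + POLY (bitwise reference version). */
static uint32_t reduce64(uint64_t v)
{
    for (int i = 63; i >= 32; i--)
        if (v >> i & 1)
            v ^= ((uint64_t)POLY << (i - 32)) ^ (1ull << i);
    return (uint32_t)v;
}

int main(void)
{
    /* K64 = x^64 mod P(x), derived at run time instead of hard-coding it */
    uint32_t k64 = reduce64(clmul32(POLY, POLY));

    /* message: eight 32-bit big-endian words (made-up test data) */
    uint32_t msg[8] = {0x31323334, 0x35363738, 0x39616263, 0x64656667,
                       0x68696a6b, 0x6c6d6e6f, 0x70717273, 0x74757677};

    /* fold two words per step:  crc' = ((crc ^ w0) * x^64 + w1 * x^32) mod P */
    uint32_t crc = 0;
    for (int i = 0; i < 8; i += 2)
        crc = reduce64(clmul32(crc ^ msg[i], k64) ^ ((uint64_t)msg[i + 1] << 32));

    /* bit-at-a-time reference over the same words, as a sanity check */
    uint32_t ref = 0;
    for (int i = 0; i < 8; i++)
        for (int b = 31; b >= 0; b--) {
            uint32_t top = (ref >> 31) ^ ((msg[i] >> b) & 1);
            ref = (ref << 1) ^ (top ? POLY : 0);
        }

    printf("folded: %08x  reference: %08x\n", (unsigned)crc, (unsigned)ref);
    return 0;
}
```

On the DSP, the carry-less multiply and the reduction are where the hardware instruction replaces the software helper loops, which is where the speedup comes from.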
I am working on designing a Mandelbrot viewer and I am designing hardware for squaring values. My squarer is built recursively, where a 4-bit squarer relies on two 2-bit squarers; so my 16-bit squarer has two 8-bit squarers, and each one of those has two 4-bit squarers.
As you can see, the recursion begins to make the design blow up in complexity. To help speed up my design I would like to use a 4-input ROM that emulates a 4-bit squarer. So when you enter 3 into the ROM, it outputs 9; when you enter 15, it outputs 225.
I know that a normal LUT implemented in a logic cell may have 3 or 4 input variables and only 1 output, but I need an 8-bit output, so I need more of a ROM than a LUT.
Any and all help is appreciated. I'm curious how the FPGA will store those ROMs and whether storing it in ROM would be faster than computing the 4-bit square.
-
Jarvi
To square a 4-bit number explicitly using LUTs, you would need to use 8 4-input LUTs. Each LUT's output would give you one bit of the 8-bit product.
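To make the contents concrete, here is a small, purely illustrative C sketch that prints the 16 entries of the squaring ROM and, equivalently, the 16-entry truth table of each of the eight output bits - i.e. what each of those eight LUTs would implement:

```c
#include <stdio.h>

int main(void)
{
    unsigned truth[8] = {0};   /* truth[b] = 16-entry truth table of output bit b */

    for (unsigned n = 0; n < 16; n++) {
        unsigned sq = n * n;
        printf("ROM[%2u] = %3u\n", n, sq);
        for (unsigned b = 0; b < 8; b++)
            if (sq >> b & 1)
                truth[b] |= 1u << n;    /* input n drives output bit b high */
    }

    for (unsigned b = 0; b < 8; b++)
        printf("output bit %u truth table: 0x%04X\n", b, truth[b]);
    return 0;
}
```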
The best overall size and fmax for your design may be achieved with this approach, with larger block-RAM primitives (as ROM), with dedicated MAC (multiply-accumulate) units, or simply by using the normal multiplication operator * and relying on your synthesis tool's optimization.
You may also want to review some research papers related to this topic, for example here.
One of the AVX-512 instruction set extensions is AVX-512 + GFNI, "Galois Field New Instructions".
Galois theory is about field extensions. What does that have to do with processing vectorized integer or floating-point values? The instructions supposedly perform "Galois field affine transformation", the inverse of that, and "Galois field multiply bytes".
What fields are those? What do these instructions actually do and what is it good for?
These instructions are closely related to the AES (Rijndael) block cipher. GF2P8AFFINEINVQB performs a Rijndael S-Box substitution with a user-defined affine transformation.
GF2P8AFFINEQB is essentially a (carry-less) multiplication of an 8x8 bit matrix with an 8-bit vector in GF(2), so it should be useful in other bit-oriented algorithms. It can also be used to convert between isomorphic representations of GF(2^8).
GF2P8MULB multiplies two (vectors of) elements of GF(2^8), actually 8-bit numbers in polynomial representation with the Rijndael reduction polynomial. This operation is used in Rijndael's MixColumns step.
Note that multiplication in finite fields is only loosely related to integer multiplication.
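For concreteness, here is a hedged sketch using the GFNI intrinsics from immintrin.h (it assumes a GFNI-capable CPU and a compiler flag such as -mgfni): GF2P8MULB multiplies 16 byte pairs in GF(2^8) with the AES polynomial, and the result is compared against a scalar shift/XOR reference:

```c
#include <immintrin.h>   /* GFNI intrinsics; build with e.g. gcc -O2 -mgfni */
#include <stdint.h>
#include <stdio.h>

/* scalar reference: shift/XOR multiply reduced by the AES polynomial 0x11B */
static uint8_t gf256_mul(uint8_t a, uint8_t b)
{
    uint8_t r = 0;
    while (b) {
        if (b & 1)
            r ^= a;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1B : 0));
        b >>= 1;
    }
    return r;
}

int main(void)
{
    uint8_t a[16], b[16], c[16];
    for (int i = 0; i < 16; i++) {
        a[i] = (uint8_t)(i * 17 + 3);    /* made-up test bytes */
        b[i] = (uint8_t)(251 - i);
    }

    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    __m128i vc = _mm_gf2p8mulb_epi8(va, vb);   /* 16 GF(2^8) multiplies at once */
    _mm_storeu_si128((__m128i *)c, vc);

    for (int i = 0; i < 16; i++)
        printf("%02x * %02x = %02x (scalar reference %02x)\n",
               a[i], b[i], c[i], gf256_mul(a[i], b[i]));
    return 0;
}
```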
One of the major use-cases is I think SW RAID6 parity, including generating new parity on every write (not just during recovery / rebuild). RAID5 can use simple XOR parity for its one and only parity member of each stripe, but RAID6 needs two different parities that can recover N blocks of data from any N of the N+2 blocks of data+parity. This is forward error correction, a similar kind of problem to the one that ECC solves.
Galois Fields are useful for this; they're the basis of widely-used Reed-Solomon codes, for example. e.g. Par2 uses 16-bit Galois Fields to allow very large block counts to generate relatively fine-grained error-recovery data for a large file or set of files. (Up to 64k blocks).
Unfortunately GFNI is not great for PAR2 because GFNI only supports GF2P8, i.e. GF(2^8), not the GF(2^16) that par2 uses. http://lab.jerasure.org/jerasure/gf-complete/issues/14 says it's possible to use GF2P8AFFINEQB to implement wider word sizes, so it might be possible to speed up PAR2 with it.
But it should be useful for RAID6, including generating new parity on writes, which is pretty CPU-intensive. The Linux kernel's md driver already includes inline asm to use SSE2 or AVX2, one of the few uses of kernel_fpu_begin() and kernel_fpu_end(). (A 2013 paper looks at optimizing GF coding using Intel SIMD, mentioning Linux's md RAID and GF-Complete, the project linked earlier. The current state of the art is something like two pshufb byte shuffles to implement a 4-bit table lookup, sketched below; GFNI could bring that down to 1 instruction, especially if the hard-coded GF polynomial baked into gf2p8mulb is used.)
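For reference, here is a hedged sketch of that two-pshufb nibble-table technique for multiplying 16 bytes by a constant in GF(2^8); the 0x11D polynomial used by Linux md RAID6 is assumed, and the names and test values are made up (gf2p8mulb would do this in one instruction, but with its own fixed 0x11B polynomial):

```c
#include <immintrin.h>   /* SSSE3 pshufb; build with e.g. gcc -O2 -mssse3 */
#include <stdint.h>
#include <stdio.h>

/* scalar shift/XOR multiply reduced by 0x11D (the Linux md RAID6 field) */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    uint8_t r = 0;
    while (b) {
        if (b & 1)
            r ^= a;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1D : 0));
        b >>= 1;
    }
    return r;
}

int main(void)
{
    const uint8_t c = 0x53;              /* arbitrary constant multiplier */
    uint8_t lo[16], hi[16], in[16], out[16];

    /* gf_mul is linear over XOR, so gf_mul(x, c) = lo[x & 15] ^ hi[x >> 4] */
    for (int i = 0; i < 16; i++) {
        lo[i] = gf_mul((uint8_t)i, c);
        hi[i] = gf_mul((uint8_t)(i << 4), c);
        in[i] = (uint8_t)(i * 29 + 7);   /* made-up input bytes */
    }

    __m128i tlo = _mm_loadu_si128((const __m128i *)lo);
    __m128i thi = _mm_loadu_si128((const __m128i *)hi);
    __m128i v   = _mm_loadu_si128((const __m128i *)in);
    __m128i msk = _mm_set1_epi8(0x0f);

    __m128i vlo = _mm_and_si128(v, msk);                     /* low nibbles  */
    __m128i vhi = _mm_and_si128(_mm_srli_epi16(v, 4), msk);  /* high nibbles */
    __m128i res = _mm_xor_si128(_mm_shuffle_epi8(tlo, vlo),  /* two lookups, */
                                _mm_shuffle_epi8(thi, vhi)); /* one XOR      */
    _mm_storeu_si128((__m128i *)out, res);

    for (int i = 0; i < 16; i++)
        printf("%02x * %02x = %02x (scalar reference %02x)\n",
               in[i], c, out[i], gf_mul(in[i], c));
    return 0;
}
```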
(RAID6 uses parity in a different way than par2, generating separate parity for each stripe "vertically" across disks, instead of "horizontally" for one big array of data. The underlying math is similar.)
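And here is a minimal sketch of the P/Q parity generation itself, the per-byte work that a SIMD GF(2^8) multiply accelerates; the disk count, block size and payload are made up, and the usual Q = sum of g^d * D[d] definition with generator g = 2 is assumed:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NDISKS 4    /* data disks (hypothetical small array) */
#define BLK    16   /* bytes per block, kept tiny for the demo */

/* multiply by x (i.e. by g = 2) in GF(2^8), reduced by the 0x11D polynomial
 * used by Linux md RAID6 (note: gf2p8mulb bakes in 0x11B instead) */
static uint8_t gf_mul2(uint8_t v)
{
    return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1D : 0));
}

int main(void)
{
    uint8_t data[NDISKS][BLK], P[BLK], Q[BLK];
    for (int d = 0; d < NDISKS; d++)
        for (int i = 0; i < BLK; i++)
            data[d][i] = (uint8_t)(d * 37 + i);   /* made-up payload */

    memset(P, 0, BLK);
    memset(Q, 0, BLK);

    /* Horner's rule over the disks, highest index first:
     * Q = (...((D[n-1]*g + D[n-2])*g + ...)*g + D[0]) = sum of g^d * D[d] */
    for (int d = NDISKS - 1; d >= 0; d--)
        for (int i = 0; i < BLK; i++) {
            P[i] ^= data[d][i];                         /* RAID5-style XOR parity */
            Q[i]  = (uint8_t)(gf_mul2(Q[i]) ^ data[d][i]);
        }

    printf("P[0] = %02x, Q[0] = %02x\n", P[0], Q[0]);
    return 0;
}
```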
Intel quite probably plans to support GFNI on some future Silvermont-family Atom, because there are legacy-SSE encodings of the instructions, without 3-operand VEX or EVEX. Many other new instructions are introduced with only VEX encodings, including some of the BMI1/BMI2 scalar integer instructions.
Silvermont-family (Airmont, Goldmont, Tremont, ...) gets some use in NAS appliances where most of the CPU demand could come from RAID6. A future version of it with GFNI could save power, or avoid bottlenecks without raising clock speed.
AVX + GFNI implies support for a YMM version (even without AVX2), and AVX512F + GFNI implies a ZMM version. (The HTML extract at felixcloutier.com strangely only mentions the non-VEX 128-bit encoding while also listing a _mm_maskz_gf2p8affine_epi64_epi8 intrinsic (masking requires EVEX). HJLebbink's HTML extract does include the VEX and EVEX forms. Maybe they only appear in Intel's "future extensions" manual which HJ scrapes but Felix doesn't.)
Using 512-bit vectors does limit turbo clock speeds for a short time after (on Skylake-Xeon), so it might not be desirable for the kernel to do that. But it could give a significant reduction in CPU overhead for some cases, if you're not memory-bound.
A "field" is a mathematical concept:
(wikipedia) In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers do.
...
including the existence of an additive inverse −a for all elements a, and of a multiplicative inverse b^−1 for every nonzero element b
Galois Fields are a kind of Finite Field with this property: the bits in a GF(2^8) number represent the 0-or-1 coefficients of a polynomial of degree less than 8. (It's quite possible I totally butchered that, but it's something like that rather than place-value.) That's why carry-less addition (aka XOR) and carry-less multiplication (using shift/XOR instead of shift/add) are useful over Galois fields.
gf2p8mulb's baked-in polynomial of x^8 + x^4 + x^3 + x + 1 matches the one used in AES (Rijndael); this lends more weight to @nwellnhof's hypothesis that Intel just included it because the HW was there.
If it's also used in any other common application, it might give us a clue of the "intended" use case for these instructions.
There is a VAES extension that provides versions of AESENC and related instructions for YMM and ZMM vectors, up from just 128-bit vectors with AES-NI + AVX2. So Intel apparently is extending AES HW to 512-bit SIMD vectors. IDK if this motivates wide GFNI or vice versa, or some of both. (Wide GFNI makes a huge amount of sense; if it was limited to 128-bit, an optimized AVX512 implementation using vpshufb for lookup tables would beat it.)
To answer the purpose part: my guess is that these were added primarily for accelerating SM4 encryption, which shares design similarities with AES.
This guess comes from the fact that ARM also added SM4 acceleration in ARMv8.4 at around the same time, suggesting that chipmakers want to accelerate this algorithm, probably because it will gain significant traction in the Chinese market. Also, the fact that it's the only AVX-512 extension added in Ice Lake which also has an SSE encoding (so that Tremont could support it) suggests that they intended it for networking/storage purposes.
GFNI is also quite useful in Reed Solomon coding for error correction (as mentioned by Peter above). It's directly applicable to any GF(2^8) implementation (such as this) and the affine instruction can be used for other field sizes and polynomials - in fact, it's the fastest technique I know of to do so on an Intel processor.
The affine instruction also has a bunch of use cases outside GF(2^8) arithmetic, including 8-bit shifts and bit-permutes. It's equivalent to RISC-V's bmatxor instruction, where some use cases are listed here.
Some links describing use cases for this instruction.
I have to multiply two very large (~2000 × 2000) dense matrices whose entries are floats with arbitrary precision (I am using GMP and the precision is currently set to 600). I was wondering if there is any CUDA library that supports arbitrary-precision arithmetic? The only library that I have found is called CAMPARY; however, it seems to be missing references to some of the functions it uses.
The other solution that I was thinking about was implementing a version of the Karatsuba algorithm for multiplying matrices with arbitrary precision entries. The end step of the algorithm would just be multiplying matrices of doubles, which could be done very efficiently using cuBLAS. Is there any similar implementation already out there?
Since nobody has suggested such a library so far, let's assume that one doesn't exist.
You could always implement the naive implementation:
One grid thread for each pair of coordinates in the output matrix.
Each thread performs an inner product of a row and a column in the input matrices.
Individual element operations will use code taken from GMP (hopefully not much more than copy-and-paste).
But you can also do better than this - just like you can do better for regular-float matrix multiplication. Here's my idea (likely not the best of course):
Consider the worked example of matrix multiplication using shared memory in the CUDA C Programming Guide. It suggests putting small submatrices in shared memory. You can still do this - but you need to be careful with shared memory sizes (they're small...):
A typical GPU today has 64 KB shared memory usable per grid block (or more)
They use a 16 × 16 submatrix.
Times 2 (for the two multiplicands)
Times ceil(801/8) = 101 bytes per element (assuming the GMP representation uses 600 bits for the mantissa, one bit for the sign and 200 bits for the exponent)
So 2 · 16 · 16 · 101 = 51,712 bytes < 64 KB!
That means you can probably just use the code in their worked example as-is, again replacing the float multiplication and addition with code from GMP.
You may then want to consider something like parallelizing the GMP code itself, i.e. using multiple threads to work together on single pairs of 600-bit-precision numbers. That would likely help your shared memory reading pattern. Alternatively, you could interleave the placement of 4-byte sequences from the representation of your elements, in shared memory, for the same effect.
I realize this is a bit hand-wavy, but I'm pretty certain I've waved my hands correctly and it would be a "simple matter of coding".
Dear all, I am using Xilinx FFT IP cores for FFT transformation, but the problem is that the FFT IP core only takes fixed transform lengths of 64, 128, 256, 512, ...
Is it possible to use transform lengths of 50, 100, 126, etc., i.e. other than the available transform lengths of the IP core?
Is there any other solution to implement FFTs for a variety of transform lengths?
Haider
Generally FFTs are implemented in hardware using a particular architecture called a butterfly. This architecture only works for power-of-2 block sizes. It is possible to do arbitrary-length FFTs, but the implementation is more complicated. Generally the solution when you need a non-power-of-2 size FFT is to zero-pad to the closest power-of-2 size. So if you need 50 points, pad to 64 points with zeros. (Keep in mind that the zero-padded 64-point result is an interpolated spectrum with different bin spacing, not the exact 50-point DFT.)
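A minimal sketch of the padding step in plain C (buffer handling around the actual Xilinx core is omitted and the sample values are made up):

```c
#include <stdio.h>
#include <stdlib.h>

/* round n up to the next power of two */
static size_t next_pow2(size_t n)
{
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

int main(void)
{
    size_t n = 50;                              /* desired transform length */
    size_t padded = next_pow2(n);               /* -> 64 */

    float *buf = calloc(padded, sizeof *buf);   /* zero-initialised buffer */
    if (!buf)
        return 1;
    for (size_t i = 0; i < n; i++)
        buf[i] = (float)i;                      /* copy the 50 real samples */
    /* buf[50..63] stay zero; feed all 64 samples to the FFT core */

    printf("padded %zu samples up to %zu\n", n, padded);
    free(buf);
    return 0;
}
```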
I have implemented a MC-Simulation of the 2D Ising model in C99.
Compiling with gcc 4.8.2 on Scientific Linux 6.5.
When I scale up the grid the simulation time increases, as expected.
The implementation simply uses the Metropolis–Hastings algorithm.
I have tried to find a way to speed up the algorithm, but I don't have any good ideas.
Are there any tricks to do so?
As jimifiki wrote, try to do a profiling session.
In order to improve on the algorithmic side only, you could try the following:
Lookup Table:
When calculating the energy difference for the Metropolis criterion you need to evaluate the exponential exp[-K * dE / T], where K is your scaling constant (in units of Boltzmann's constant) and dE is the energy difference between the original state and the one after a spin flip.
Calculating exponentials is expensive.
So you simply build a table beforehand in which you look up the exponential for each possible dE. For a nearest-neighbour interaction there are (4 choose 1) + (4 choose 2) + (4 choose 3) + (4 choose 4) possible combinations; exploiting the problem's symmetry you get five values for dE: 8, 4, 0, -4, -8. Instead of calling the exp function, use the precalculated table (see the sketch after this list).
Parallelization:
As mentioned before, it is possible to parallelize the algorithm. To preserve the physical correctness, you have to use a so-called checkerboard scheme. Consider the two-dimensional grid as a checkerboard and update all the white cells in parallel first, then all the black ones (see the sketch after this list). That should be clear when you consider the nearest-neighbour interaction, which introduces dependencies between neighbouring values.
Use GPGPU:
You can also implement the simulation on a GPGPU, e.g. using CUDA, if you're already working in C99.
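Here is a minimal C99 sketch combining the first two points (the exponential lookup table and a checkerboard half-sweep parallelised with OpenMP); all names, sizes and the RNG handling are illustrative only:

```c
/* Build with e.g.:  gcc -std=c99 -O2 -fopenmp ising_sketch.c -lm */
#define _POSIX_C_SOURCE 200112L   /* for rand_r() */
#include <math.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define L 128                     /* linear grid size */

static int spin[L][L];            /* +1 / -1 */
static double acc[5];             /* exp(-dE/T) for dE = -8,-4,0,4,8 (J = k_B = 1) */

static void half_sweep(int colour, unsigned base_seed)
{
    #pragma omp parallel
    {
        /* per-thread RNG state; a real code would keep this persistent */
        unsigned seed = base_seed + (unsigned)omp_get_thread_num();
        #pragma omp for
        for (int x = 0; x < L; x++)
            for (int y = (x + colour) & 1; y < L; y += 2) {
                int nb = spin[(x + 1) % L][y] + spin[(x + L - 1) % L][y]
                       + spin[x][(y + 1) % L] + spin[x][(y + L - 1) % L];
                int dE = 2 * spin[x][y] * nb;           /* one of -8..8 */
                if (dE <= 0 ||
                    (double)rand_r(&seed) / RAND_MAX < acc[(dE + 8) / 4])
                    spin[x][y] = -spin[x][y];           /* Metropolis accept */
            }
    }
}

int main(void)
{
    double T = 2.269;                         /* near the 2D critical temperature */
    for (int i = 0; i < 5; i++)
        acc[i] = exp(-(4 * i - 8) / T);       /* the lookup table */
    for (int x = 0; x < L; x++)
        for (int y = 0; y < L; y++)
            spin[x][y] = 1;

    for (unsigned sweep = 0; sweep < 200; sweep++) {
        half_sweep(0, 2 * sweep + 1);         /* "white" sites */
        half_sweep(1, 2 * sweep + 2);         /* "black" sites */
    }

    long m = 0;
    for (int x = 0; x < L; x++)
        for (int y = 0; y < L; y++)
            m += spin[x][y];
    printf("magnetisation per site after 200 sweeps: %.3f\n", (double)m / (L * L));
    return 0;
}
```

The two half-sweeps are safe to parallelise because within one colour no site reads a neighbour that is being written in the same half-sweep.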
Some tips:
- Don't forget to align your C99 structs properly.
- Use linear arrays, not nested ones. Aligned memory is normally faster to access, if done properly.
- Try to let the compiler do loop unrolling, etc. (gcc has special options for this; they are not enabled by default at -O2).
Some more information:
If you are looking for an efficient method to calculate the critical point of the system, the method of choice would be finite-size scaling, where you simulate at different system sizes and different temperatures, then calculate a quantity which is system-size independent at the critical point and therefore gives an intersection point of the corresponding curves (please see the theory for a detailed explanation).
I hope I was helpful.
Cheers...
It's normal that your simulation time scales at least with the square of the size, isn't it?
Here are some suggestions:
If you are concerned with thermalization issues, try to use parallel tempering. It can be of help.
The Metropolis-Hastings algorithm can be made parallel. You could try to do it.
Check you are not pessimizing the code.
Are your spins stored in arrays of ints? You could pack many spins into the same int. It's a lot of work, though.
Moreover, remember what Donald taught us:
premature optimisation is the root of all evil
Before optimising you should first understand where your program is slow. This is called profiling.