Is integer multiplier physically in integer unit or special function unit? - cpu

I'm using McPAT, a tool for estimating CPU power; however, it seems the integer multiplier is counted as a special function unit. Why is that? Shouldn't it be in the integer unit instead?
And shouldn't the special function unit only be concerned with transcendental functions such as sin, cos, and rcp?

Integer multiplication is often pipelined, making it take multiple cycles to complete. As such, breaking it out into its own functional unit allows other integer operations (e.g., add/sub and bitwise operations) to run in a single cycle, possibly even while a multiply is in flight.
It's certainly not true that it's always a separate functional unit, though! Some CPUs (even small ones, like the ARM Cortex-M3 and M4 microcontrollers) have single-cycle multiply, which could very well be handled as part of the same FU as other integer operations.

Related

Is it still worth using the Quake fast inverse square root algorithm nowadays on x86-64?

Specifically, this is the code I'm talking about:
float InvSqrt(float x) {
    float xhalf = 0.5f * x;
    int i = *(int*)&x;             // warning: strict-aliasing UB, use memcpy instead
    i = 0x5f375a86 - (i >> 1);
    x = *(float*)&i;               // same
    x = x * (1.5f - xhalf*x*x);
    return x;
}
I forgot where I got this from, but it's apparently better (more efficient or more precise, with a slightly different magic constant) than the original Quake III algorithm. It's been more than two decades since this algorithm was created, though, and I just want to know whether it's still worth using in terms of performance, or whether modern x86-64 CPUs already have an instruction that implements it.
Origins:
See John Carmack's Unusual Fast Inverse Square Root (Quake III)
Modern usefulness: none, obsoleted by SSE1 rsqrtss
Use _mm_rsqrt_ps (or the scalar _mm_rsqrt_ss) to get a very approximate reciprocal square root for 4 floats in parallel, much faster than even a good compiler could manage with this code. (At best it would use SSE2 integer shift/add instructions to keep the FP bit-pattern in an XMM register, which is probably not how the type-pun to integer would actually compile. That type-pun is strict-aliasing UB in C and C++; use memcpy or C++20 std::bit_cast.)
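For reference, here is a strict-aliasing-safe rewrite of the scalar routine from the question using C++20 std::bit_cast (a sketch; the magic constant and the Newton step are unchanged):
#include <bit>       // std::bit_cast (C++20)
#include <cstdint>

// Same algorithm as the question's InvSqrt, but the type-pun is done with
// std::bit_cast instead of pointer casting, so there is no aliasing UB.
float InvSqrtSafe(float x) {
    float xhalf = 0.5f * x;
    std::uint32_t i = std::bit_cast<std::uint32_t>(x);  // reinterpret the bits
    i = 0x5f375a86u - (i >> 1);                          // magic-constant initial guess
    float y = std::bit_cast<float>(i);
    y = y * (1.5f - xhalf * y * y);                      // one Newton-Raphson refinement step
    return y;
}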
https://www.felixcloutier.com/x86/rsqrtss documents the scalar version of the asm instruction, including the |Relative Error| <= 1.5 * 2^-12 guarantee (i.e. about half the mantissa bits are correct). One Newton-Raphson iteration can refine it to within about 1 ulp of correct, although still not the 0.5 ulp you'd get from an actual sqrt. See Fast vectorized rsqrt and reciprocal with SSE/AVX depending on precision.
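A minimal sketch of that modern replacement: the SSE intrinsic plus one Newton-Raphson refinement step (the function name is mine; SSE1 is baseline on x86-64):
#include <xmmintrin.h>   // SSE1: _mm_rsqrt_ss, _mm_set_ss, _mm_mul_ss, ...

// Approximate 1/sqrt(x) with rsqrtss (~12-bit accuracy), then refine with one
// Newton-Raphson iteration: y' = y * (1.5 - 0.5*x*y*y).
float rsqrt_newton(float x) {
    __m128 vx     = _mm_set_ss(x);
    __m128 y      = _mm_rsqrt_ss(vx);                    // rough hardware estimate
    __m128 half_x = _mm_mul_ss(_mm_set_ss(0.5f), vx);
    __m128 y2     = _mm_mul_ss(y, y);
    __m128 t      = _mm_sub_ss(_mm_set_ss(1.5f), _mm_mul_ss(half_x, y2));
    return _mm_cvtss_f32(_mm_mul_ss(y, t));              // close to full float precision
}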
rsqrtps is only slightly slower than a mulps / mulss instruction on most CPUs, around 5-cycle latency with 1/clock throughput. (With a Newton iteration to refine it, more uops.) Latency varies by microarchitecture, as low as 3 cycles on Zen 3, but Intel has run it at about 5c latency since Conroe at least (https://uops.info/).
The integer shift / subtract from the magic number in the Quake InvSqrt similarly provides an even rougher initial guess, and the rest (after type-punning the bit-pattern back to a float) is a Newton-Raphson iteration.
Compilers will even use rsqrtss for you when compiling sqrt with -ffast-math, depending on context and tuning options. (e.g. modern clang compiling 1.0f/sqrtf(x) with -O3 -ffast-math -march=skylake https://godbolt.org/z/fT86bKesb uses vrsqrtss and 3x vmulss plus an FMA.) Non-reciprocal sqrt is usually not worth it, but rsqrt + refinement avoids a division as well as a sqrt.
Full-precision square root and division themselves are not as slow as they used to be, at least if you use them infrequently compared to mul/add/sub. (e.g. if you can hide the latency, one sqrt every 12 or so other operations might cost about the same, still a single uop instead of multiple for rsqrt + Newton iteration.) See Floating point division vs floating point multiplication
But sqrt and div do compete with each other for throughput so needing to divide by a square root is a nasty case.
So if you have a loop over an array that mostly just does sqrt, not mixed with other math operations, that's a use-case for _mm_rsqrt_ps (and a Newton iteration) as a higher-throughput approximation than _mm_sqrt_ps.
But if you can combine that pass with something else to increase computational intensity and get more work done while keeping the div/sqrt unit busy, it's often better to use a real sqrt instruction on its own, since that's still just 1 uop for the front-end to issue and for the back-end to track and execute, vs. a Newton iteration taking something like 5 uops if FMA is available for reciprocal square root, or more if you also need the non-reciprocal sqrt.
With Skylake for example having 1 per 3 cycle sqrtps xmm throughput (128-bit vectors), it costs the same as a mul/add/sub/fma operation if you don't do more than one per 6 math operations. (Throughput is worse for 256-bit YMM vectors, 6 cycles.) A Newton iteration would cost more uops, so if uops for port 0/1 are the bottleneck, it's a win to just use sqrt directly. (This is assuming that out-of-order exec can hide the latency, typically when each loop iteration is independent.) This kind of situation is common if you're using a polynomial approximation as part of something like log or exp in a loop.
See also Fast vectorized rsqrt and reciprocal with SSE/AVX depending on precision re: performance on modern OoO exec CPUs.

Should I use double data structure to store very large Integer values?

int types support a much smaller range of numbers compared to double. For example, I want to use an integer with a very large range. Should I use double for this purpose, or is there an alternative?
Is arithmetic slow in doubles?
Whether double arithmetic is slow compared to integer arithmetic depends on the CPU and the bit width of the integer/double.
On modern hardware, floating-point arithmetic is generally not slow. Even though the general rule is that integer arithmetic is typically a bit faster than floating-point arithmetic, this is not always true. For instance, multiplication and division can even be significantly faster in floating point than their integer counterparts (see this answer).
This may be different for embedded systems with no hardware support for floating point. Then double arithmetic will be extremely slow.
Regarding your original problem: note that a 64-bit long long int can represent every integer up to 2^63 - 1 exactly, while double can represent integers exactly only up to 2^53. double can hold larger values, but not every integer: they get rounded.
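A quick standalone check of that 2^53 boundary (illustration only, not from the original answer):
#include <cstdint>
#include <cstdio>

int main() {
    std::int64_t n = (std::int64_t)1 << 53;            // 2^53
    double d = (double)n;
    // Every integer up to 2^53 is exactly representable in a double...
    std::printf("%d\n", (double)(n - 1) == d - 1.0);   // prints 1
    // ...but above 2^53 the spacing between adjacent doubles exceeds 1,
    // so 2^53 + 1 rounds back down to 2^53.
    std::printf("%d\n", (double)(n + 1) == d);         // prints 1
    return 0;
}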
The nice thing about floating point is that it is much more convenient to work with. You have special symbols for infinity (Inf) and for undefined (NaN). This makes division by zero possible rather than an exception, and NaN can be used as a return value for errors or abnormal conditions. With integers one often uses -1 or some other sentinel to indicate an error; that can propagate through calculations undetected, whereas NaN keeps propagating and will not go unnoticed.
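For example (a small hypothetical snippet), a NaN produced at any step survives to the final result where it can be tested, while a -1 sentinel is silently absorbed by later arithmetic:
#include <cmath>
#include <cstdio>

int main() {
    double bad = std::sqrt(-1.0);                             // domain error -> NaN
    double result = (bad * 3.0 + 7.0) / 2.0;
    std::printf("NaN propagated: %d\n", std::isnan(result));  // prints 1

    int bad_int = -1;                                         // sentinel "error" value
    int result_int = (bad_int * 3 + 7) / 2;
    std::printf("Sentinel lost: %d\n", result_int);           // prints 2, error is invisible
    return 0;
}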
Practical example: the programming language MATLAB has double as its default data type. It is used everywhere, even in cases where integers are typical, e.g. array indexing. Even though MATLAB is an interpreted language and not as fast as a compiled language such as C or C++, it is quite fast and a powerful tool.
Bottom line: using double instead of integers will not be slow. Perhaps not the most efficient, but the performance hit is not severe (at least not on modern desktop hardware).

What are the AVX-512 Galois-field-related instructions for?

One of the AVX-512 instruction set extensions is AVX-512 + GFNI, "Galois Field New Instructions".
Galois theory is about field extensions. What does that have to do with processing vectorized integer or floating-point values? The instructions supposedly perform "Galois field affine transformation", the inverse of that, and "Galois field multiply bytes".
What fields are those? What do these instructions actually do and what is it good for?
These instructions are closely related to the AES (Rijndael) block cipher. GF2P8AFFINEINVQB performs a Rijndael S-Box substitution with a user-defined affine transformation.
GF2P8AFFINEQB is essentially a (carry-less) multiplication of an 8x8 bit matrix with an 8-bit vector in GF(2), so it should be useful in other bit-oriented algorithms. It can also be used to convert between isomorphic representations of GF(2^8).
GF2P8MULB multiplies two (vectors of) elements of GF(2^8), actually 8-bit numbers in polynomial representation with the Rijndael reduction polynomial. This operation is used in Rijndael's MixColumns step.
Note that multiplication in finite fields is only loosely related to integer multiplication.
One of the major use-cases is I think SW RAID6 parity, including generating new parity on every write (not just during recovery / rebuild). RAID5 can use simple XOR parity for its one and only parity member of each stripe, but RAID6 needs two different parities that can recover N blocks of data from any N of the N+2 blocks of data+parity. This is forward error correction, a similar kind of problem to the one ECC solves.
Galois Fields are useful for this; they're the basis of widely-used Reed-Solomon codes, for example. Par2 uses 16-bit Galois Fields to allow very large block counts, generating relatively fine-grained error-recovery data for a large file or set of files (up to 64k blocks).
Unfortunately GFNI is not great for PAR2 because GFNI only supports GF(2^8), not the GF(2^16) that par2 uses. http://lab.jerasure.org/jerasure/gf-complete/issues/14 says it's possible to use GF2P8AFFINEQB to implement wider word sizes, so it might be possible to speed up PAR2 with it.
But it should be useful for RAID6, including generating new parity on writes which is pretty CPU intensive. The Linux kernel's md driver already includes inline asm to use SSE2 or AVX2, one of the few uses of kernel_fpu_begin() and kernel_fpu_end(). (A 2013 paper looks at optimizing GF coding using Intel SIMD, mentioning Linux's md RAID and GF-Complete, the project linked earlier. The current state of the art is something like two pshufb byte shuffles to implement a 4-bit table lookup; GFNI could bring that down to 1 instruction especially if the hard-coded GF polynomial baked into gf2p8mulb is used.)
(RAID6 uses parity in a different way than par2, generating separate parity for each stripe "vertically" across disks, instead of "horizontally" for one big array of data. The underlying math is similar.)
Intel quite probably plans to support GFNI on some future Silvermont-family Atom, because there are legacy-SSE encodings of the instructions, without 3-operand VEX or EVEX. Many other new instructions are introduced with only VEX encodings, including some of the BMI1/BMI2 scalar integer instructions.
Silvermont-family (Airmont, Goldmont, Tremont, ...) gets some use in NAS appliances where most of the CPU demand could come from RAID6. A future version of it with GFNI could save power, or avoid bottlenecks without raising clock speed.
AVX + GFNI implies support for a YMM version (even without AVX2), and AVX512F + GFNI implies a ZMM version. (The HTML extract at felixcloutier.com strangely only mentions the non-VEX 128-bit encoding while also listing a _mm_maskz_gf2p8affine_epi64_epi8 intrinsic (masking requires EVEX). HJLebbink's HTML extract does include the VEX and EVEX forms. Maybe they only appear in Intel's "future extensions" manual which HJ scrapes but Felix doesn't.)
Using 512-bit vectors does limit turbo clock speeds for a short time after (on Skylake-Xeon), so it might not be desirable for the kernel to do that. But it could give a significant reduction in CPU overhead for some cases, if you're not memory-bound.
A "field" is a mathematical concept:
(wikipedia) In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers do.
...
including the existence of an additive inverse -a for all elements a, and of a multiplicative inverse b^-1 for every nonzero element b
Galois Fields are a kind of Finite Field with this property: the bits of a GF(2^8) element represent the 0 or 1 coefficients of a polynomial of degree less than 8. (It's quite possible I totally butchered that, but it's something like that rather than place-value.) That's why carry-less addition (aka XOR) and carry-less multiplication (using shift/XOR instead of shift/add) are useful over Galois fields.
gf2p8mulb's baked-in polynomial of x^8 + x^4 + x^3 + x + 1 matches the one used in AES (Rijndael); this lends more weight to @nwellnhof's hypothesis that Intel just included it because the HW was already there.
If it's also used in any other common application, it might give us a clue of the "intended" use case for these instructions.
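For concreteness, here is a plain scalar reference for the per-byte product that gf2p8mulb computes (a sketch using the standard shift-and-XOR method with the Rijndael polynomial; the vector instruction does this for 16/32/64 bytes at once):
#include <cstdint>

// Multiply two elements of GF(2^8) modulo the Rijndael polynomial
// x^8 + x^4 + x^3 + x + 1 (0x11B). Addition in this field is XOR;
// there are no carries anywhere.
std::uint8_t gf256_mul(std::uint8_t a, std::uint8_t b) {
    std::uint8_t p = 0;
    for (int i = 0; i < 8; ++i) {
        if (b & 1)
            p ^= a;            // "add" a (carry-less)
        bool hi = a & 0x80;
        a <<= 1;               // multiply a by x
        if (hi)
            a ^= 0x1B;         // reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1;
    }
    return p;
}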
There is a VAES extension that provides versions of AESENC and related instructions for YMM and ZMM vectors, up from just 128-bit vectors with AES-NI + AVX2. So Intel apparently is extending AES HW to 512-bit SIMD vectors. IDK if this motivates wide GFNI or vice versa, or some of both. (Wide GFNI makes a huge amount of sense; if it was limited to 128-bit, an optimized AVX512 implementation using vpshufb for lookup tables would beat it.)
To answer the purpose part, my guess is that these were added primarily for accelerating SM4 encryption, which shares similarity with AES in design.
This guess comes from the fact that ARM also added SM4 acceleration in ARMv8.4 at around the same time, suggesting that chipmakers want to accelerate this algorithm, probably because it'll gain significant traction in the Chinese market. Also, the fact that it's the only AVX512 extension added in Icelake which also has an SSE encoding, so that Tremont could support it, suggests that they intended it for networking/storage purposes.
GFNI is also quite useful in Reed-Solomon coding for error correction (as mentioned by Peter above). It's directly applicable to any GF(2^8) implementation (such as this one), and the affine instruction can be used for other field sizes and polynomials; in fact, it's the fastest technique I know of to do so on an Intel processor.
The affine instruction also has a bunch of use cases beyond GF arithmetic, including 8-bit shifts and bit-permutes (one is sketched below). It's equivalent to RISC-V's bmatxor instruction, where some use cases are listed here.
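For example, a sketch of reversing the bit order within every byte of a vector using the affine instruction (assumes GFNI support, e.g. compiled with -mgfni; the 0x8040201008040201 constant encodes the 8x8 bit-reversal matrix and the function name is my own):
#include <immintrin.h>   // GFNI intrinsics

// y = A*x + b over GF(2) per byte, with A = bit-reversal matrix and b = 0,
// so every byte of x gets its bits reversed in a single instruction.
__m128i bit_reverse_bytes(__m128i x) {
    const __m128i flip = _mm_set1_epi64x((long long)0x8040201008040201ULL);
    return _mm_gf2p8affine_epi64_epi8(x, flip, 0);
}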
Some links describing use cases for this instruction.

Fastest way to implement floating-point multiplication by a small integer constant

Suppose you are trying to multiply a floating-point number k by a small integer constant n (by small I mean -20 <= n <= 20). The naive way of doing this is converting n to a floating point number (which for the purposes of this question does not count towards the runtime) and executing a floating-point multiply. However, for n = 2, it seems likely that k + k is a faster way of computing it. At what n does the multiply instruction become faster than repeated additions (plus an inversion at the end if n < 0)?
Note that I am not particularly concerned about accuracy here; I am willing to allow unsound optimizations as long as they get roughly the right answer (i.e.: up to 1024 ULP error is probably fine).
I am writing OpenCL code, so I'm interested in the answer to this question in many computational contexts (x86-64, x86-64 + AVX256, GPUs).
I could benchmark this, but since I don't have a particular architecture in mind, I'd prefer a theoretical justification of the choice.
According to AMD's OpenCL optimisation guide for GPUs, section 3.8.1 "Instruction Bandwidths", for single-precision floating-point operands, addition, multiplication and 'MAD' (multiply-add) all have a throughput of 5 per cycle on GCN-based GPUs. The same is true for 24-bit integers. Only once you move to 32-bit integers are multiplications much more expensive (1/cycle). Int-to-float conversions and vice versa are also comparatively slow (1/cycle), and unless you have a double-precision-capable model (mostly FirePro/Radeon Pro series, or Quadro/Tesla from Nvidia), operations on doubles are very slow (<1/cycle). Negation is typically "free" on GPUs; for example, GCN has sign flags on instruction operands, so -(a + b) compiles to one instruction after transforming to (-a) + (-b).
Nvidia GPUs tend to be a bit slower at integer operations, for floats it's a similar story to AMD's though: multiplications are just as fast as addition, and if you can combine them into MAD operations, you can double throughput. Intel's GPUs are quite different in other regards, but again they're very fast at FP multiplication and addition.
Basically, it's really hard to beat a GPU at floating-point multiplication, as that's essentially the one thing they're optimised for.
On the CPU it's typically more complicated: Agner Fog's optimisation resources and instruction tables are the place to go for the details. Note, though, that on many CPUs you'll pay a penalty for reinterpreting float data as integer and back, because the ALU and FPU are typically separate. (For example, you might want to optimise multiplying floats by a power of 2 by performing an integer addition on their exponents. On x86 you can do this by operating on SSE or AVX registers with float instructions first, then integer ones, but it's generally not good for performance.)
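For illustration, a scalar sketch of that exponent-addition trick (the function name is mine; it only handles normal, finite floats, and in practice a plain multiply or std::ldexp is usually the better choice):
#include <cstdint>
#include <cstring>   // memcpy for a well-defined type-pun

// Multiply a normal, finite float by 2^k by adding k to its biased exponent.
// No handling of zero, subnormals, infinity, NaN, or exponent overflow --
// all of which a real implementation (or just x * (float)(1 << k)) gets right.
float mul_pow2(float x, int k) {
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits += (std::uint32_t)k << 23;      // the exponent lives in bits 23..30
    std::memcpy(&x, &bits, sizeof bits);
    return x;
}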

Can long integer routines benefit from SSE?

I'm still working on routines for arbitrary long integers in C++. So far, I have implemented addition/subtraction and multiplication for 64-bit Intel CPUs.
Everything works fine, but I wondered if I can speed it a bit by using SSE. I browsed through the SSE docs and processor instruction lists, but I could not find anything I think I can use and here is why:
SSE has some integer instructions, but most instructions handle floating point. It doesn't look like it was designed for use with integers (e.g. is there an integer compare for less?)
The SSE idea is SIMD (same instruction, multiple data), so it provides instructions for 2 or 4 independent operations. I, on the other hand, would like to have something like a 128 bit integer add (128 bit input and output). This doesn't seem to exist. (Yet? In AVX2 maybe?)
The integer additions and subtractions handle neither input nor output carries. So it's very cumbersome (and thus, slow) to do it by hand.
My question is: is my assessment correct or is there anything I have overlooked? Can long integer routines benefit from SSE? In particular, can they help me to write a quicker add, sub or mul routine?
In the past, the answer to this question was a solid "no". But as of 2017, the situation is changing.
But before I continue, time for some background terminology:
Full-Word Arithmetic
Partial-Word Arithmetic
Full-Word Arithmetic:
This is the standard representation where the number is stored in base 2^32 or 2^64 using an array of 32-bit or 64-bit integers.
Many bignum libraries and applications (including GMP) use this representation.
In full-word representation, every integer has a unique representation. Operations like comparison are easy. But addition is more difficult because of the need for carry-propagation.
It is this carry-propagation that makes bignum arithmetic almost impossible to vectorize.
Partial-Word Arithmetic
This is a lesser-used representation where the number uses a base less than the hardware word-size. For example, putting only 60 bits in each 64-bit word. Or using base 1,000,000,000 with a 32-bit word-size for decimal arithmetic.
The authors of GMP call this "nails", where the "nail" is the unused portion of the word.
In the past, use of partial-word arithmetic was mostly restricted to applications working in non-binary bases. But nowadays, it's becoming more important in that it allows carry-propagation to be delayed.
Problems with Full-Word Arithmetic:
Vectorizing full-word arithmetic has historically been a lost cause:
SSE/AVX2 has no support for carry-propagation.
SSE/AVX2 has no 128-bit add/sub.
SSE/AVX2 has no 64 x 64-bit integer multiply.*
*AVX512-DQ adds a lower-half 64x64-bit multiply. But there is still no upper-half instruction.
Furthermore, x86/x64 has plenty of specialized scalar instructions for bignums:
Add-with-Carry: adc, adcx, adox.
Double-word Multiply: Single-operand mul and mulx.
In light of this, both bignum-add and bignum-multiply are difficult for SIMD to beat scalar on x64. Definitely not with SSE or AVX.
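For a sense of scale, the scalar carry chain that SIMD has to beat is just a few instructions per limb. A minimal sketch of a multi-limb add using the _addcarry_u64 intrinsic, which compilers lower to an add/adc chain (the function name and limb layout are my own):
#include <immintrin.h>   // _addcarry_u64 (x86-64)
#include <cstddef>

// dst = a + b for n-limb little-endian bignums; returns the final carry-out.
// Each limb addition consumes and produces a carry, exactly like adc.
unsigned char bignum_add(unsigned long long* dst, const unsigned long long* a,
                         const unsigned long long* b, std::size_t n) {
    unsigned char carry = 0;
    for (std::size_t i = 0; i < n; ++i)
        carry = _addcarry_u64(carry, a[i], b[i], &dst[i]);
    return carry;
}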
With AVX2, SIMD is almost competitive with scalar bignum-multiply if you rearrange the data to enable "vertical vectorization" of 4 different (and independent) multiplies of the same lengths in each of the 4 SIMD lanes.
AVX512 will tip things more in favor of SIMD again assuming vertical vectorization.
But for the most part, "horizontal vectorization" of bignums is largely still a lost cause unless you have many of them (of the same size) and can afford the cost of transposing them to make them "vertical".
Vectorization of Partial-Word Arithmetic
With partial-word arithmetic, the extra "nail" bits enable you to delay carry-propagation.
So as long as you don't overflow the word, SIMD add/sub can be done directly. In many implementations, partial-word representation uses signed integers to allow words to go negative.
Because there is (usually) no need to perform carryout, SIMD add/sub on partial words can be done equally efficiently on both vertically and horizontally-vectorized bignums.
Carryout on horizontally-vectorized bignums is still cheap, as you merely shift the nails over into the next lane. A full carryout to completely clear the nail bits and get to a unique representation usually isn't necessary unless you need to compare two numbers that are almost the same.
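A minimal sketch of the add part under these assumptions (my own layout: 60-bit limbs stored one per 64-bit lane, limb count a multiple of 4, AVX2 available):
#include <immintrin.h>   // AVX2

// Add two partial-word bignums stored as 60-bit limbs in 64-bit lanes.
// With 4 nail bits per limb, roughly 15 such additions can be accumulated
// before a carry-propagation pass is needed to bring limbs back below 2^60.
void nail_add(unsigned long long* dst, const unsigned long long* a,
              const unsigned long long* b, int nlimbs) {
    for (int i = 0; i + 4 <= nlimbs; i += 4) {          // tail handling omitted
        __m256i va = _mm256_loadu_si256((const __m256i*)(a + i));
        __m256i vb = _mm256_loadu_si256((const __m256i*)(b + i));
        _mm256_storeu_si256((__m256i*)(dst + i),
                            _mm256_add_epi64(va, vb));  // lane-wise add, no carries between lanes
    }
}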
Multiplication is more complicated with partial-word arithmetic since you need to deal with the nail bits. But as with add/sub, it is nevertheless possible to do it efficiently on horizontally-vectorized bignums.
AVX512-IFMA (coming with Cannonlake processors) will have instructions that give the full 104 bits of a 52 x 52-bit multiply (presumably using the FPU hardware). This will play very well with partial-word representations that use 52 bits per word.
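A sketch of how those instructions compose a 52 x 52 -> 104-bit product (the intrinsics are the real AVX512-IFMA ones; the accumulator layout is my own simplification and assumes inputs are already reduced below 2^52):
#include <immintrin.h>   // AVX512-IFMA: compile with -mavx512ifma

// Multiply-accumulate of 52-bit limbs, 8 independent limbs per vector:
// lo += low 52 bits of a*b, hi += high 52 bits of a*b.
// a and b must hold values < 2^52 in each 64-bit lane.
void madd52(__m512i& lo, __m512i& hi, __m512i a, __m512i b) {
    lo = _mm512_madd52lo_epu64(lo, a, b);   // accumulate low halves of the 104-bit products
    hi = _mm512_madd52hi_epu64(hi, a, b);   // accumulate high halves
}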
Large Multiplication using FFTs
For really large bignums, multiplication is most efficiently done using Fast-Fourier Transforms (FFTs).
FFTs are completely vectorizable since they work on independent doubles. This is possible because, fundamentally, the representation that FFTs use is a partial-word representation.
To summarize, vectorization of bignum arithmetic is possible. But sacrifices must be made.
If you expect SSE/AVX to be able to speed up some existing bignum code without fundamental changes to the representation and/or data layout, that's not likely to happen.
But nevertheless, bignum arithmetic is possible to vectorize.
Disclosure:
I'm the author of y-cruncher which does plenty of large number arithmetic.

Resources