What are the errors in these avx2 intrinsics, and how to use the raw assembler [duplicate] - gcc

This question already has answers here:
inlining failed in call to always_inline '__m256d _mm256_broadcast_sd(const double*)'
(1 answer)
The Effect of Architecture When Using SSE / AVX Intrinsics
(2 answers)
Closed 2 years ago.
The Intel reference manual is fairly opaque on the details of how instructions are used. There are no examples on each instruction page.
https://software.intel.com/sites/default/files/4f/5b/36945
It's possible we haven't found the right page, or the right manual yet. In any case, the syntax of gnu as might be different.
So I thought perhaps we could call the intrinsics, step in and view the opcodes in gdb, and from that learn the opcodes in gnu as. We also searched for the AVX2 opcodes in the gnu as documentation but didn't find them.
Starting with the C++ code that isn't compiling:
#include <immintrin.h>
int main() {
    __m256i a = _mm256_set_epi32(1, 2, 3, 4, 5, 6, 7, 8); // seems to work
    double x = 1.0, y = 2.0, z = 3.0;
    __m256d c, d;
    __m256d b = _mm256_loadu_pd(&x);
    // __m256d c = _mm256_loadu_pd(&y);
    // __m256d d = _mm256_loadu_pd(&z);
    d = _mm256_fmadd_pd(b, c, d); // c += a * b
    _mm256_add_pd(b, c);
}
g++ -g
We would like to be able to load a vector register %ymm0 with a single value, which appears to be supported by the intrinsic: _mm256_loadu_pd.
Fused multiply-add and add are also there. All intrinsics except the first give errors.
/usr/lib/gcc/x86_64-linux-gnu/9/include/fmaintrin.h:47:1: error: inlining failed in call to always_inline ‘__m256d _mm256_fmadd_pd(__m256d, __m256d, __m256d)’: target specific option mismatch
47 | _mm256_fmadd_pd (__m256d __A, __m256d __B, __m256d __C)
| ^~~~~~~~~~~~~~~
Next, what is the syntax of the underlying assembler instructions? If you could point to a manual showing them, that would be very helpful.
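For reference, a minimal sketch of what typically resolves the "target specific option mismatch" error: tell the compiler that the target supports the instruction sets behind the intrinsics, and broadcast a single double with _mm256_set1_pd. The file name test.cpp is just a placeholder.

g++ -g -O2 -mavx2 -mfma test.cpp      # or: g++ -g -O2 -march=native test.cpp

#include <immintrin.h>
int main() {
    double x = 1.0, y = 2.0, z = 3.0;
    __m256d b = _mm256_set1_pd(x);     // broadcast one double into all four lanes
    __m256d c = _mm256_set1_pd(y);
    __m256d d = _mm256_set1_pd(z);
    d = _mm256_fmadd_pd(b, c, d);      // d = b * c + d
    __m256d sum = _mm256_add_pd(b, c); // vaddpd
    (void)sum; (void)d;                // silence unused-variable warnings
}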

Related

After two mov in memory register system crash

I'm trying to perform some writes to memory-mapped registers from user space (through a custom driver). I want to write three 64-bit integers, and I initialized the variables "value_1, value_2 and value_3" as uint64_t.
I must use a GCC inline-asm MOV instruction, and I'm working on a 64-bit ARM architecture running a custom version of Linux for an embedded system.
This is my code:
asm ( "MOV %[reg1], %[val1]\t\n"
"MOV %[reg2], %[val2]\t\n"
"MOV %[reg3], %[val3]\t\n"
:[reg1] "=&r" (*register_1),[arg2] "=&r" (*register_2), [arg3] "=&r" (*register_3)
:[val1] "r"(value_1),[val2] "r" (value_2), [val3] "r" (value_3)
);
The problem is strange...
If I perform just two MOVs, the code works.
If I perform all three MOVs, the entire system crashes and I have to reboot it.
Even stranger...
If I put a printf or even a nanosleep with 0 nanoseconds between the second and the third MOV, the code works!
I looked around trying to find a solution, and I also tried a memory clobber:
asm ( "MOV %[reg1], %[val1]\t\n"
"MOV %[reg2], %[val2]\t\n"
"MOV %[reg3], %[val3]\t\n"
:[reg1] "=&r" (*register_1),[arg2] "=&r" (*register_2), [arg3] "=&r" (*register_3)
:[val1] "r"(value_1),[val2] "r" (value_2), [val3] "r" (value_3)
:"memory"
);
...doesn't work!
I also used the memory barrier macro between the second and the third MOV, or at the end of the three MOVs:
asm volatile("": : :"memory")
...doesn't work!
Also, I tried to write directly into the registers using pointers, and I had the same behavior: after the second write the system crashes...
Can anybody suggest a solution, or tell me if I'm using the GCC inline MOV or the memory barrier in the wrong way?
----> MORE DETAILS <-----
This is my main:
int main()
{
    int dev_fd;
    volatile void * base_addr = NULL;
    volatile uint64_t * reg1_addr = NULL;
    volatile uint32_t * reg2_addr = NULL;
    volatile uint32_t * reg3_addr = NULL;

    dev_fd = open(MY_DEVICE, O_RDWR);
    if (dev_fd < 0)
    {
        perror("Open call failed");
        return -1;
    }

    base_addr = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE, MAP_SHARED, dev_fd, 0);
    if (base_addr == MAP_FAILED)
    {
        perror("mmap operation failed");
        return -1;
    }

    printf("BASE ADDRESS VIRT: 0x%p\n", base_addr);

    /* Preparing the registers */
    reg1_addr = base_addr + REG1_OFF;
    reg2_addr = base_addr + REG2_OFF;
    reg3_addr = base_addr + REG3_OFF;

    uint64_t val_1 = 0xEEEEEEEE;
    uint64_t val_2 = 0x00030010;
    uint64_t val_3 = 0x01;

    asm ( "str %[val1], %[reg1]\t\n"
          "str %[val2], %[reg2]\t\n"
          "str %[val3], %[reg3]\t\n"
          : [reg1] "=&m" (*reg1_addr), [reg2] "=&m" (*reg2_addr), [reg3] "=&m" (*reg3_addr)
          : [val1] "r" (val_1), [val2] "r" (val_2), [val3] "r" (val_3)
        );

    printf("--- END ---\n");

    close(dev_fd);
    return 0;
}
This is the compiler's output for the asm statement (Linaro toolchain; I cross-compile):
400bfc: f90013a0 str x0, [x29,#32]
400c00: f94027a3 ldr x3, [x29,#72]
400c04: f94023a4 ldr x4, [x29,#64]
400c08: f9402ba5 ldr x5, [x29,#80]
400c0c: f9401ba0 ldr x0, [x29,#48]
400c10: f94017a1 ldr x1, [x29,#40]
400c14: f94013a2 ldr x2, [x29,#32]
400c18: f9000060 str x0, [x3]
400c1c: f9000081 str x1, [x4]
400c20: f90000a2 str x2, [x5]
Thank you!
I tried with *reg1_addr = val_1; and I have the same problem.
Then this code isn't the problem. Avoiding asm is just a cleaner way to get equivalent machine code, without having to use inline asm. Your problem is more likely your choice of registers and values, or the kernel driver.
Or do you need the values to be in CPU registers before writing the first mmaped location, to avoid loading anything from the stack between stores? That's the only reason I can think of that you'd need inline asm, where compiler-generated stores might not be equivalent.
Answer to original question:
An "=&r" output constraint means a CPU register. So your inline-asm instructions will run in that order, assembling to something like
mov x0, x5
mov x1, x6
mov x2, x7
And then after that, compiler-generated code will store the values back to memory in some unspecified order. That order depends on how it chooses to generate code for the surrounding C. This is probably why changing the surrounding code changes the behaviour.
One solution might be "=&m" constraints with str instructions, so your asm does actually store to memory. str %[val1], %[reg1] because STR instructions take the addressing mode as the 2nd operand, even though it's the destination.
Why can't you use plain stores through a volatile uint64_t * (like register_1) like a normal person, to have the compiler emit store instructions that it's not allowed to reorder or optimize away? MMIO is exactly what volatile is for.
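With the pointers already declared volatile in the main() above, that is just three plain assignments (a sketch of the same stores, with no inline asm needed):

*reg1_addr = val_1;   // each volatile store becomes one str that the compiler can't reorder with the other volatile stores or optimize away
*reg2_addr = val_2;
*reg3_addr = val_3;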
Doesn't Linux have macros or functions for doing MMIO loads/stores?
If you're having problems with inline asm, step 1 in debugging should be to look at the actual asm emitted by the compiler when it filled in the asm template, and the surrounding code.
Then single-step by instructions through the code (with GDB stepi, maybe in layout reg mode).
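For example (a sketch; the aarch64-linux-gnu- prefix and the file names are placeholders for whatever your Linaro toolchain actually uses):

aarch64-linux-gnu-gcc -O2 -g -S main.c -o main.s    # inspect the asm GCC emitted around the template
aarch64-linux-gnu-objdump -d ./a.out                # or disassemble the linked binary
gdb ./a.out
(gdb) layout reg      # TUI view of the registers
(gdb) stepi           # single-step one instruction at a time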

CUDA profiler reports inefficient global memory access

I have a simple CUDA kernel which I thought was accessing global memory efficiently. The Nvidia profiler however reports that I am performing inefficient global memory accesses. My kernel code is:
__global__ void update_particles_kernel
(
    float4 *pos,
    float4 *vel,
    float4 *acc,
    float dt,
    int numParticles
)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    int offset = 0;

    while (index + offset < numParticles)
    {
        vel[index + offset].x += dt*acc[index + offset].x; // line 247
        vel[index + offset].y += dt*acc[index + offset].y;
        vel[index + offset].z += dt*acc[index + offset].z;

        pos[index + offset].x += dt*vel[index + offset].x; // line 251
        pos[index + offset].y += dt*vel[index + offset].y;
        pos[index + offset].z += dt*vel[index + offset].z;

        offset += blockDim.x * gridDim.x;
    }
}
In particular the profiler reports the following:
From the CUDA best practices guide it says:
"For devices of compute capability 2.x, the requirements can be summarized quite easily: the concurrent accesses of the threads of a warp will coalesce into a number of transactions equal to the number of cache lines necessary to service all of the threads of the warp. By default, all accesses are cached through L1, which as 128-byte lines. For scattered access patterns, to reduce overfetch, it can sometimes be useful to cache only in L2, which caches shorter 32-byte segments (see the CUDA C Programming Guide).
For devices of compute capability 3.x, accesses to global memory are cached only in L2; L1 is reserved for local memory accesses. Some devices of compute capability 3.5, 3.7, or 5.2 allow opt-in caching of globals in L1 as well."
Now in my kernel, based on this information, I would expect that 16 accesses would be required to service a 32-thread warp, because float4 is 16 bytes and on my card (a 770M, compute capability 3.0) reads from the L2 cache are performed in 32-byte chunks (16 bytes * 32 threads / 32-byte cache lines = 16 accesses). Indeed, as you can see, the profiler reports that I am doing 16 accesses. What I don't understand is why the profiler reports that the ideal access would involve 8 L2 transactions per access for line 247, and only 4 L2 transactions per access for the remaining lines. Can someone explain what I am missing here?
I have a simple CUDA kernel which I thought was accessing global memory efficiently. The Nvidia profiler however reports that I am performing inefficient global memory accesses.
To take one example, your float4 vel array is stored in memory like this:
0.x  0.y  0.z  0.w  1.x  1.y  1.z  1.w  2.x  2.y  2.z  2.w  3.x  3.y  3.z  3.w  ...
^                   ^                   ^                   ^
thread0             thread1             thread2             thread3
So when you do this:
vel[index + offset].x += ...; // line 247
you are accessing (storing) at the locations (.x) that I have marked above. The gaps in between each ^ mark indicate an inefficient access pattern, which the profiler is pointing out. (It does not matter that in the very next line of code, you are storing to the .y locations.)
There are at least 2 solutions, one of which would be a classical AoS -> SoA reorganization of your data, with appropriate code adjustments. This is well documented (e.g. here on the cuda tag and elsewhere) in terms of what it means, and how to do it, so I will let you look that up.
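For illustration only (the struct and field names here are hypothetical, not from the question), an SoA version of the velocity data might look like this:

// Structure of Arrays: one contiguous float array per component.
struct VelSoA {
    float *x;   // numParticles elements each
    float *y;
    float *z;
    float *w;
};
// vel[index + offset].x then becomes vel.x[index + offset], so a warp's 32 threads
// touch 32 adjacent floats (128 contiguous bytes = 4 32-byte L2 segments) instead of
// 32 floats spread across 512 bytes.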
The other typical solution is to load a float4 quantity per thread, when you need it, and store a float4 quantity per thread, when you need to. Your code can be trivially reworked to do this, which should give improved profiling results:
// preceding code need not change
while (index + offset < numParticles)
{
    float4 my_vel = vel[index + offset];
    float4 my_acc = acc[index + offset];
    my_vel.x += dt*my_acc.x;
    my_vel.y += dt*my_acc.y;
    my_vel.z += dt*my_acc.z;
    vel[index + offset] = my_vel;

    float4 my_pos = pos[index + offset];
    my_pos.x += dt*my_vel.x;
    my_pos.y += dt*my_vel.y;
    my_pos.z += dt*my_vel.z;
    pos[index + offset] = my_pos;

    offset += blockDim.x * gridDim.x;
}
Even though you might think that this code is "less efficient" than your code, because your code "appears" to be only loading and storing .x, .y, .z, whereas mine "appears" to also load and store .w, in fact there is essentially no difference, due to the way a GPU loads and stores to/from global memory. Although your code does not appear to touch .w, in the process of accessing the adjacent elements, the GPU will load the .w elements from global memory, and also (eventually) store the .w elements back to global memory.
What I don't understand is why the profiler reports that the ideal access would involve 8 L2 transactions per access for line 247
For line 247 in your original code, you are accessing one float quantity per thread for the load operation of acc.x, and one float quantity per thread for the load operation of vel.x. A float quantity per thread by itself should require 128 bytes for a warp, which is 4 32-byte L2 cachelines. Two loads together would require 8 L2 cacheline loads. This is the ideal case, which assumes that the quantities are packed together nicely (SoA). But that is not what you have (you have AoS).

Faster lookup tables using AVX2

I'm trying to speed up an algorithm which performs a series of table lookups. I'd like to use SSE2 or AVX2. I've tried using the _mm256_i32gather_epi32 intrinsic, but it is 31% slower. Does anyone have any suggestions for improvements or a different approach?
Timings:
C code = 234
Gathers = 340
static const int32_t g_tables[2][64]; // values between 0 and 63
template <int8_t which, class T>
static void lookup_data(int16_t * dst, T * src)
{
    const int32_t * lut = g_tables[which];

    // Leave this code for Broadwell or Skylake since it's 31% slower than C code
    // (gather is 12 for Haswell, 7 for Broadwell and 5 for Skylake)
#if 0
    if (sizeof(T) == sizeof(int16_t)) {
        __m256i avx0, avx1, avx2, avx3, avx4, avx5, avx6, avx7;
        __m128i sse0, sse1, sse2, sse3, sse4, sse5, sse6, sse7;
        __m256i mask = _mm256_set1_epi32(0xffff);

        avx0 = _mm256_loadu_si256((__m256i *)(lut));
        avx1 = _mm256_loadu_si256((__m256i *)(lut + 8));
        avx2 = _mm256_loadu_si256((__m256i *)(lut + 16));
        avx3 = _mm256_loadu_si256((__m256i *)(lut + 24));
        avx4 = _mm256_loadu_si256((__m256i *)(lut + 32));
        avx5 = _mm256_loadu_si256((__m256i *)(lut + 40));
        avx6 = _mm256_loadu_si256((__m256i *)(lut + 48));
        avx7 = _mm256_loadu_si256((__m256i *)(lut + 56));

        avx0 = _mm256_i32gather_epi32((int32_t *)(src), avx0, 2);
        avx1 = _mm256_i32gather_epi32((int32_t *)(src), avx1, 2);
        avx2 = _mm256_i32gather_epi32((int32_t *)(src), avx2, 2);
        avx3 = _mm256_i32gather_epi32((int32_t *)(src), avx3, 2);
        avx4 = _mm256_i32gather_epi32((int32_t *)(src), avx4, 2);
        avx5 = _mm256_i32gather_epi32((int32_t *)(src), avx5, 2);
        avx6 = _mm256_i32gather_epi32((int32_t *)(src), avx6, 2);
        avx7 = _mm256_i32gather_epi32((int32_t *)(src), avx7, 2);

        avx0 = _mm256_and_si256(avx0, mask);
        avx1 = _mm256_and_si256(avx1, mask);
        avx2 = _mm256_and_si256(avx2, mask);
        avx3 = _mm256_and_si256(avx3, mask);
        avx4 = _mm256_and_si256(avx4, mask);
        avx5 = _mm256_and_si256(avx5, mask);
        avx6 = _mm256_and_si256(avx6, mask);
        avx7 = _mm256_and_si256(avx7, mask);

        sse0 = _mm_packus_epi32(_mm256_castsi256_si128(avx0), _mm256_extracti128_si256(avx0, 1));
        sse1 = _mm_packus_epi32(_mm256_castsi256_si128(avx1), _mm256_extracti128_si256(avx1, 1));
        sse2 = _mm_packus_epi32(_mm256_castsi256_si128(avx2), _mm256_extracti128_si256(avx2, 1));
        sse3 = _mm_packus_epi32(_mm256_castsi256_si128(avx3), _mm256_extracti128_si256(avx3, 1));
        sse4 = _mm_packus_epi32(_mm256_castsi256_si128(avx4), _mm256_extracti128_si256(avx4, 1));
        sse5 = _mm_packus_epi32(_mm256_castsi256_si128(avx5), _mm256_extracti128_si256(avx5, 1));
        sse6 = _mm_packus_epi32(_mm256_castsi256_si128(avx6), _mm256_extracti128_si256(avx6, 1));
        sse7 = _mm_packus_epi32(_mm256_castsi256_si128(avx7), _mm256_extracti128_si256(avx7, 1));

        _mm_storeu_si128((__m128i *)(dst), sse0);
        _mm_storeu_si128((__m128i *)(dst + 8), sse1);
        _mm_storeu_si128((__m128i *)(dst + 16), sse2);
        _mm_storeu_si128((__m128i *)(dst + 24), sse3);
        _mm_storeu_si128((__m128i *)(dst + 32), sse4);
        _mm_storeu_si128((__m128i *)(dst + 40), sse5);
        _mm_storeu_si128((__m128i *)(dst + 48), sse6);
        _mm_storeu_si128((__m128i *)(dst + 56), sse7);
    }
    else
#endif
    {
        for (int32_t i = 0; i < 64; i += 4)
        {
            *dst++ = src[*lut++];
            *dst++ = src[*lut++];
            *dst++ = src[*lut++];
            *dst++ = src[*lut++];
        }
    }
}
You're right that gather is slower than a PINSRD loop on Haswell. It's probably nearly break-even on Broadwell. (See also the x86 tag wiki for perf links, especially Agner Fog's insn tables, microarch pdf, and optimization guide)
If your indices are small, or you can slice them up, pshufb can be used as a parallel LUT with 4-bit indices. It gives you sixteen 8-bit table entries, but you can use stuff like punpcklbw to combine two vectors of byte results into one vector of 16-bit results. (Separate tables for the high and low halves of the LUT entries, with the same 4-bit indices.)
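A minimal sketch of the idea (function and variable names here are illustrative; SSSE3):

// pshufb as a 16-entry byte LUT: each result byte = table[index & 0x0F],
// assuming every index byte already has its high bit clear.
__m128i lut16_bytes(__m128i indices, __m128i table)
{
    return _mm_shuffle_epi8(table, indices);   // pshufb
}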
This kind of technique gets used for Galois Field multiplies, when you want to multiply every element of a big buffer of GF16 values by the same value (e.g. for Reed-Solomon error correction codes). Like I said, taking advantage of this requires exploiting special properties of your use-case.
AVX2 can do two 128b pshufbs in parallel, in each lane of a 256b vector. There is nothing better until AVX512F: __m512i _mm512_permutex2var_epi32 (__m512i a, __m512i idx, __m512i b). There are byte (vpermi2b in AVX512VBMI), word (vpermi2w in AVX512BW), dword (this one, vpermi2d in AVX512F), and qword (vpermi2q in AVX512F) element size versions. This is a full cross-lane shuffle, indexing into two concatenated source registers. (Like AMD XOP's vpperm).
The two different instructions behind the one intrinsic (vpermt2d / vpermi2d) give you a choice of overwriting the table with the result, or overwriting the index vector. The compiler will pick based on which inputs are reused.
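For illustration (AVX-512F; the wrapper function name is hypothetical):

// Index into a 32-dword table held in two zmm registers: idx values 0..15 select
// from tab_lo, 16..31 select from tab_hi.
__m512i lut32_dwords(__m512i idx, __m512i tab_lo, __m512i tab_hi)
{
    return _mm512_permutex2var_epi32(tab_lo, idx, tab_hi);   // vpermt2d / vpermi2d
}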
Your specific case:
*dst++ = src[*lut++];
The lookup-table is actually src, not the variable you've called lut. lut is actually walking through an array which is used as a shuffle-control mask for src.
You should make g_tables an array of uint8_t for best performance. The entries are only 0..63, so they fit. Zero-extending loads into full registers are as cheap as normal loads, so it just reduces the cache footprint. To use it with AVX2 gathers, use vpmovzxbd. The intrinsic is frustratingly difficult to use as a load, because there's no form that takes an int64_t *, only __m256i _mm256_cvtepu8_epi32 (__m128i a) which takes a __m128i. This is one of the major design flaws with intrinsics, IMO.
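One common workaround (a sketch, assuming the table has been converted to uint8_t as suggested; lut8 is a hypothetical pointer into it):

__m128i idx_bytes  = _mm_loadl_epi64((const __m128i *)lut8);  // movq load of 8 index bytes
__m256i idx_dwords = _mm256_cvtepu8_epi32(idx_bytes);         // zero-extend to 8 dwords (vpmovzxbd)

Whether the compiler folds that load into a memory-source vpmovzxbd is up to the compiler, which is exactly the annoyance described above.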
I don't have any great ideas for speeding up your loop. Scalar code is probably the way to go here. The SIMD code shuffles 64 int16_t values into a new destination, I guess. It took me a while to figure that out, because I didn't find the if (sizeof...) line right away, and there are no comments. :( It would be easier to read if you used sane variable names, not avx0... Using x86 gather instructions for elements smaller than 4B certainly requires annoying masking. However, instead of pack, you could use a shift and OR.
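To sketch the shift-and-OR idea (note the resulting 16-bit elements come out interleaved across the two gather results rather than packed, so the store pattern would have to change accordingly):

// Keep the wanted low 16 bits of one gather result, move the other's wanted 16 bits
// into the high half of each dword, then OR them together.
__m256i merged = _mm256_or_si256(_mm256_and_si256(avx0, mask),
                                 _mm256_slli_epi32(avx1, 16));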
You could make an AVX512 version for sizeof(T) == sizeof(int8_t) or sizeof(T) == sizeof(int16_t), because all of src will fit into one or two zmm registers.
If g_tables was being used as a LUT, AVX512 could do it easily, with vpermi2b. You'd have a hard time without AVX512, though, because a 64-byte table is too big for pshufb. Using four lanes (16B) of pshufb for each input lane could work: mask off indices outside 0..15, then indices outside 16..31, etc., with pcmpgtb or something. Then you have to OR all four lanes together. So this sucks a lot.
possible speedups: design the shuffle by hand
If you're willing to design a shuffle by hand for a specific value of g_tables, there are potential speedups that way. Load a vector from src, shuffle it with a compile-time constant pshufb or pshufd, then store any contiguous blocks in one go. (Maybe with pextrd or pextrq, or even better movq from the bottom of the vector. Or even a full-vector movdqu).
Actually, loading multiple src vectors and shuffling between them is possible with shufps. It works fine on integer data, with no slowdowns except on Nehalem (and maybe also on Core2). punpcklwd / dq / qdq (and the corresponding punpckhwd etc) can interleave elements of vectors, and give different choices for data movement than shufps.
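For example (a and b are hypothetical __m128i inputs holding 16-bit elements a0..a7 and b0..b7):

__m128i lo = _mm_unpacklo_epi16(a, b);   // a0 b0 a1 b1 a2 b2 a3 b3  (punpcklwd)
__m128i hi = _mm_unpackhi_epi16(a, b);   // a4 b4 a5 b5 a6 b6 a7 b7  (punpckhwd)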
If it doesn't take too many instructions to construct a few full 16B vectors, you're in good shape.
If g_tables can take on too many possible values, it might be possible to JIT-compile a custom shuffle function. This is probably really hard to do well, though.

Why do x64 projects use a default packing alignment of 16?

If you compile the following code in an x64 project in VS2012 without any /Zp flags:
#pragma pack(show)
then the compiler will spit out:
value of pragma pack(show) == 16
If the project uses Win32, then the compiler will spit out:
value of pragma pack(show) == 8
What I don't understand is that the largest natural alignment of any type (ie: long long and pointer) in Win64 is 8. So why not just make the default alignment 8 for x64?
Somewhat related to that, why would anyone ever use /Zp16?
EDIT:
Here's an example to show what I'm talking about. Even though pointers have a natural alignment of 8 bytes for x64, Zp1 can force them to a 1 byte boundary.
struct A
{
    char  a;
    char* b;
};

// Zp16
// Offset of a == 0
// Offset of b == 8

// Zp1
// Offset of a == 0
// Offset of b == 1
Now if we take an example that uses SSE:
struct A
{
    char   a;
    char*  b;
    __m128 c; // uses __declspec(align(16)) in xmmintrin.h
};

// Zp16
// Offset of a == 0
// Offset of b == 8
// Offset of c == 16

// Zp1
// Offset of a == 0
// Offset of b == 1
// Offset of c == 16
If __m128 were truly a builtin type, then I'd expect the offset to be 9 with Zp1. But since it uses __declspec(align(16)) in its definition in xmmintrin.h, that trumps any Zp setting.
So here's my question worded a little differently: is there a type for 'c' that has a natural alignment of 16B but will have an offset of 9 in the previous example?
The MSDN page here includes the following relevant information about your question "why not make the default alignment 8 for x64?":
Writing applications that use the latest processor instructions introduces some new constraints and issues. In particular, many new instructions require that data must be aligned to 16-byte boundaries. Additionally, by aligning frequently used data to the cache line size of a specific processor, you improve cache performance. For example, if you define a structure whose size is less than 32 bytes, you may want to align it to 32 bytes to ensure that objects of that structure type are efficiently cached.
Why do x64 projects use a default packing alignment of 16?
On x64 the floating point is performed in the SSE unit. You state that the largest type has alignment 8. But that is not correct. Some of the SSE intrinsic types, for example __m128, have alignment of 16.
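A quick way to check that claim (a sketch; requires a C++11 compiler with the SSE headers available):

#include <xmmintrin.h>

// Scalars top out at 8-byte alignment on x64, but the SSE vector type needs 16.
static_assert(alignof(long long) == 8,  "largest scalar alignment on x64");
static_assert(alignof(__m128)    == 16, "__m128 requires 16-byte alignment");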

Poor LLVM JIT performance

I have a legacy C++ application that constructs a tree of C++ objects. I want to use LLVM to call class constructors to create said tree. The generated LLVM code is fairly straight-forward and looks like repeated sequences of:
; ...
%11 = getelementptr [11 x i8*]* %Value_array1, i64 0, i64 1
%12 = call i8* @T_string_M_new_A_2Pv(i8* %heap, i8* getelementptr inbounds ([10 x i8]* @0, i64 0, i64 0))
%13 = call i8* @T_QueryLoc_M_new_A_2Pv4i(i8* %heap, i8* %12, i32 1, i32 1, i32 4, i32 5)
%14 = call i8* @T_GlobalEnvironment_M_getItemFactory_A_Pv(i8* %heap)
%15 = call i8* @T_xs_integer_M_new_A_Pvl(i8* %heap, i64 2)
%16 = call i8* @T_ItemFactory_M_createInteger_A_3Pv(i8* %heap, i8* %14, i8* %15)
%17 = call i8* @T_SingletonIterator_M_new_A_4Pv(i8* %heap, i8* %2, i8* %13, i8* %16)
store i8* %17, i8** %11, align 8
; ...
Where each T_ function is a C "thunk" that calls some C++ constructor, e.g.:
void* T_string_M_new_A_2Pv( void *v_value ) {
    string *const value = static_cast<string*>( v_value );
    return new string( value );
}
The thunks are necessary, of course, because LLVM knows nothing about C++. The T_ functions are added to the ExecutionEngine in use via ExecutionEngine::addGlobalMapping().
When this code is JIT'd, the performance of the JIT'ing itself is very poor. I've generated a call-graph using kcachegrind. I don't understand all the numbers (and this PDF seems not to include commas where it should), but if you look at the left fork, the bottom two ovals, Schedule... is called 16K times and setHeightToAtLeas... is called 37K times. On the right fork, RAGreed... is called 35K times.
Those are far too many calls to anything for what's mostly a simple sequence of call LLVM instructions. Something seems horribly wrong.
Any ideas on how to improve the performance of the JIT'ing?
Another order of magnitude is unlikely to happen without a huge change in how the JIT works or looking at the particular calls you're trying to JIT. You could enable -fast-isel-verbose on llc -O0 (e.g. llc -O0 -fast-isel-verbose mymodule.[ll,bc]) and get it to tell you if it's falling back to the selection dag for instruction generation. You may want to profile again and see what the current hot spots are.
