I compiled a libsvm benchmarking app which calls svm_predict() 100 times on the same image using the same model. libsvm is compiled statically (MSVC 2017) by including svm.cpp and svm.h directly in my project.
EDIT: adding benchmark details
for (int i = 0; i < counter; i++)
{
    std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();
    double label = svm_predict(model, input);
    std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count();
    total_time += duration;
    std::cout << "\n\n\n" << total_time << " label:" << label << " duration:" << duration << "\n\n\n";
}
This is the loop that I benchmark without any major modifications to the libsvm code.
After 100 runs the average time for one run is 4.7 ms, with no difference whether or not I use AVX instructions. To make sure the compiler generates the expected instructions, I used the Intel Software Development Emulator to check the instruction mix
with AVX:
*isa-ext-AVX 36578280
*isa-ext-SSE 4
*isa-ext-SSE2 4
*isa-set-SSE 4
*isa-set-SSE2 4
*scalar-simd 36568174
*sse-scalar 4
*sse-packed 4
*avx-scalar 36568170
*avx128 8363
*avx256 1765
without AVX:
*isa-ext-SSE 11781
*isa-ext-SSE2 36574119
*isa-set-SSE 11781
*isa-set-SSE2 36574119
*scalar-simd 36564559
*sse-scalar 36564559
*sse-packed 21341
I would expect to get some performance improvement; I know that avx128/256/512 are not used that much, but still. I have an i7-8550U CPU. Do you think that if I ran the same test on a Skylake i9 X-series I would see a bigger difference?
EDIT: I added the instruction mix for each binary
With AVX:
ADD 16868725
AND 49
BT 6
CALL_NEAR 14032515
CDQ 4
CDQE 3601
CMOVLE 6
CMOVNZ 2
CMOVO 12
CMOVZ 6
CMP 25417120
CMPXCHG_LOCK 1
CPUID 3
CQO 12
DEC 68
DIV 1
IDIV 12
IMUL 3621
INC 8496372
JB 325
JBE 5
JL 7101
JLE 38338
JMP 8416984
JNB 6
JNBE 3
JNL 806
JNLE 61
JNS 1
JNZ 22568320
JS 2
JZ 8465164
LEA 16829868
MOV 42209230
MOVSD_XMM 4
MOVSXD 1141
MOVUPS 4
MOVZX 3684
MUL 12
NEG 72
NOP 4219
NOT 1
OR 14
POP 1869
PUSH 1870
REP_STOSD 6
RET_NEAR 1758
ROL 5
ROR 10
SAR 8
SBB 5
SETNZ 4
SETZ 26
SHL 1626
SHR 519
SUB 6530
TEST 5616533
VADDPD 594
VADDSD 8445597
VCOMISD 3
VCVTSI2SD 3603
VEXTRACTF128 6
VFMADD132SD 12
VFMADD231SD 6
VHADDPD 6
VMOVAPD 12
VMOVAPS 2375
VMOVDQU 1
VMOVSD 11256384
VMOVUPD 582
VMULPD 582
VMULSD 8451540
VPXOR 1
VSUBSD 8407425
VUCOMISD 3600
VXORPD 2362
VXORPS 3603
VZEROUPPER 4
XCHG 8
XGETBV 1
XOR 8414763
*total 213991340
No AVX:
ADD 16869910
ADDPD 1176
ADDSD 8445609
AND 49
BT 6
CALL_NEAR 14032515
CDQ 4
CDQE 3601
CMOVLE 6
CMOVNZ 2
CMOVO 12
CMOVZ 6
CMP 25417408
CMPXCHG_LOCK 1
COMISD 3
CPUID 3
CQO 12
CVTDQ2PD 3603
DEC 68
DIV 1
IDIV 12
IMUL 3621
INC 8496369
JB 325
JBE 5
JL 7392
JLE 38338
JMP 8416984
JNB 6
JNBE 3
JNL 803
JNLE 61
JNS 1
JNZ 22568317
JS 2
JZ 8465164
LEA 16829548
MOV 42209235
MOVAPS 7073
MOVD 3603
MOVDQU 2
MOVSD_XMM 11256376
MOVSXD 1141
MOVUPS 2344
MOVZX 3684
MUL 12
MULPD 1170
MULSD 8451546
NEG 72
NOP 4159
NOT 1
OR 14
POP 1865
PUSH 1866
REP_STOSD 6
RET_NEAR 1758
ROL 5
ROR 10
SAR 8
SBB 5
SETNZ 4
SETZ 26
SHL 1626
SHR 516
SUB 6515
SUBSD 8407425
TEST 5616533
UCOMISD 3600
UNPCKHPD 6
XCHG 8
XGETBV 1
XOR 8414745
XORPS 2364
*total 214000270
Almost all of the arithmetic instructions you are listing work on scalars; e.g., (V)SUBSD means SUBtract Scalar Double. The V in front essentially just means that the AVX encoding is used (this also clears the upper half of the register, which the SSE instructions don't do). Given the instructions you listed, there should be barely any runtime difference.
Modern x86 uses SSE1/2 or AVX for scalar FP math, using just the low element of XMM vector registers. It's somewhat better than x87 (more registers, and flat register set), but it's still only one result per instruction.
There are a few thousand packed SIMD instructions, vs. ~36 million scalar instructions, so only a relatively unimportant part of the code got auto-vectorized and could benefit from 256-bit vectors.
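For a concrete picture of where the time goes: libsvm stores each feature vector sparsely as (index, value) pairs, and kernel evaluation walks two such lists with data-dependent branching. A simplified sketch along the lines of libsvm's Kernel::dot (illustrative, not the verbatim source) shows why the compiler has nothing to auto-vectorize and emits one scalar multiply/add per element:
// Simplified sketch of libsvm's sparse dot product. Lists are
// terminated by index == -1, and the branchy index-matching walk
// keeps the FP math scalar: one multiply and one add per matching
// pair, whether encoded as MULSD/ADDSD (SSE2) or VMULSD/VADDSD (AVX).
struct svm_node { int index; double value; };

double dot(const svm_node *px, const svm_node *py)
{
    double sum = 0;
    while (px->index != -1 && py->index != -1)
    {
        if (px->index == py->index)
        {
            sum += px->value * py->value;   // scalar multiply-add
            ++px; ++py;
        }
        else if (px->index > py->index)
            ++py;
        else
            ++px;
    }
    return sum;
}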
Related
Consider this C code and the generated (by GCC) assembler code for it:
+ cat x.c
struct S
{
    int a, b, c, d, e;
};
void foo(struct S *s)
{
    s->a = 1;
    s->b = 2;
    s->c = 3;
    s->d = 4;
    s->e = 5;
}
build with: gcc -O3 -S x.c. (Output trimmed of some assembler directives)
+ cat x.s
18 foo:
21 movdqa .LC0(%rip), %xmm0
22 movl $5, 16(%rdi)
23 movups %xmm0, (%rdi)
24 ret
28 .section .rodata.cst16,"aM",@progbits,16
29 .align 16
30 .LC0:
31 .long 1
32 .long 2
33 .long 3
34 .long 4
35 .ident "GCC: (GNU) 9.2.1 20190827 (Red Hat 9.2.1-1)"
At line 21, a single instruction loads the 16 bytes containing the values for fields a through d as data.
It seems unintuitive to me that typical performance would be better than doing four immediate store instructions. Wouldn't a stall on a (data) cache load be more likely?
(I believe that clang/LLVM also optimizes for x86 in this manner.)
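(For comparison, here is a hypothetical way to see the four-separate-stores alternative: marking the fields volatile forbids the compiler from merging the stores, so gcc -O3 emits five individual mov-immediate instructions instead of the movdqa/movups copy plus one scalar store.)
/* Hypothetical variant: volatile forces one store per field,
   so no vector merging is possible. */
struct T { volatile int a, b, c, d, e; };
void foo_scalar(struct T *s)
{
    s->a = 1;
    s->b = 2;
    s->c = 3;
    s->d = 4;
    s->e = 5;
}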
Consider the following variable reference in x64 Intel assembly, where the variable a is declared in the .data section:
mov eax, dword ptr [rip + _a]
I have trouble understanding how this variable reference works. Since a is a symbol corresponding to the runtime address of the variable (after relocation), how can [rip + _a] dereference the correct memory location of a? rip holds the address of the current instruction, which is a large positive integer, so wouldn't adding a's address to it yield an address well past a?
Conversely, if I use x86 syntax (which is very intuitive):
mov eax, dword ptr [_a]
I get the following error: 32-bit absolute addressing is not supported in 64-bit mode.
Any explanation?
int a = 5;

int main() {
    int b = a;
    return b;
}
Compilation: gcc -S -masm=intel abs_ref.c -o abs_ref:
.section __TEXT,__text,regular,pure_instructions
.build_version macos, 10, 14
.intel_syntax noprefix
.globl _main ## -- Begin function main
.p2align 4, 0x90
_main: ## #main
.cfi_startproc
## %bb.0:
push rbp
.cfi_def_cfa_offset 16
.cfi_offset rbp, -16
mov rbp, rsp
.cfi_def_cfa_register rbp
mov dword ptr [rbp - 4], 0
mov eax, dword ptr [rip + _a]
mov dword ptr [rbp - 8], eax
mov eax, dword ptr [rbp - 8]
pop rbp
ret
.cfi_endproc
## -- End function
.section __DATA,__data
.globl _a ## #a
.p2align 2
_a:
.long 5 ## 0x5

.subsections_via_symbols
GAS syntax for RIP-relative addressing looks like symbol + current_address (RIP), but it actually means symbol with respect to RIP.
There's an inconsistency with numeric literals:
[rip + 10] or AT&T 10(%rip) means 10 bytes past the end of this instruction
[rip + a] or AT&T a(%rip) means to calculate a rel32 displacement to reach a, not RIP + symbol value. (The GAS manual documents this special interpretation)
[a] or AT&T a is an absolute address, using a disp32 addressing mode. This isn't supported on OS X, where the image base address is always outside the low 32 bits. (Or for mov to/from al/ax/eax/rax, a 64-bit absolute moffs encoding is available, but you don't want that).
Linux position-dependent executables do put static code/data in the low 31 bits (2GiB) of virtual address space, so you can/should use mov edi, sym there, but on OS X your best option is lea rdi, [sym+RIP] if you need an address in a register. See also: Unable to move variables in .data to registers with Mac x86 Assembly.
(In OS X, the convention is that C variable/function names are prepended with _ in asm. In hand-written asm you don't have to do this for symbols you don't want to access from C.)
NASM is much less confusing in this respect:
[rel a] means RIP-relative addressing for [a]
[abs a] means [disp32].
default rel or default abs sets what's used for [a]. The default is (unfortunately) default abs, so you almost always want a default rel line at the top of your file.
Example with .set symbol values vs. a label
.intel_syntax noprefix
mov dword ptr [sym + rip], 0x11111111
sym:
.equ x, 8
inc byte ptr [x + rip]
.set y, 32
inc byte ptr [y + rip]
.set z, sym
inc byte ptr [z + rip]
gcc -nostdlib foo.s && objdump -drwC -Mintel a.out (on Linux; I don't have OS X):
0000000000001000 <sym-0xa>:
1000: c7 05 00 00 00 00 11 11 11 11 mov DWORD PTR [rip+0x0],0x11111111 # 100a <sym> # rel32 = 0; it's from the end of the instruction not the end of the rel32 or anywhere else.
000000000000100a <sym>:
100a: fe 05 08 00 00 00 inc BYTE PTR [rip+0x8] # 1018 <sym+0xe>
1010: fe 05 20 00 00 00 inc BYTE PTR [rip+0x20] # 1036 <sym+0x2c>
1016: fe 05 ee ff ff ff inc BYTE PTR [rip+0xffffffffffffffee] # 100a <sym>
(Disassembling the .o with objdump -dr will show you that there aren't any relocations for the linker to fill in; they were all done at assemble time.)
Notice that only .set z, sym resulted in a with-respect-to calculation. x and y came from plain numeric literals, not labels, so even though the instruction itself used [x + RIP], we still got [RIP + 8].
(Linux non-PIE only): To address absolute 8 wrt. RIP, you'd need AT&T syntax incb 8-.(%rip). I don't know how to write that in GAS intel_syntax; [8 - . + RIP] is rejected with Error: invalid operands (*ABS* and .text sections) for '-'.
Of course you can't do that anyway on OS X, except maybe for absolute addresses that are in range of the image base. But there's probably no relocation that can hold the 64-bit absolute address to be calculated for a 32-bit rel32.
Related:
How to load address of function or label into register (the AT&T version of this)
32-bit absolute addresses no longer allowed in x86-64 Linux? (PIE vs. non-PIE executables, and when you have to use position-independent code)
I have written a benchmark to compute memory bandwidth:
#include <benchmark/benchmark.h>
double sum_array(double* v, long n)
{
double s = 0;
for (long i = 0; i < n; ++i) {
s += v[i];
}
return s;
}
void BM_MemoryBandwidth(benchmark::State& state) {
long n = state.range(0);
double* v = (double*) malloc(state.range(0)*sizeof(double));
for (auto _ : state) {
benchmark::DoNotOptimize(sum_array(v, n));
}
free(v);
state.SetComplexityN(state.range(0));
state.SetBytesProcessed(int64_t(state.range(0))*int64_t(state.iterations())*sizeof(double));
}
BENCHMARK(BM_MemoryBandwidth)->RangeMultiplier(2)->Range(1<<5, 1<<23)->Complexity(benchmark::oN);
BENCHMARK_MAIN();
I compile with
g++-9 -masm=intel -fverbose-asm -S -g -O3 -ffast-math -march=native --std=c++17 -I/usr/local/include memory_bandwidth.cpp
This produces a bunch of moves from RAM, and then some addpd instructions which perf says are hot, so I go into the generated asm and remove them, then assemble and link via
$ g++-9 -c memory_bandwidth.s -o memory_bandwidth.o
$ g++-9 memory_bandwidth.o -o memory_bandwidth.x -L/usr/local/lib -lbenchmark -lbenchmark_main -pthread -fPIC
At this point I get the perf output I expect: movement of data into xmm registers, an increment of the pointer, and a jmp at the end of the loop.
All fine and well up to here. Now here's where things get weird:
I inquire of my hardware what the memory bandwidth is:
$ sudo lshw -class memory
*-memory
description: System Memory
physical id: 3c
slot: System board or motherboard
size: 16GiB
*-bank:1
description: DIMM DDR4 Synchronous 2400 MHz (0.4 ns)
vendor: AMI
physical id: 1
slot: ChannelA-DIMM1
size: 8GiB
width: 64 bits
clock: 2400MHz (0.4ns)
So I should be getting at most 8 bytes * 2.4 GHz = 19.2 gigabytes/second.
But instead I get 48 gigabytes/second:
-------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
-------------------------------------------------------------------------------------
BM_MemoryBandwidth/32 6.43 ns 6.43 ns 108045392 bytes_per_second=37.0706G/s
BM_MemoryBandwidth/64 11.6 ns 11.6 ns 60101462 bytes_per_second=40.9842G/s
BM_MemoryBandwidth/128 21.4 ns 21.4 ns 32667394 bytes_per_second=44.5464G/s
BM_MemoryBandwidth/256 47.6 ns 47.6 ns 14712204 bytes_per_second=40.0884G/s
BM_MemoryBandwidth/512 86.9 ns 86.9 ns 8057225 bytes_per_second=43.9169G/s
BM_MemoryBandwidth/1024 165 ns 165 ns 4233063 bytes_per_second=46.1437G/s
BM_MemoryBandwidth/2048 322 ns 322 ns 2173012 bytes_per_second=47.356G/s
BM_MemoryBandwidth/4096 636 ns 636 ns 1099074 bytes_per_second=47.9781G/s
BM_MemoryBandwidth/8192 1264 ns 1264 ns 553898 bytes_per_second=48.3047G/s
BM_MemoryBandwidth/16384 2524 ns 2524 ns 277224 bytes_per_second=48.3688G/s
BM_MemoryBandwidth/32768 5035 ns 5035 ns 138843 bytes_per_second=48.4882G/s
BM_MemoryBandwidth/65536 10058 ns 10058 ns 69578 bytes_per_second=48.5455G/s
BM_MemoryBandwidth/131072 20103 ns 20102 ns 34832 bytes_per_second=48.5802G/s
BM_MemoryBandwidth/262144 40185 ns 40185 ns 17420 bytes_per_second=48.6035G/s
BM_MemoryBandwidth/524288 80351 ns 80347 ns 8708 bytes_per_second=48.6171G/s
BM_MemoryBandwidth/1048576 160855 ns 160851 ns 4353 bytes_per_second=48.5699G/s
BM_MemoryBandwidth/2097152 321657 ns 321643 ns 2177 bytes_per_second=48.5787G/s
BM_MemoryBandwidth/4194304 648490 ns 648454 ns 1005 bytes_per_second=48.1915G/s
BM_MemoryBandwidth/8388608 1307549 ns 1307485 ns 502 bytes_per_second=47.8017G/s
BM_MemoryBandwidth_BigO 0.16 N 0.16 N
BM_MemoryBandwidth_RMS 1 % 1 %
What am I misunderstanding about memory bandwidth that has made my calculations come out wrong by more than a factor of 2?
(Also, this is kind of an insane workflow for empirically determining how much memory bandwidth I have. Is there a better way?)
Full asm for sum_array after removing add instructions:
_Z9sum_arrayPdl:
.LVL0:
.LFB3624:
.file 1 "example_code/memory_bandwidth.cpp"
.loc 1 5 1 view -0
.cfi_startproc
.loc 1 6 5 view .LVU1
.loc 1 7 5 view .LVU2
.LBB1545:
# example_code/memory_bandwidth.cpp:7: for (long i =0 ; i < n; ++i) {
.loc 1 7 24 is_stmt 0 view .LVU3
test rsi, rsi # n
jle .L7 #,
lea rax, -1[rsi] # tmp105,
cmp rax, 1 # tmp105,
jbe .L8 #,
mov rdx, rsi # bnd.299, n
shr rdx # bnd.299
sal rdx, 4 # tmp107,
mov rax, rdi # ivtmp.311, v
add rdx, rdi # _44, v
pxor xmm0, xmm0 # vect_s_10.306
.LVL1:
.p2align 4,,10
.p2align 3
.L5:
.loc 1 8 9 is_stmt 1 discriminator 2 view .LVU4
# example_code/memory_bandwidth.cpp:8: s += v[i];
.loc 1 8 11 is_stmt 0 discriminator 2 view .LVU5
movupd xmm2, XMMWORD PTR [rax] # tmp115, MEM[base: _24, offset: 0B]
add rax, 16 # ivtmp.311,
.loc 1 8 11 discriminator 2 view .LVU6
cmp rax, rdx # ivtmp.311, _44
jne .L5 #,
movapd xmm1, xmm0 # tmp110, vect_s_10.306
unpckhpd xmm1, xmm0 # tmp110, vect_s_10.306
mov rax, rsi # tmp.301, n
and rax, -2 # tmp.301,
test sil, 1 # n,
je .L10 #,
.L3:
.LVL2:
.loc 1 8 9 is_stmt 1 view .LVU7
# example_code/memory_bandwidth.cpp:8: s += v[i];
.loc 1 8 11 is_stmt 0 view .LVU8
addsd xmm0, QWORD PTR [rdi+rax*8] # <retval>, *_3
.LVL3:
# example_code/memory_bandwidth.cpp:7: for (long i =0 ; i < n; ++i) {
.loc 1 7 5 view .LVU9
inc rax # i
.LVL4:
# example_code/memory_bandwidth.cpp:7: for (long i =0 ; i < n; ++i) {
.loc 1 7 24 view .LVU10
cmp rsi, rax # n, i
jle .L1 #,
.loc 1 8 9 is_stmt 1 view .LVU11
# example_code/memory_bandwidth.cpp:8: s += v[i];
.loc 1 8 11 is_stmt 0 view .LVU12
addsd xmm0, QWORD PTR [rdi+rax*8] # <retval>, *_6
.LVL5:
.loc 1 8 11 view .LVU13
ret
.LVL6:
.p2align 4,,10
.p2align 3
.L7:
.loc 1 8 11 view .LVU14
.LBE1545:
# example_code/memory_bandwidth.cpp:6: double s = 0;
.loc 1 6 12 view .LVU15
pxor xmm0, xmm0 # <retval>
.loc 1 10 5 is_stmt 1 view .LVU16
.LVL7:
.L1:
# example_code/memory_bandwidth.cpp:11: }
.loc 1 11 1 is_stmt 0 view .LVU17
ret
.p2align 4,,10
.p2align 3
.L10:
.loc 1 11 1 view .LVU18
ret
.LVL8:
.L8:
.LBB1546:
# example_code/memory_bandwidth.cpp:7: for (long i =0 ; i < n; ++i) {
.loc 1 7 15 view .LVU19
xor eax, eax # tmp.301
.LBE1546:
# example_code/memory_bandwidth.cpp:6: double s = 0;
.loc 1 6 12 view .LVU20
pxor xmm0, xmm0 # <retval>
jmp .L3 #
.cfi_endproc
.LFE3624:
.size _Z9sum_arrayPdl, .-_Z9sum_arrayPdl
.section .text.startup,"ax",@progbits
.p2align 4
.globl main
.type main, @function
Full output of lshw -class memory:
*-firmware
description: BIOS
vendor: American Megatrends Inc.
physical id: 0
version: 1.90
date: 10/21/2016
size: 64KiB
capacity: 15MiB
capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer acpi usb biosbootspecification uefi
*-memory
description: System Memory
physical id: 3c
slot: System board or motherboard
size: 16GiB
*-bank:0
description: [empty]
physical id: 0
slot: ChannelA-DIMM0
*-bank:1
description: DIMM DDR4 Synchronous 2400 MHz (0.4 ns)
product: CMU16GX4M2A2400C16
vendor: AMI
physical id: 1
serial: 00000000
slot: ChannelA-DIMM1
size: 8GiB
width: 64 bits
clock: 2400MHz (0.4ns)
*-bank:2
description: [empty]
physical id: 2
slot: ChannelB-DIMM0
*-bank:3
description: DIMM DDR4 Synchronous 2400 MHz (0.4 ns)
product: CMU16GX4M2A2400C16
vendor: AMI
physical id: 3
serial: 00000000
slot: ChannelB-DIMM1
size: 8GiB
width: 64 bits
clock: 2400MHz (0.4ns)
Is the CPU relevant here? Well, here are the specs:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 94
Model name: Intel(R) Pentium(R) CPU G4400 @ 3.30GHz
Stepping: 3
CPU MHz: 3168.660
CPU max MHz: 3300.0000
CPU min MHz: 800.0000
BogoMIPS: 6624.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0,1
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust erms invpcid rdseed smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
The data produced by the clang compile is much more intelligible: the performance monotonically decreases until it hits 19.8 GB/s as the vector gets much larger than cache. (The clang benchmark output is not reproduced here.)
From your hardware description, it looks like you have two DIMMs installed, one in each of the two memory channels. This interleaves memory between the two DIMMs, so that sequential accesses read from both chips. (One possibility is that bytes 0-7 are in DIMM1 and bytes 8-15 are in DIMM2, but this depends on the hardware implementation.) This doubles the memory bandwidth, because you're accessing two hardware chips instead of one.
Some systems support three or four channels, further increasing the maximum bandwidth.
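As a back-of-the-envelope check (assuming your two populated DIMMs, ChannelA-DIMM1 and ChannelB-DIMM1, really do run in dual-channel mode), the doubling works out like this:
// Rough peak-bandwidth arithmetic for this DDR4-2400 setup.
constexpr double transfers_per_second = 2400e6; // DDR4-2400: 2400 MT/s
constexpr double bytes_per_transfer   = 8;      // 64-bit channel width
constexpr int    channels             = 2;      // one DIMM in each channel
constexpr double peak_bytes_per_second =
    transfers_per_second * bytes_per_transfer * channels; // ~38.4e9, i.e. 38.4 GB/s
That is double the single-channel 19.2 GB/s estimate from the question.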
Here's a very simple factorial function.
int factorial(int num) {
    if (num == 0)
        return 1;
    return num * factorial(num - 1);
}
GCC's assembly for this function on -O2 is reasonable.
factorial(int):
mov eax, 1
test edi, edi
je .L1
.L2:
imul eax, edi
sub edi, 1
jne .L2
.L1:
ret
However, on -O3 or -Ofast, it decides to make things way more complicated (almost 100 lines!):
factorial(int):
test edi, edi
je .L28
lea edx, [rdi-1]
mov ecx, edi
cmp edx, 6
jbe .L8
mov DWORD PTR [rsp-12], edi
movd xmm5, DWORD PTR [rsp-12]
mov edx, edi
xor eax, eax
movdqa xmm0, XMMWORD PTR .LC0[rip]
movdqa xmm4, XMMWORD PTR .LC2[rip]
shr edx, 2
pshufd xmm2, xmm5, 0
paddd xmm2, XMMWORD PTR .LC1[rip]
.L5:
movdqa xmm3, xmm2
movdqa xmm1, xmm2
paddd xmm2, xmm4
add eax, 1
pmuludq xmm3, xmm0
psrlq xmm1, 32
psrlq xmm0, 32
pmuludq xmm1, xmm0
pshufd xmm0, xmm3, 8
pshufd xmm1, xmm1, 8
punpckldq xmm0, xmm1
cmp eax, edx
jne .L5
movdqa xmm2, xmm0
movdqa xmm1, xmm0
mov edx, edi
psrldq xmm2, 8
psrlq xmm0, 32
and edx, -4
pmuludq xmm1, xmm2
psrlq xmm2, 32
sub edi, edx
pmuludq xmm0, xmm2
pshufd xmm1, xmm1, 8
pshufd xmm0, xmm0, 8
punpckldq xmm1, xmm0
movdqa xmm0, xmm1
psrldq xmm1, 4
pmuludq xmm0, xmm1
movd eax, xmm0
cmp ecx, edx
je .L1
lea edx, [rdi-1]
.L3:
imul eax, edi
test edx, edx
je .L1
imul eax, edx
mov edx, edi
sub edx, 2
je .L1
imul eax, edx
mov edx, edi
sub edx, 3
je .L1
imul eax, edx
mov edx, edi
sub edx, 4
je .L1
imul eax, edx
mov edx, edi
sub edx, 5
je .L1
imul eax, edx
sub edi, 6
je .L1
imul eax, edi
.L1:
ret
.L28:
mov eax, 1
ret
.L8:
mov eax, 1
jmp .L3
.LC0:
.long 1
.long 1
.long 1
.long 1
.LC1:
.long 0
.long -1
.long -2
.long -3
.LC2:
.long -4
.long -4
.long -4
.long -4
I got these results using Compiler Explorer, so it should be the same in a real-world use case.
What's up with that? Are there any cases where this would be faster? Clang seems to do something like this too, but on -O2.
imul r32,r32 has 3 cycle latency on typical modern x86 CPUs (http://agner.org/optimize/). So the scalar implementation can do one multiply per 3 clock cycles, because they're dependent. It's fully pipelined, though, so your scalar loop leaves 2/3rds of the potential throughput unused.
In 3 cycles, the pipeline in Core2 or later can feed 12 uops into the out-of-order part of the core. For small inputs, it might be best to keep the code small and let out-of-order execution overlap the dependency chain with later code, especially if that later code doesn't all depend on the factorial result. But compilers aren't good at knowing when to optimize for latency vs. throughput, and without profile-guided optimization they have no data on how large n usually is.
I suspect that gcc's auto-vectorizer isn't looking at how quickly this will overflow for large n.
A useful scalar optimization would have been unrolling with multiple accumulators, e.g. take advantage of the fact that multiplication is associative and do these in parallel in the loop: prod(n*3/4 .. n) * prod(n/2 .. n*3/4) * prod(n/4 .. n/2) * prod(1..n/4) (with non-overlapping ranges, of course). Multiplication is associative even when it wraps; the product bits only depend on bits at that position and lower, not on (discarded) high bits.
Or more simply, do f0 *= i; f1 *= i+1; f2 *= i+2; f3 *= i+3; i+=4;. And then outside the loop, return (f0*f1) * (f2*f3);. This would be a win in scalar code, too. Of course you also have to account for n % 4 != 0 when unrolling.
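A minimal scalar sketch of that unrolling idea (hypothetical code, using unsigned so the wraparound is well defined):
// Four independent imul dependency chains can overlap in the pipeline,
// approaching one multiply per cycle instead of one per 3 cycles.
unsigned factorial4(unsigned n)
{
    unsigned f0 = 1, f1 = 1, f2 = 1, f3 = 1;
    unsigned i = 1;
    for (; i + 3 <= n; i += 4) {
        f0 *= i;
        f1 *= i + 1;
        f2 *= i + 2;
        f3 *= i + 3;
    }
    for (; i <= n; ++i)     // handle the n % 4 leftover iterations
        f0 *= i;
    return (f0 * f1) * (f2 * f3);
}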
What gcc has chosen to do is basically the latter, using pmuludq to do 2 packed multiplies with one instruction (5c latency / 1c or 0.5c throughput on Intel CPUs). It's similar on AMD CPUs; see Agner Fog's instruction tables. Each vector loop iteration does 4 iterations of the factorial loop in your C source, and there's significant instruction-level parallelism within one iteration.
The inner loop is only 12 uops long (cmp/jcc macro-fuses into 1), so it can issue at 1 iteration per 3 cycles, same throughput as the latency bottleneck in your scalar version, but doing 4x as much work per iteration.
.L5:
movdqa xmm3, xmm2 ; copy the old i vector
movdqa xmm1, xmm2
paddd xmm2, xmm4 ; [ i0, i1 | i2, i3 ] += 4
add eax, 1
pmuludq xmm3, xmm0 ; [ f0 | f2 ] *= [ i0 | i2 ]
psrlq xmm1, 32 ; bring odd 32 bit elements down to even: [ i1 | i3 ]
psrlq xmm0, 32
pmuludq xmm1, xmm0 ; [ f1 | f3 ] *= [ i1 | i3 ]
pshufd xmm0, xmm3, 8
pshufd xmm1, xmm1, 8
punpckldq xmm0, xmm1 ; merge back into [ f0 f1 f2 f3 ]
cmp eax, edx
jne .L5
So gcc wastes a whole lot of effort emulating a packed 32-bit multiply instead of just keeping two separate vector accumulators when using pmuludq. I also looked at clang 6.0; I think it's falling into the same trap. (Source+asm on the Godbolt compiler explorer)
You didn't use -march=native or anything, so only SSE2 (baseline for x86-64) is available, and the only SIMD multiply for 32-bit input elements is the widening 32x32 => 64-bit pmuludq. SSE4.1 pmulld is 2 uops on Haswell and later (single-uop on Sandybridge), but it would avoid all of gcc's stupid shuffling.
Of course there's a latency bottleneck here, too, especially because of gcc's missed optimizations increasing the length of the loop-carried dep chain involving the accumulators.
Unrolling with more vector accumulators could hide a lot of the pmuludq latency.
With good vectorization, the SIMD integer multipliers can manage 2x or 4x the throughput of the scalar integer multiply unit. (Or, with AVX2, 8x the throughput using vectors of 8x 32-bit integers.)
But the wider the vectors and the more unrolling, the more cleanup code you need.
gcc -march=haswell
We get an inner loop like this:
.L5:
inc eax
vpmulld ymm1, ymm1, ymm0
vpaddd ymm0, ymm0, ymm2
cmp eax, edx
jne .L5
Super simple, but a 10c latency loop-carried dependency chain :/ (pmulld is 2 dependent uops on Haswell and later). Unrolling with multiple accumulators can give up to a 10x throughput boost for large inputs, for 5c latency / 0.5c throughput for SIMD integer multiply uops on Skylake.
But 4 multiplies per 5 cycles is still much better than 1 per 3 for scalar.
Clang unrolls with multiple accumulators by default, so it should be good. But it's a lot of code, so I didn't analyze it by hand. Plug it into IACA or benchmark it for large inputs. (What is IACA and how do I use it?)
Efficient strategies for handling the unroll epilogue:
A lookup table for factorial [0..7] is probably the best bet. Arrange things so your vector / unrolled loop does n%8 .. n, instead of 1 .. n/8*8, so the left-over part is always the same for every n.
After a horizontal vector product, do one more scalar multiply with the table lookup result. A SIMD loop already needs some vector constants so you'll probably touch memory anyway, and the table lookup can happen in parallel with the main loop.
8! is 40320, which fits in 16 bits, so a 1..8 lookup table only needs 8 * 2 bytes of storage. Or use 32-bit entries so you can use a memory source operand for imul instead of a separate movzx.
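A sketch of that table-based epilogue (hypothetical helper names; 0!..7! happen to fit in 16 bits, but 32-bit entries are used here so imul can take the table as a memory source operand):
static const unsigned fact_lut[8] = {1, 1, 2, 6, 24, 120, 720, 5040}; // 0! .. 7!

unsigned factorial_table_start(unsigned n)
{
    unsigned prod = fact_lut[n % 8];   // start from (n % 8)!
    // The remaining n - n%8 iterations are a multiple of 8, so a
    // vector/unrolled loop over n%8+1 .. n needs no variable cleanup.
    for (unsigned i = n % 8 + 1; i <= n; ++i)
        prod *= i;
    return prod;
}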
It doesn't make it worse. It runs faster for large numbers. Here are the results for factorial(1000000000):
-O2: 0.78 sec
-O3: 0.5 sec
Of course, using that large number is undefined behavior (because of overflow with signed arithmetic). But the timing is the same with unsigned numbers, for which it is not undefined behavior.
Note that this usage of factorial is usually pointless, as it doesn't calculate num!, but num! & UINT_MAX (for example, factorial(13) returns 1932053504, which is 6227020800 mod 2^32). But the compiler doesn't know about this.
Maybe with PGO, the compiler won't vectorize this code, if it is always called with small numbers.
If you don't like this behavior but you want to use -O3, you can turn off auto-vectorization with -fno-tree-loop-vectorize.
I am making a bubble sort function in assembly for my computer systems class.
My function looks like this:
int sorter(int* list, int count, int opcode) {
__asm {
mov ebx, opcode ; opcode in ebx
push 0 ; outer count
mov ecx, 0 ; start ecx at 0 to start loop
mov esi, list ; set esi to list value/ starting address
jmp iloop ; jump to inner loop
oloop: //OUTER LOOP
push ecx ;
mov ecx, 0 ; reset inner loop count to 0
iloop: // inner loop
mov edx, dword ptr[esi + 4 * ecx]; move first value in edx
mov eax, dword ptr[esi + 4 + 4 * ecx]; move next value in list to eax
cmp ebx, 2 ; compare opcode with 2
je dsnd ; if opcode is equal to 2 then we are using descending order
cmp eax, edx ; compare values at eax and edx
jg no_swap ; if eax (2nd value) is greater, don't swap (ascending order)
cont: //continue from descend
push edx ; push contents of edx to stack
pop dword ptr[esi + 4 + 4 * ecx]; pop the contents on stack to address of value in eax
push eax ; push value in eax to stack
pop dword ptr[esi + 4 * ecx]; pop value on stack to address of value previously in eax
no_swap: //no value swap
inc ecx ; increment inner count
cmp ecx, count ; compare inner loop count to actual length
jne iloop ; if not equal jump back to inner loop
pop ecx ; get outer count
inc ecx ; to check for n-1 in outer loop
cmp ecx, count ; compare outer loop count to length
jne oloop ; if not equal jump back to outer loop
jmp done ;
dsnd:
cmp eax, edx ; compare values at eax and edx
jl no_swap ; if the value is less, don't swap
jmp cont ; continue with loop
done:
    }
}
where opcode is 1 for an ascending sort or 2 for descending order, list is a pointer to the list of ints, and count is the number of ints in the list.
For ascending sort, my program works fine, but with descending I have issues as shown in these test runs:
input 10 -20 5 12 30 -5 -22 55 52 0
Number of integer = 10
Ascend_or_Descend = 1
-22 -20 -5 0 5 10 12 30 52 55 # correct
input 48 -24 48 -24 10 100 -10 60 -256 10 -10 4096 -1024 60 10 -10
Number of integer = 16
Ascend_or_Descend = 1
-1024 -256 -24 -24 -10 -10 -10 10 10 10 48 48 60 60 100 4096 # correct
input 10 -20 5 12 30 -5 -22 55 52 0
Number of integer = 10
Ascend_or_Descend = 2
4283780 55 52 30 12 10 5 0 -5 -20 # incorrect
input 48 -24 48 -24 10 100 -10 60 -256 10 -10 4096 -1024 60 10 -10
Number of integer = 16
Ascend_or_Descend = 2
1500056 4096 100 60 60 48 48 10 10 10 -10 -10 -10 -24 -24 -256 # incorrect
It seems to take the lowest value and swap it with an address. I am no expert in assembly.