How does Linux perf calculate the cache-references and cache-misses events?

I am confused by the perf events cache-misses, L1-icache-load-misses, L1-dcache-load-misses, and LLC-load-misses. When I perf stat all of them, the numbers don't seem consistent:
%$: sudo perf stat -B -e cache-references,cache-misses,cycles,instructions,branches,faults,migrations,L1-dcache-load-misses,L1-dcache-loads,L1-dcache-stores,L1-icache-load-misses,LLC-loads,LLC-load-misses,LLC-stores,LLC-store-misses,LLC-prefetches ./my_app
523,288,816 cache-references (22.89%)
205,331,370 cache-misses # 39.239 % of all cache refs (31.53%)
10,163,373,365 cycles (39.62%)
13,739,845,761 instructions # 1.35 insn per cycle (47.43%)
2,520,022,243 branches (54.90%)
20,341 faults
147 migrations
237,794,728 L1-dcache-load-misses # 6.80% of all L1-dcache hits (62.43%)
3,495,080,007 L1-dcache-loads (69.95%)
2,039,344,725 L1-dcache-stores (69.95%)
531,452,853 L1-icache-load-misses (70.11%)
77,062,627 LLC-loads (70.47%)
27,462,249 LLC-load-misses # 35.64% of all LL-cache hits (69.09%)
15,039,473 LLC-stores (15.15%)
3,829,429 LLC-store-misses (15.30%)
The L1-* and LLC-* events are easy to understand, as I can tell they are read from the hardware counters in the CPU.
But how does perf calculate the cache-misses event? From my understanding, if cache-misses counts the number of memory accesses that cannot be served by the CPU cache, then shouldn't it be equal to LLC-load-misses + LLC-store-misses? Clearly in my case, cache-misses is much higher than the Last-Level-Cache-Misses number.
The same confusion goes for cache-references. It is much lower than L1-dcache-loads and much higher than LLC-loads + LLC-stores.
My Linux kernel and CPU info:
%$: uname -r
4.10.0-22-generic
%$: lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i5-7600K CPU @ 3.80GHz
Stepping: 9
CPU MHz: 885.754
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7584.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 6144K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp

The built-in perf events that you are interested in map to the following hardware performance monitoring events on your processor:
523,288,816 cache-references (architectural event: LLC Reference)
205,331,370 cache-misses (architectural event: LLC Misses)
237,794,728 L1-dcache-load-misses L1D.REPLACEMENT
3,495,080,007 L1-dcache-loads MEM_INST_RETIRED.ALL_LOADS
2,039,344,725 L1-dcache-stores MEM_INST_RETIRED.ALL_STORES
531,452,853 L1-icache-load-misses ICACHE_64B.IFTAG_MISS
77,062,627 LLC-loads OFFCORE_RESPONSE (MSR bits 0, 16, 30-37)
27,462,249 LLC-load-misses OFFCORE_RESPONSE (MSR bits 0, 17, 26-29, 30-37)
15,039,473 LLC-stores OFFCORE_RESPONSE (MSR bits 1, 16, 30-37)
3,829,429 LLC-store-misses OFFCORE_RESPONSE (MSR bits 1, 17, 26-29, 30-37)
All of these events are documented in the Intel manual Volume 3. For more information on how to map perf events to native events, see: Hardware cache events and perf and How does perf use the offcore events?.
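If you want to double-check the first two mappings, the architectural events can also be requested by their raw codes: event 0x2E with umask 0x4F (LLC Reference) and umask 0x41 (LLC Misses), as documented in the SDM. A minimal illustrative command, assuming a reasonably recent perf:
%$: sudo perf stat -e r4f2e,r412e ./my_app
The two counts should closely track cache-references and cache-misses.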
But how does perf calculate the cache-misses event? From my understanding,
if cache-misses counts the number of memory accesses that cannot
be served by the CPU cache, then shouldn't it be equal to
LLC-load-misses + LLC-store-misses? Clearly in my case,
cache-misses is much higher than the Last-Level-Cache-Misses number.
LLC-load-misses and LLC-store-misses count only cacheable data read requests and RFO requests, respectively, that miss in the L3 cache. LLC-load-misses also includes reads for page walking. Both exclude hardware and software prefetching. (The difference compared to Haswell is that some types of prefetch requests are counted.)
cache-misses also includes prefetch requests and code fetch requests that miss in the L3 cache. All of these events only count core-originating requests. They include requests from uops irrespective of whether they end up retiring, and irrespective of the source of the response. It's unclear to me how a prefetch promoted to demand is counted.
Overall, I think cache-misses is always larger than LLC-load-misses + LLC-store-misses and cache-references is always larger than LLC-loads + LLC-stores.
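Plugging in the numbers from the question: LLC-load-misses + LLC-store-misses = 27,462,249 + 3,829,429 = 31,291,678, well below the 205,331,370 cache-misses, and LLC-loads + LLC-stores = 77,062,627 + 15,039,473 = 92,102,100, well below the 523,288,816 cache-references, which is consistent with this.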
The same confusion goes for cache-references. It is much lower than
L1-dcache-loads and much higher than LLC-loads + LLC-stores
It's only guaranteed that cache-references is larger than cache-misses, because the former counts requests irrespective of whether they miss the L3. It's normal for L1-dcache-loads to be larger than cache-references, because core-originated loads usually occur only when you have load instructions and because of the cache locality exhibited by many programs. But it's not necessarily always the case, because of hardware prefetches.
The L1-* and LLC-* events are easy to understand, as I can tell they
are read from the hardware counters in the CPU.
No, it's a trap. They are not easy to understand.

Related

Comparing application performance between CPU architectures

I have a Java Servlet based application running on Apache Tomcat on two different machines with similar hardware (RAM, SSD disk, network interface and bandwidth) but different CPU architectures:
x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6266C CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 30976K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512_vnni md_clear flush_l1d arch_capabilities
aarch64
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: 0x48
Model: 0
Stepping: 0x1
BogoMIPS: 200.00
L1d cache: 64K
L1i cache: 64K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
I have experience profiling Java applications for both CPU and memory usage with tools like YourKit, JProfiler and Async Profiler, and I think I've found all the obvious performance-related problems in our application. Using Apache JMeter (5.3.0) I've created a test plan that simulates a real-world load: 9000 virtual users navigate the application, with think time, ramp-up time, etc. The JMeter reports for both machines look very similar - after all the tweaking and tuning I was able to reach 1200 requests per second with this JMeter plan. If I increase the number of virtual users or decrease the think time then JMeter starts reporting errors, mostly related to timeouts (both connect and read timeouts).
So I've decided to use wrk. With it, the client machine (the machine the load-test client runs on) uses far fewer resources and I was able to get much better throughput:
around 40000 req/s when executing against the x86_64 machine
around 20000 req/s when executing against the aarch64 machine
Now, my question is: how do I find out what makes the x86_64 machine twice as performant as the aarch64 one? What kind of tools would you use to find where the difference is?
I've tried the perf tool, but so far I cannot really grasp how to read and interpret its records.
One thing I know for sure is that it is not the network bandwidth, because with iperf I can get 5.48 Gbit/s, while wrk reaches at most 220 Mbit/s (according to nload). If I am not wrong, this is well below the maximum throughput.
All machines run on Ubuntu 18.04.4
Looking at your own CPU information:
x86_64 - BogoMIPS: 6000.00
aarch64 - BogoMIPS: 200.00
And as per Wikipedia:
BogoMips (from "bogus" and MIPS) is a crude measurement of CPU speed
made by the Linux kernel when it boots to calibrate an internal
busy-loop.1 An often-quoted definition of the term is "the number of
million times per second a processor can do absolutely nothing"
It's related to the CPU frequency, so my expectation is that the ARM processor's actual frequency is much lower. You can use the sar tool or the JMeter PerfMon Plugin to check both systems' metrics (CPU, RAM, swap, etc.); this way you will be able to tell for sure what the bottleneck is on the ARM system.
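For example, to sample CPU utilization once per second for a minute on each machine while the test is running (a minimal sketch; sar is part of the sysstat package):
sar -u 1 60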
With regards to the tool selection, JMeter is "heavier" than wrk, but it is also more powerful due to its support for cookies, caching, and working with embedded resources (parsing the response and automatically downloading images, scripts, styles, etc.).

"openssl speed rsa" less performant on (normally) better cpu

I'm trying to figure out why "openssl speed rsa" gives me a worse result on a (nominally) better CPU.
1st server: Linux Debian 8 (running Xen) - kernel: 4.9.0-amd64
model name : Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
cpu MHz : 2200.004
cache size : 30720 KB
flags : fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 hle avx2 bmi2 erms rtm rdseed adx xsaveopt ibpb ibrs stibp
bogomips : 4400.00
2nd server: Linux Debian 8 (running VMware ESXi; I don't know which version yet) - kernel: 4.9.0-amd64
model name : Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
cpu MHz : 2199.058
cache size : 51200 KB
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm kaiser arat
bogomips : 4399.99
Running a "openssl speed rsa" is giving me this (only pasting 4096bits because it's the only relevant for what I want to do):
1st server:
Doing 4096 bits private rsa's for 10s: **1699** 4096 bits private RSA's in 10.00s
Doing 4096 bits public rsa's for 10s: 105493 4096 bits public RSA's in 10.00s
2nd server:
Doing 4096 bits private rsa's for 10s: **1229** 4096 bits private RSA's in 10.00s
Doing 4096 bits public rsa's for 10s: 78677 4096 bits public RSA's in 10.00s
What could explain the difference in the number of keys created (1699 - 1229 = 470)?
Both servers have their cpu with the aes flag.
The only difference I see is in the engines available: the 1st server has the "(rdrand) Intel RDRAND engine" and the other does not.
Any idea?
Edit:
As stated by @Alexei Khlebnikov, the openssl speed rsa command only measures the speed of the RSA sign/verify functions, and these don't use random numbers. Because of that, my original answer doesn't answer the question.
After a quick search, I found that the 1st server has the bmi2 and adx instructions, while the 2nd server doesn't. These instructions are used to improve the performance of Montgomery's integer multiplication/squaring, which is used in the RSA signing operation. It's hard to confirm that this is the reason for the performance difference, but it can be one of the reasons.
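To double-check which machine actually has those instructions, you can grep the flags line in /proc/cpuinfo, for example:
grep -Ewo 'bmi2|adx' /proc/cpuinfo | sort -u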
Original answer:
To generate RSA keys you need random, large prime numbers. The process of finding a random large prime number consists of:
Generate a random number;
Check if it's prime;
If it's not, repeat.
As you can see, this involves a lot of random number generation, and generating good random numbers is really slow. So having a faster RNG means faster RSA key generation.
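For completeness (keeping in mind the edit above: key generation is not what openssl speed rsa measures), here is a minimal sketch of that loop using OpenSSL's own API; it assumes libcrypto is available and the program is linked with -lcrypto:
#include <openssl/bn.h>
#include <cstdio>

int main()
{
    BIGNUM *p = BN_new();
    // BN_generate_prime_ex does the loop described above: draw random odd
    // candidates and run primality tests until one of them passes.
    if (BN_generate_prime_ex(p, 2048, /*safe=*/0, nullptr, nullptr, nullptr))
        std::printf("found a %d-bit probable prime\n", BN_num_bits(p));
    BN_free(p);
    return 0;
}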

MPICH2 on a machine with two NUMA nodes

I am new to MPI. I am using MPICH2 on a Linux machine with the following information:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Stepping: 4
CPU MHz: 799.844
CPU max MHz: 3000.0000
CPU min MHz: 800.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 14080K
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
My understanding is that I've got 2 NUMA nodes, 20 cores and 40 hardware threads (i.e. logical processors) on this machine. Is this correct? If so, I think I should set MPICH to spawn 20 processes (one process on each physical core), right? However, when I run the command mpiexec -n 20 MyProgram, the average CPU usage is only 50%. If I change to mpiexec -n 40 MyProgram, the CPU usage is 100%, but the overall performance actually becomes worse, so I think I might be over-subscribing.
CPU usage is a misleading metric. CPU usage reflects the portion of time some task was scheduled on a logical CPU. The CPU average is just that, the average over all logical CPUs. So a 50% CPU average can simply mean that every other logical CPU is at 100% usage (and the rest at 0%). You would observe exactly this in a situation where each physical core is always busy.
Nor does CPU usage equate to efficient resource utilization. There are workloads that benefit from using hyperthreading and workloads that don't. There are workloads that can be faster using fewer threads than physical cores (e.g. memory-bandwidth limited). There are workloads that can be faster using more threads than logical CPUs (e.g. I/O-latency limited).
Always use your actual performance metric (e.g. time) to figure out the best configuration. If you want to understand resource utilization you must look at many different performance metrics: cycles, instructions, memory bandwidth, cache behavior, and so on.
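For example, rather than watching CPU usage, compare the metric you actually care about directly (a trivial sketch, with MyProgram standing in for your binary):
time mpiexec -n 20 MyProgram
time mpiexec -n 40 MyProgram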

Processors vs cores

Looking at my Linux machine (more /proc/cpuinfo), I see that I have 4 processors and each processor has 4 cores. Here's the full content of cpuinfo:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
stepping : 3
microcode : 0x20
cpu MHz : 3464.984
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts
bugs :
bogomips : 6185.83
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
stepping : 3
microcode : 0x20
cpu MHz : 3462.335
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts
bugs :
bogomips : 6189.39
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
stepping : 3
microcode : 0x20
cpu MHz : 3335.375
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts
bugs :
bogomips : 6189.68
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
stepping : 3
microcode : 0x20
cpu MHz : 3367.352
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts
bugs :
bogomips : 6189.74
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
I understand that a core is a processing unit, and that each processor contains 4 of these cores (for example, this question addresses that nicely), but I'm very confused regarding how they are actually used and accessed.
In all of the parallel-processing tools I've used (mostly in R), I can access up to 4 processors/cores (called processors or cores depending on the language of the package). Also, in my system monitor, I see 4 processors, and when one of them is active, it is fully active. I never see partial activity that would suggest I can get access to anything but one full processor (i.e., 4 cores within that processor). But having 4 processors with 4 cores each would suggest I can actually run 16 threads/processes at once.
What am I missing? Can I access each of these cores individually, or are the 4 cores of a given processor bound to that processor's activities?
EDIT: I just checked another machine in our server room and found that the snippet posted above is repeated 40 times (!!!), so I have processors 0 through 39. Each of those has 10 cores listed. Shouldn't I be able to run 400 parallel jobs instead of 40?
EDIT2: Investigating the server closer I see 40 blocks of output for each processor. Rather than posting the whole thing, here's a shortened version with some explanation of what I see:
processor: 0 (this ranges from 0 to 39)
physical id: 0 (ranges from 0 to 1, with 20 of each - this is telling me there are 2 physical Xeon processors present)
siblings: 20 (same for all - a bit confused here, as the Xeon is a 10-core processor)
core id: (ranges from 0 to 12, but missing 5, 6 and 7 - a total of 10 ids, but where are 5, 6 and 7? Also, there are 4 of each core id - four 0s, four 1s, etc.)
cpu cores: 10 (all have the same here, which makes sense but seems inconsistent with siblings)
The model name for the processors on this server is Xeon CPU E5-2650 v3, if that helps. Here's the web page for it: https://ark.intel.com/products/81705/Intel-Xeon-Processor-E5-2650-v3-25M-Cache-2_30-GHz
Depending on which kernel you have, each logical CPU (hardware thread) is identified as a separate processor.
This is probably the exact question you are looking for and it has a detailed explanation:
https://unix.stackexchange.com/questions/146051/number-of-processors-in-proc-cpuinfo
You have 1 physical processor (socket) with 4 cores.
Systems with more than one physical processor (multiple sockets) are uncommon outside of servers and supercomputers.
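For the dual-socket server from the edits, the arithmetic from its own /proc/cpuinfo is: 2 physical ids × 10 cpu cores × 2 hyperthreads per core = 40 logical processors, which is why it shows 40 processor entries and siblings: 20 per socket. So that machine can run 40 hardware threads concurrently, not 400.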

Why do QueryPerformanceCounter and GetTickCount not keep in pace?

The problem occurs under Windows XP, where I want to write a function like GetTickCount64, which does not exist on that platform. Here is my test code:
uint64_t GetTickCountEx()
{
#if _WIN32_WINNT > _WIN32_WINNT_WINXP
    return GetTickCount64();
#else
    // http://msdn.microsoft.com/en-us/library/windows/desktop/dn553408.aspx
    LARGE_INTEGER Frequency = {};
    LARGE_INTEGER Counter = {};
    BOOST_VERIFY(QueryPerformanceFrequency(&Frequency));
    BOOST_VERIFY(QueryPerformanceCounter(&Counter));
    return 1000 * Counter.QuadPart / Frequency.QuadPart;
#endif
}
for (int i = 0; ++i < 1000; Sleep(30000))
{
    const auto utc  = time(nullptr);     // System time
    const auto xp   = GetTickCount();    // API of Windows XP SP3
    const auto ex   = GetTickCountEx();  // Performance counter
    const auto diff = ex - xp;
    printf("%lld %I32u %I64u %I64u \n", utc, xp, ex, diff);
}
I cannot understand the result below. From this article, the reply from Angstrom does not seem correct. The last column suggests that the difference between GTC and QPC gets closer as time goes by! ... and will it reach zero some hours later?
So, my question is: Is my implementation of GetTickCount64 correct, and why?
1401778679 503258484 503355416 96932
1401778709 503288484 503385374 96890
1401778739 503318484 503415354 96870
1401778769 503348484 503445289 96805
1401778799 503378484 503475274 96790
1401778829 503408484 503505272 96788
1401778859 503438484 503535245 96761
1401778889 503468500 503565210 96710
1401778919 503498500 503595143 96643
1401778949 503528500 503625137 96637
1401778979 503558500 503655100 96600
1401779009 503588500 503685069 96569
1401779039 503618500 503715069 96569
1401779069 503648500 503745006 96506
1401779099 503678500 503774951 96451
1401779129 503708500 503804958 96458
1401779159 503738500 503834943 96443
1401779189 503768500 503864911 96411
1401779219 503798500 503894792 96292
1401779249 503828500 503924759 96259
1401779279 503858500 503954607 96107
1401779309 503888500 503984607 96107
1401779339 503918500 504014392 95892
1401779369 503948500 504044362 95862
CPU Core Info from coreinfo.exe:
Coreinfo v3.21 - Dump information on system CPU and memory topology
Copyright (C) 2008-2013 Mark Russinovich
Sysinternals - www.sysinternals.com
Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz
x86 Family 6 Model 37 Stepping 5, GenuineIntel
HTT * Hyperthreading enabled
HYPERVISOR - Hypervisor is present
VMX * Supports Intel hardware-assisted virtualization
SVM - Supports AMD hardware-assisted virtualization
EM64T * Supports 64-bit mode
SMX - Supports Intel trusted execution
SKINIT - Supports AMD SKINIT
NX * Supports no-execute page protection
SMEP - Supports Supervisor Mode Execution Prevention
SMAP - Supports Supervisor Mode Access Prevention
PAGE1GB - Supports 1 GB large pages
PAE * Supports > 32-bit physical addresses
PAT * Supports Page Attribute Table
PSE * Supports 4 MB pages
PSE36 * Supports > 32-bit address 4 MB pages
PGE * Supports global bit in page tables
SS * Supports bus snooping for cache operations
VME * Supports Virtual-8086 mode
RDWRFSGSBASE - Supports direct GS/FS base access
FPU * Implements i387 floating point instructions
MMX * Supports MMX instruction set
MMXEXT - Implements AMD MMX extensions
3DNOW - Supports 3DNow! instructions
3DNOWEXT - Supports 3DNow! extension instructions
SSE * Supports Streaming SIMD Extensions
SSE2 * Supports Streaming SIMD Extensions 2
SSE3 * Supports Streaming SIMD Extensions 3
SSSE3 * Supports Supplemental SIMD Extensions 3
SSE4a - Supports Sreaming SIMDR Extensions 4a
SSE4.1 * Supports Streaming SIMD Extensions 4.1
SSE4.2 * Supports Streaming SIMD Extensions 4.2
AES - Supports AES extensions
AVX - Supports AVX intruction extensions
FMA - Supports FMA extensions using YMM state
MSR * Implements RDMSR/WRMSR instructions
MTRR * Supports Memory Type Range Registers
XSAVE - Supports XSAVE/XRSTOR instructions
OSXSAVE - Supports XSETBV/XGETBV instructions
RDRAND - Supports RDRAND instruction
RDSEED - Supports RDSEED instruction
CMOV * Supports CMOVcc instruction
CLFSH * Supports CLFLUSH instruction
CX8 * Supports compare and exchange 8-byte instructions
CX16 * Supports CMPXCHG16B instruction
BMI1 - Supports bit manipulation extensions 1
BMI2 - Supports bit manipulation extensions 2
ADX - Supports ADCX/ADOX instructions
DCA - Supports prefetch from memory-mapped device
F16C - Supports half-precision instruction
FXSR * Supports FXSAVE/FXSTOR instructions
FFXSR - Supports optimized FXSAVE/FSRSTOR instruction
MONITOR * Supports MONITOR and MWAIT instructions
MOVBE - Supports MOVBE instruction
ERMSB - Supports Enhanced REP MOVSB/STOSB
PCLULDQ - Supports PCLMULDQ instruction
POPCNT * Supports POPCNT instruction
LZCNT - Supports LZCNT instruction
SEP * Supports fast system call instructions
LAHF-SAHF * Supports LAHF/SAHF instructions in 64-bit mode
HLE - Supports Hardware Lock Elision instructions
RTM - Supports Restricted Transactional Memory instructions
DE * Supports I/O breakpoints including CR4.DE
DTES64 * Can write history of 64-bit branch addresses
DS * Implements memory-resident debug buffer
DS-CPL * Supports Debug Store feature with CPL
PCID * Supports PCIDs and settable CR4.PCIDE
INVPCID - Supports INVPCID instruction
PDCM * Supports Performance Capabilities MSR
RDTSCP * Supports RDTSCP instruction
TSC * Supports RDTSC instruction
TSC-DEADLINE - Local APIC supports one-shot deadline timer
TSC-INVARIANT * TSC runs at constant rate
xTPR * Supports disabling task priority messages
EIST * Supports Enhanced Intel Speedstep
ACPI * Implements MSR for power management
TM * Implements thermal monitor circuitry
TM2 * Implements Thermal Monitor 2 control
APIC * Implements software-accessible local APIC
x2APIC - Supports x2APIC
CNXT-ID - L1 data cache mode adaptive or BIOS
MCE * Supports Machine Check, INT18 and CR4.MCE
MCA * Implements Machine Check Architecture
PBE * Supports use of FERR#/PBE# pin
PSN - Implements 96-bit processor serial number
PREFETCHW * Supports PREFETCHW instruction
Maximum implemented CPUID leaves: 0000000B (Basic), 80000008 (Extended).
Logical to Physical Processor Map:
*-*- Physical Processor 0 (Hyperthreaded)
-*-* Physical Processor 1 (Hyperthreaded)
Logical Processor to Socket Map:
**** Socket 0
Logical Processor to NUMA Node Map:
**** NUMA Node 0
Logical Processor to Cache Map:
*-*- Data Cache 0, Level 1, 32 KB, Assoc 8, LineSize 64
*-*- Instruction Cache 0, Level 1, 32 KB, Assoc 4, LineSize 64
*-*- Unified Cache 0, Level 2, 256 KB, Assoc 8, LineSize 64
-*-* Data Cache 1, Level 1, 32 KB, Assoc 8, LineSize 64
-*-* Instruction Cache 1, Level 1, 32 KB, Assoc 4, LineSize 64
-*-* Unified Cache 1, Level 2, 256 KB, Assoc 8, LineSize 64
**** Unified Cache 2, Level 3, 3 MB, Assoc 12, LineSize 64
You cannot compare the two timing sources; they have drastically different implementations in PCs.
GetTickCount() is derived from the clock tick interrupt, a signal that's generated by the real-time clock. Traditionally a dedicated chip, originally the Motorola MC146818, nowadays integrated in the south-bridge. It has the kind of oscillator that was used in watches, crystal stabilized and usually running at 32768 Hertz. This oscillator keeps running when the machine power is turned off, running off a lithium battery or a super-capacitor.
So resolution is quite poor, but it is made very accurate with very good long-term stability by periodically resynchronizing the clock with time provided by a time server, most Windows machines use time.windows.com. Review GetSystemTimeAdjustment() for details.
QueryPerformanceCounter() uses a frequency source available in the chipset. Traditionally the 8253 counter running at 1193182 Hz, nowadays the HPET timer. The HAL (Hardware Abstraction Layer) allows a system integrator to pick any frequency source he's got available. Using the CPU clock is not unusual in cheaper designs.
So resolution is very high, but it is inaccurate and there is no mechanism to calibrate this timer. Being off by 800 ppm from the reported QPF is not unusual. This timer should only ever be used for short interval measurements, the kind that a profiler would use for example.
So no, using QueryPerformanceCounter() as an alternative for GetTickCount64() isn't a very good idea, unless you can live with the inaccuracy. Technically you can synthesize your own 64-bit counter, as long as you keep track of GetTickCount() overflowing. You could, say, increment the coarse count when the previous value was negative and the new value is positive, indicating that it overflowed. The only requirement is that you sample GetTickCount() often enough to see the transition, at least once in 24 days.
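A minimal sketch of that idea (a variant that detects the wrap by the value decreasing rather than by the sign flip; not thread-safe, and it must be called at least once per ~49.7-day wrap interval):
#include <windows.h>
#include <stdint.h>

// Extend GetTickCount() to 64 bits by counting wraparounds.
// Not thread-safe; call at least once per wrap interval so no wrap is missed.
uint64_t GetTickCount64Compat()
{
    static DWORD lastLow = 0;
    static uint64_t high = 0;

    const DWORD low = GetTickCount();
    if (low < lastLow)          // the 32-bit tick counter wrapped around
        high += 1ull << 32;
    lastLow = low;
    return high | low;
}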
In the same thread as the one you linked, the reply from Raymond Chen specifically says that you should consider neither to be the time 'since' anything; only time differences (intervals) are relevant quantities. Therefore, what you should be testing is intervals: for instance, record the start value(s) before your loop, and each time through the loop compute the elapsed time since the start.
