Is CPU time relevant to Hyperthreading? - openmp

Is increased CPU time (as reported by the time CLI command) indicative of inefficiency when hyperthreading is used (e.g. time spent in spinlocks or on cache misses), or is it possible that the CPU time is inflated by the odd nature of HT (e.g. real cores being busy, so HT can't kick in)?
I have quad-core i7, and I'm testing trivially-parallelizable part (image to palette remapping) of an OpenMP program — with no locks, no critical sections. All threads access a bit of read-only shared memory (look-up table), but write only to their own memory.
cores   real (s)   CPU (s)
1       5.8        5.8
2       3.7        5.9
3       3.1        6.1
4       2.9        6.8
5       2.8        7.6
6       2.7        8.2
7       2.6        9.0
8       2.5        9.7
I'm concerned that the amount of CPU time used increases rapidly as the number of cores exceeds 1 or 2.
I imagine that in an ideal scenario the CPU time wouldn't increase much (the same amount of work just gets distributed over multiple cores).
Does this mean there's 40% of overhead spent on parallelizing the program?
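The 40% presumably comes from comparing the 8-thread CPU time with the single-thread CPU time; a quick check of that reading, using only the numbers from the table above:

# CPU seconds from the table above (thread count -> CPU time)
cpu_time = {1: 5.8, 2: 5.9, 3: 6.1, 4: 6.8, 5: 7.6, 6: 8.2, 7: 9.0, 8: 9.7}
baseline = cpu_time[1]
for n, t in cpu_time.items():
    extra = t - baseline            # CPU seconds a 1-thread run would not have spent
    print(f"{n} threads: {extra / t:5.1%} of the total CPU time is extra")
# at 8 threads: (9.7 - 5.8) / 9.7 is about 40%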

It's quite possibly an artefact of how CPU time is measured. A trivial example: if you run a 100 MHz CPU and a 3 GHz CPU for one second each, each will report that it ran for one second. The second CPU might do 30 times more work, but it still takes one second.
With hyperthreading, a reasonable (if not quite accurate) model would be that one core can run either one task at, let's say, 2000 MHz, or two tasks at, let's say, 1200 MHz each. Running two tasks, it does only 60% of the work per thread, but 120% of the work for both threads together, a 20% improvement. But if the OS asks how many seconds of CPU time were used, the first case reports "1 second" after each second of real time, while the second reports "2 seconds".
So the reported CPU time goes up. If it increases by less than a factor of two, overall performance is improved.
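Plugging the illustrative numbers from that model into a few lines makes the effect concrete (a sketch using the made-up 2000 MHz / 1200 MHz figures above, not real measurements):

alone_mhz, shared_mhz = 2000.0, 1200.0   # illustrative speeds from the model above
work = 1000.0                            # arbitrary work units per task, two tasks total

# Run the two tasks one after the other on a single hardware thread:
real_seq = 2 * work / alone_mhz          # wall-clock seconds
cpu_seq = real_seq                       # one logical CPU busy the whole time

# Run them side by side on the two hyperthreads of one core:
real_ht = work / shared_mhz              # both finish together
cpu_ht = 2 * real_ht                     # two logical CPUs each report being busy

print(f"wall clock: {real_seq:.2f}s -> {real_ht:.2f}s (speed-up {real_seq / real_ht:.2f}x)")
print(f"reported CPU time: {cpu_seq:.2f}s -> {cpu_ht:.2f}s")

The wall clock improves by the model's 20% while the reported CPU time grows by roughly two thirds, which is the same general shape as the CPU-time column in the question.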

Quick question - are you running the genuine time program, /usr/bin/time, or the bash built-in command of the same name? I'm not sure that it matters; they look very similar.
Looking at your table of numbers I sense that the processed data set (i.e. the input plus all the output data) is reasonably large overall (bigger than the L2 cache), and that the processing per data item is not that lengthy.
The numbers show a nearly linear improvement from 1 to 2 cores, but that is tailing off significantly by the time you're using 4 cores. The hyperthreaded cores are adding virtually nothing. This means that something shared is being contended for. Your program has free-running threads, so that thing can only be memory (L3 cache and main memory on the i7).
This sounds like a typical example of being I/O bound rather than compute bound, the I/O in this case being to/from L3 cache and main memory. L2 cache is 256 KB, so I'm guessing that the size of your input data plus one set of results and all intermediate arrays is bigger than 256 KB.
Am I near the mark?
Generally speaking, when considering how many threads to use, you have to take shared cache and memory speeds, and data set sizes, into account. That can be a right bugger, because you have to work it out at run time, which is a lot of programming effort (unless your hardware configuration is fixed).
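One pragmatic way to "work it out at run time" is to time a fixed slice of the real workload at each candidate worker count before committing to one. A minimal sketch of that idea in Python, with a process pool standing in for the OpenMP threads (the dummy kernel and chunk sizes are placeholders):

import os
import time
from multiprocessing import Pool

def kernel(n):
    # Stand-in for the real per-chunk work (e.g. remapping one strip of the image).
    return sum(i * i for i in range(n))

def throughput(workers, chunks=64, chunk_size=200_000):
    # Time a fixed slice of work with this many worker processes.
    with Pool(workers) as pool:
        start = time.perf_counter()
        pool.map(kernel, [chunk_size] * chunks)
        return chunks / (time.perf_counter() - start)

if __name__ == "__main__":
    candidates = range(1, (os.cpu_count() or 1) + 1)
    best = max(candidates, key=throughput)
    print(f"using {best} workers for the full run")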

Related

Type of CPU load to perform constant relaxed work

I'm trying to figure out how to program a certain type of CPU load that keeps the CPU working constantly, but only under moderate stress.
The only approach I know for loading a CPU without running it at its maximum possible performance is to alternate between giving the CPU something to do and sleeping for some time. E.g. to achieve 20% CPU usage, do some computation that takes e.g. 0.2 seconds and then sleep for 0.8 seconds; the CPU usage will then be roughly 20%.
However, this essentially means the CPU will be jumping between max performance and idle all the time.
I wrote a small Python program where I create a process for each CPU core, set its affinity so that each process runs on a designated core, and give it some absolutely meaningless load:
def actual_load_cycle():
    x = list(range(10000))
    del x
while repeating calls to this procedure in a cycle and then sleeping for some time, to ensure the working time is N% of the total time:
while 1:
    timer.mark_time(timer_marker)
    for i in range(coef):
        actual_load_cycle()
    elapsed = timer.get_time_since(timer_marker)
    # now we need to sleep for some time. The elapsed time is CPU_LOAD_TARGET% of 100%.
    time_to_sleep = elapsed / CPU_LOAD_TARGET * (100 - CPU_LOAD_TARGET)
    sleep(time_to_sleep)
It works well, keeping the load within 7% of the desired CPU_LOAD_TARGET value; I don't need a precise amount of load.
But it drives the temperature of the CPU very high: with CPU_LOAD_TARGET=35 (the real CPU usage reported by the system is around 40%) the CPU temps go up to 80 degrees Celsius.
Even with a minimal target like 5%, the temps still spike, just not as much: up to 72-73 degrees.
I believe the reason is that during the work part of each cycle the CPU runs as hard as it can, and it doesn't cool down fast enough while sleeping afterwards.
But when I'm running a game, like Uncharted 4, the CPU usage as measured by MSI Afterburner is 42-47%, but the temperatures stay under 65 degrees.
How can I achieve similar results? How can I program a load that keeps CPU usage high while the work itself stays relatively relaxed, as it is done e.g. in games?
Thanks!
The heat dissipation of a CPU mainly depends on its power consumption, which in turn depends strongly on the workload, and more precisely on the instructions being executed and the number of active cores. Modern processors are very complex, so it is very hard to predict the power consumption for a given workload, especially when the executed code is Python code running in the CPython interpreter.
There are many factors that can impact the power consumption of a modern processor. The most important one is frequency scaling. Mainstream x86-64 processors adapt the frequency of a core based on the kind of computation being done (e.g. use of wide SIMD floating-point vectors like the ZMM registers of AVX-512F vs. scalar 64-bit integers), the number of active cores (the more active cores, the lower the frequency), the current temperature of the core, the time spent executing instructions vs. sleeping, and so on.
On modern processors the memory hierarchy can also draw a significant amount of power, so operations involving the memory controller, and more generally the RAM, can consume more power than ones operating only on in-core registers. In fact, depending on the instructions actually executed, the processor needs to enable/disable parts of its integrated circuit (e.g. SIMD units, integrated GPU, etc.), and not all of them can be enabled at the same time due to TDP constraints (see Dark silicon). Floating-point SIMD instructions tend to use more energy than integer SIMD instructions. Something even weirder: the consumption can actually depend on the input data, since transistors may switch more often from one state to another with some data (researchers found this while running matrix-multiplication kernels on different kinds of platforms with different kinds of input). The power is adapted automatically by the processor, since it would be insane (if even possible) for engineers to consider every possible case and every possible dynamic workload.
One of the cheapest x86 instructions is NOP, which basically means "do nothing". That being said, the processor can still run at its highest turbo frequency while executing a loop of NOPs, resulting in a pretty high power consumption. In fact, some processors can run NOPs in parallel on multiple execution units of a given core, keeping all the available ALUs busy. Funny point: running dependent instructions with a high latency might actually reduce the power consumption of the target processor.
The MWAIT/MONITOR instructions provide hints allowing the processor to enter an implementation-dependent optimized state. This includes a lower power consumption, possibly due to a lower frequency (e.g. no turbo) and the use of sleep states. Basically, your processor can sleep for a very short time so as to reduce its power consumption, and can then sustain a higher frequency for longer thanks to the lower power draw / heat dissipation beforehand. The behaviour is similar to humans: the deeper the sleep, the faster the processor can go afterwards, but also the longer it takes to (completely) wake up. The bad news is that these instructions require very high privileges AFAIK, so you basically cannot use them from user-land code. There are user-land counterparts, UMWAIT and UMONITOR, but AFAIK they are only implemented on very recent processors. For more information, please read this post.
In practice, the default CPython interpreter consumes a lot of power because it makes a lot of memory accesses (including indirections and atomic operations), executes a lot of branches that need to be predicted by the processor (which has special power-hungry units for that), and performs a lot of dynamic jumps across a large code base. The kind of pure-Python code executed does not reflect the actual instructions executed by the processor, since most of the time is spent in the interpreter itself. Thus, I think you need to use a lower-level language like C or C++ to better control the kind of workload being executed. Alternatively, you can use a JIT compiler like Numba so as to have better control while still writing Python code (though not pure-Python code any more). Still, keep in mind that a JIT can generate many unwanted instructions that result in an unexpectedly higher power consumption, or conversely it can optimize away trivial code like a sum from 1 to N (simplifying it to just an N*(N+1)/2 expression).
Here is an example of code:
import numba as nb

def test(n):
    s = 1
    for i in range(1, n):
        s += i
        s *= i
        s &= 0xFF
    return s

pythonTest = test
numbaTest = nb.njit('(int64,)')(test)  # Compile the function

pythonTest(1_000_000_000)  # takes about 108 seconds
numbaTest(1_000_000_000)   # takes about 1 second
In this code, the Python function takes about 108 times longer to execute than the Numba function on my machine (i5-9600KF processor), so one might expect roughly 108 times more energy to execute the Python version. In practice it is even worse: the pure-Python function causes the target core to draw a much higher power (not just more energy overall) than the equivalent compiled Numba implementation on my machine. This can be clearly seen on the temperature monitor:
Base temperature when nothing is running: 39°C
Temperature during the execution of pythonTest: 55°C
Temperature during the execution of numbaTest: 46°C
Note that my processor was running at 4.4-4.5 GHz in all cases (due to the performance governor being chosen). The temperature is read after 30 seconds in each case and it is stable (thanks to the cooling system). The functions are run in a while True loop during the benchmark.
Note that games often use multiple cores and do a lot of synchronization (at least waiting for the rendering part to complete). As a result, the target processor may run at a slightly lower turbo frequency (due to TDP constraints) and stay at a lower temperature because of the many small sleeps (which save energy).
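One way to mimic that pattern is to chop the work into frame-sized slices with a short sleep at the end of each frame, instead of seconds-long bursts followed by seconds-long sleeps. A rough sketch of such a frame-paced loop (the frame length, duty cycle and dummy workload below are arbitrary illustrative choices):

import time

FRAME = 1 / 60                    # aim for 60 "frames" per second
BUSY_FRACTION = 0.40              # target ~40% of each frame doing work

def small_work_unit():
    return sum(i * i for i in range(2_000))   # a tiny, meaningless computation

while True:
    frame_start = time.perf_counter()
    deadline = frame_start + FRAME * BUSY_FRACTION
    while time.perf_counter() < deadline:     # work only for part of the frame
        small_work_unit()
    # sleep away the rest of the frame, so bursts stay a few milliseconds long
    remaining = FRAME - (time.perf_counter() - frame_start)
    if remaining > 0:
        time.sleep(remaining)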

What is Causing Big Performance Differential Between Different Zen 1 Processors for a Simple Benchmark?

I am investigating the performance differentials of a very simple benchmark across three different Zen 1 processors, and I am observing massive differences in instructions per cycle and L2 cache misses between the Threadripper and the Epyc processors.
Question
On the Epyc processors, the instructions per cycle (IPC) are much lower when the working set size is bigger than the last-level cache (LLC).
Why can the performance be so different between the same generation Epyc and Threadripper processors?
Benchmark
https://github.com/llvm/llvm-test-suite/tree/master/MultiSource/Benchmarks/Olden/treeadd
The treeadd benchmark in the Olden suite. Source available as part of the LLVM test suite.
The benchmark recursively creates a full binary tree of 2^N - 1 nodes, where N is the input, then performs the traversal 100 times, doing read-only operations on each node. The benchmark is compiled using clang from an LLVM trunk fork that is a few months old.
The benchmark is a single-threaded application running on a single core.
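For readers who do not want to dig through the LLVM tree, the access pattern is roughly the following (a simplified Python restatement of what treeadd does, not the benchmarked code itself, with a smaller default N because pure Python is slow):

class Node:
    __slots__ = ("left", "right", "value")
    def __init__(self, left=None, right=None):
        self.left, self.right, self.value = left, right, 1

def build(depth):
    # Full binary tree with 2**depth - 1 nodes.
    if depth == 0:
        return None
    return Node(build(depth - 1), build(depth - 1))

def tree_add(node):
    # Read-only pointer-chasing traversal; this is what stresses the caches.
    if node is None:
        return 0
    return tree_add(node.left) + tree_add(node.right) + node.value

N = 18          # the runs below use N = 22; kept smaller here for illustration
root = build(N)
total = sum(tree_add(root) for _ in range(100))
print(total)    # 100 * (2**N - 1)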
Systems Under Test
System 1
CPU: Ryzen Threadripper 1950x (https://en.wikichip.org/wiki/amd/ryzen_threadripper/1950x)
Memory: 32GB DDR4 2666
OS: Ubuntu 20.04, kernel 5.4.0
System 2
CPU: Epyc 7601 (https://en.wikichip.org/wiki/amd/epyc/7601)
Memory: 128 GB DDR4 2133
OS: Ubuntu 20.04, kernel 5.8.0
System 3
CPU: Epyc 7371 (https://en.wikichip.org/wiki/amd/epyc/7371)
Memory: 128 GB DDR4 2133
OS: Ubuntu 20.04, kernel 5.4.48
Performance Summary
When running the benchmark with N = 22, the following table is obtained. (Wall Time, IPC and Cache References/Misses):
Two binaries are built, one on Threadripper (TR bin), the other on Epyc 7601 (Epyc bin).
Regardless of which binary is used, Epyc IPCs are much lower than those on Threadripper. The wall time is there for reference, but because the frequencies of the processors are different, they should probably not be directly compared. Using taskset to pin the benchmark on one core does not produce significantly different numbers.
By the looks of it, the IPC differential is caused by the dramatic differences in L2 misses (perf event l2_cache_req_stat.ls_rd_blk_c). A usual suspect in this situation is that the L2 sizes per core are different. However, according to the documentation, all of the processors have the same 512 KB of L2 cache per core. Furthermore, I ran lmbench and obtained the read latencies on the three systems, shown in the figure below.
The two Threadripper charts are identical, shown twice for ease of comparison both vertically and horizontally. Note that the "memory wall" is hit at about the same array sizes, further indicating that these processors have identically sized caches per core. On the other hand, the access latency for different strides hints at the possibility that Threadripper has an unusually effective prefetcher. However, turning off the prefetcher on Threadripper only doubles the L2 miss rate (seen in the first figure), and it is still nowhere near the L2 miss rates on the Epyc machines.
Lastly, the working set size is varied to test the effect of cache sizes. This last figure shows the IPC when the working set sizes are varied. The benchmark is altered to run 1000 iterations to make sure it runs long enough to get stable numbers.
Although the Epyc machines seem to have 8MB LLC (per CCX of 4 cores), the performance falls off a cliff when the working set size goes from 3MB to 6MB.
Different Ways of Running the Benchmark
Other than simply running the benchmark, I tried two other ways (example invocations below):
Using taskset to pin the benchmark to a specific core.
Using numactl to pin the benchmark to a specific core, with the membind flag to use the memory closest to the core.
Neither produced significantly different profiles.
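(For reference, those pinned runs take the general form taskset -c 0 ./treeadd and numactl --physcpubind=0 --membind=0 ./treeadd; the core and node numbers shown here are placeholders, not the exact ones used.)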
Guesses
The evidence indicates that the cache sizes on the processors are the same. Hence the massive difference in L2 miss rate may be due to partitioning of the L3 cache. Maybe Epyc in some situations limits the per-core L3 to 4 MB?
Summary
Significant IPC differences are observed for the same benchmark on Threadripper and Epyc machines. The difference seems to be caused by the unusually high L2 miss rate on the Epyc processors, even though the caches have the same sizes per core.
What could have caused the massive differences in L2 misses? Or maybe there is something else inducing the IPC difference?

Should CPU time always be identical between executions of same code?

My understanding of CPU time is that it should always be the same between executions of the same code on the same machine: it should require an identical number of CPU cycles every time.
But I'm running some tests now, executing a basic echo "Hello World", and it's giving me anywhere from 0.003 to 0.005 seconds.
Is my understanding of CPU time wrong, or is there an issue in my measurement?
Your understanding is completely wrong. Real-world computers running modern OSes on modern CPUs are not simple, theoretical abstractions. There are all kinds of factors that can affect how much CPU time code requires to execute.
Consider memory bandwidth. On a typical modern machine, all the tasks running on the machine's cores are competing for access to the system memory. If the code is running at the same time code on another core is using lots of memory bandwidth, that may result in accesses to RAM taking more clock cycles.
Many other resources are shared as well, such as caches. Say the code is frequently interrupted to let other code run on the core. That will mean that the code will frequently find the cache cold and take lots of cache misses. That will also result in the code taking more clock cycles.
Let's talk about page faults as well. The code itself may or may not already be in memory when it starts running. Even if it is, you may or may not take soft page faults (to update the operating system's tracking of what memory is being actively used), depending on when that page last took a soft page fault or how long ago it was loaded into RAM.
And your basic hello world program is doing I/O to the terminal. The time that takes can depend on what else is interacting with the terminal at the time.
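If you want to see this for yourself, timing the exact same work repeatedly inside one process already shows the jitter (a small sketch; the workload is arbitrary):

import time

def work():
    return sum(i * i for i in range(200_000))

for run in range(10):
    start = time.process_time()     # CPU time of this process, not wall-clock time
    work()
    print(f"run {run}: {time.process_time() - start:.6f} CPU seconds")

The reported CPU seconds typically wobble by a few percent from run to run, for exactly the reasons above.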
The biggest effects on modern systems include:
virtual memory lazily paging in code and data from disk if it's not hot in pagecache. (First run of a program tends to have a lot more startup overhead.)
CPU frequency isn't fixed. (idle / turbo speeds. grep MHz /proc/cpuinfo).
CPU caches can be hot or not
(for very short intervals) an interrupt randomly happening or not in your timed region.
So even if cycles were fixed (which they very much are not), you wouldn't see equal times.
Your assumption is not totally wrong, but it only applies to core clock cycles for individual loops, and only to cases that don't involve any memory access. (e.g. data already hot in L1d cache, code already hot in L1i cache inside a CPU core). And assuming no interrupt happens while the timed loop is running.
Running a whole program is a much larger-scale operation and will involve shared resources (and possible contention for them) like access to main memory. And as @David pointed out, a write system call to print a string on a terminal emulator means communication with another process; that can be slow and involves waking the other process up, if your program ends up waiting for it. Redirecting to /dev/null or a regular file would remove that, or just closing stdout (./hello >&-) would make your write system call return -EBADF (on Linux).
Modern CPUs are very complex beasts. You presumably have an Intel or AMD x86-64 CPU with out-of-order execution, and a dozen or so buffers for incoming / outgoing cache lines, allowing it to track about that many outstanding cache misses (memory-level parallelism). And 2 levels of private cache per core, and a shared L3 cache. Good luck predicting an exact number of clock cycles for anything but the most controlled conditions.
But yes, if you do control the condition, the same small loop will typically run at the same number of core clock cycles per iteration.
However, even that's not always the case. I've seen cases where the same loop seems to have two stable states for how the CPU schedules its instructions. Different entry-condition quirks can lead to an ongoing speed difference over millions of loop iterations.
I've seen this occasionally when microbenchmarking stuff on modern Intel CPUs like Sandybridge and Skylake. It's usually not clear exactly what the two stable states are, and what exactly is causing the bottleneck, even with the help of performance counters and https://agner.org/optimize
In one case I remember, an interrupt tended to get the loop into the efficient mode of execution. @BeeOnRope was measuring slow cycles/iteration using RDPMC over a short interval (or maybe RDTSC with the core clock fixed = TSC reference clocks), while I was measuring it running faster by using a really large repeat count and just using perf stat on the whole program (which was a static executable with just that one loop, written by hand in asm). And @Bee was able to reproduce my results by increasing the iteration count so that an interrupt would happen inside the timed region; returning from the interrupt tended to get the CPU out of that non-optimal uop-scheduling pattern, whatever it was.

How to reduce time taken for large calculations in MATLAB

When using the desktop PCs at my university (which have 4 GB of RAM), calculations in MATLAB are fairly speedy, but on my laptop (which also has 4 GB of RAM) the exact same calculations take ages. My laptop is much more modern, so I assume it has a similar clock speed to the desktops.
For example, I have written a program that calculates the solid angle subtended by 50 disks at 500 points. On the desktop PCs this calculation takes about 15 seconds; on my laptop it takes about 5 minutes.
Is there a way to reduce the time taken to perform these calculations? E.g., can I allocate more RAM to MATLAB, or can I boot up my PC in a way that optimises it for running MATLAB? I'm thinking that if the processor on my laptop is also doing calculations for other programs, this will slow down the MATLAB calculations. I've closed all other applications, but I know there's probably a lot of stuff going on that I can't see. Can I boot my laptop in a way that has fewer of these things going on in the background?
I can't modify the code to make it more efficient.
Thanks!
You might run some of my benchmarks which, along with example results, can be found via:
http://www.roylongbottom.org.uk/
The CPU core design used at any particular point in time is essentially the same across Pentiums, Celerons, Core 2s, Xeons and others; the only differences are the L2/L3 cache sizes and external memory bus speeds. So you can compare most results with those of similar-vintage 2 GHz CPUs. Things to try, besides simple number-crunching tests:
1 - Try a memory test, such as my BusSpeed benchmark, to show that the caches are being used and that RAM is not dead slow.
2 - Assuming Windows, check in Task Manager that the offending program is the one using most of the CPU time, and that with the program not running CPU utilisation is around zero.
3 - Check that the CPU temperature is not too high, for example with SpeedFan (free download).
4 - If the disk light is flashing, too much RAM might be in use, with some of it being swapped in and out; the Task Manager Performance tab would show this. Increasing RAM demands can be checked with some of my reliability tests.
There are many things that go into computing power besides RAM. You mention processor speed, but there is also number of cores, GPU capability and more. Programs like MATLAB are designed to take advantage of features like parallelism.
Summary: You can't compare only RAM between two machines and expect to know how they will perform with respect to one another.
Side note: 4 GB is not very much RAM for a modern laptop.
Firstly you should perform a CPU performance benchmark on both computers.
Modern operating systems usually apply the most aggressive power-management schemes when running on a laptop. This usually means turning off one or more cores, or setting them to a very low frequency. For example, a quad-core CPU that normally runs at 2.0 GHz could be throttled down to 700 MHz on one core while the other three are basically put to sleep, whenever it is on battery. (The numbers are not taken from a real example.)
The OS manages the CPU frequency dynamically, tweaking it on a timescale of seconds. You will need a software monitoring tool that actually asks for the CPU frequency every second (without doing busy work itself) in order to know if this is the case.
Plugging in the laptop will make the OS use a less aggressive power management scheme.
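A quick way to check whether this is happening is to poll the reported frequency once a second while MATLAB is busy, for example with the third-party psutil package (psutil is a suggestion here, not something mentioned above):

import time
import psutil                      # pip install psutil

for _ in range(30):
    freq = psutil.cpu_freq()       # reported frequency in MHz
    load = psutil.cpu_percent(interval=None)
    print(f"{freq.current:7.0f} MHz   {load:5.1f}% CPU")
    time.sleep(1.0)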
(If this is found to be unrelated to MATLAB, please "flag" this post and ask moderator to move this question to the SuperUser site.)

Why is parallel compilation performance with HT worse than without?

I've made several measurements of the compilation time of Wine with HyperThreading enabled and disabled in the BIOS on my Core i7 930 @ 2.8 GHz (quad-core) running Linux 2.6.39 x86_64. Each measurement went like this:
git clean -xdf
./configure --prefix=/usr
time make -j$N
where N is a number from 1 to 8.
Here're the results ("speed" is 60/real from time(1)):
Here the blue line corresponds to HT disabled and the purple one to HT enabled. It appears that when HT is enabled, using 1-4 threads is slower than without HT. I guess this might be related to the kernel not distributing the processes across different cores, and instead reusing the second hardware threads of already-busy cores.
So, my question: how can I make the kernel prefer giving each core one process over adding more processes to an already-busy core's second hardware thread? Or, if my reasoning is wrong, how can I get performance with HT that is no worse than without HT for 1-4 processes running in parallel?
Hyper-threading on Intel chips is implemented as duplication of some of the elements of a physical core, but without enough electronics to be an independent core (e.g. the two threads may share an instruction decoder; I can't recall the specifics of Intel's implementation).
Imagine a physical core with HT as 1.5 physical cores that your OS sees as 2 real cores. This doesn't equate to 1.5x speed, though (that can vary depending on the use case).
In your example, non-HT is faster up to 4 threads because none of the cores is sharing work with its HT sibling. You see a flat line above 4 threads because now you only have 4 execution threads, plus a little extra overhead from context switching between the extra processes.
In the HT example you are a bit slower up to 4 threads, probably because some of those threads are being assigned to a real core and its HT sibling, so you lose performance as those two execution threads share physical resources. Above 4 threads you are seeing the benefit of the extra execution threads, but you also see the beginning of diminishing returns.
You could probably match the performance of both cases for up to 4 threads, but likely not with a compilation job: too many processes are being spawned for processor affinity to be set up, I think. If you instead ran a real parallel job using OpenMP or MPI with X <= 4 threads bound to specific real CPU cores, I think you'd see similar performance between HT off and HT on.
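(For reference, with a recent OpenMP runtime such binding can be requested with the standard affinity environment variables, e.g. OMP_NUM_THREADS=4 OMP_PROC_BIND=close OMP_PLACES=cores ./your_app, where your_app is a placeholder binary name.)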
Given a number of threads <= the number of real cores, using HT should be slower because (crudely speaking) you are potentially cutting the speed of your cores in half. [1]
Keep in mind that generally more cores is NOT better than FASTER cores. In fact, the only reason so much work was put into developing multi-core systems is that it became increasingly difficult to make faster and faster ones. So if you cannot have a 20 Ghz processor, then 8 x 3 Ghz ones will have to do.
HT is, I believe, primarily intended as an advantage in contexts where each thread is not necessarily gobbling as much processor as it can; it's doing some particular task that's governed by interaction with a user, such as CAD work, video games, etc.; these are the kinds of applications that benefit from multi-tasking. By contrast, server platforms (wherein the primary applications tend to thread independent tasks that are not governed by a dependence on anything else, and hence are optimally run as fast as possible) do not benefit directly from multi-tasking; they benefit from speed. make is in the same category, although with a perhaps greater degree of interdependence between threads, which is why you see an advantage for HT only from 4 to 8 threads.
[1] This is a simplification. HT doesn't simply double the number of cores and halve their speed, but whatever the dynamics, the total number of processor cycles per second for the system is not improved. It's the same, only more fragmented.

Resources