Simultaneous multithreading processor

I have some doubts and would appreciate it if anyone could help me understand.
Assume I have a processor with 8 cores, with 4-way simultaneous multithreading (SMT) provided for each core. I have learned that with SMT, each core can issue multiple instructions per cycle, drawn from different threads or from a single thread. So each core should be able to issue at most 4 instructions (as it is 4-way SMT) in every cycle. Hence, as there are 8 cores in total on the chip, in the ideal case it should be able to issue 8*4 = 32 instructions every cycle, if all the issue slots (i.e. 4 per core) are stall-free.
Is there anything wrong in my reasoning or understanding? I am not an expert! So I would like to discuss it and learn more. :) Thanks in advance.

An n-way SMT processor can execute instructions from up to n threads. That does not imply any limit on how many instructions in total it can issue in each cycle. When you want to specify that limit, you talk about an n-way superscalar or n-way issue processor.
E.g. Intel's Core i7 is a 4-way superscalar and 2-way SMT processor.
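To make the distinction concrete, here is a toy Go calculation based on the Core i7 example above (the 4-core count is an assumption for illustration): SMT ways and issue width are independent parameters, so they bound different things.

package main

import "fmt"

func main() {
	const (
		cores      = 4 // assumed core count for the i7 example
		smtWays    = 2 // 2-way SMT (Hyper-Threading)
		issueWidth = 4 // 4-way superscalar: instructions issued per core per cycle
	)
	// SMT ways bound how many threads can supply instructions to a core;
	// issue width bounds how many instructions the core issues per cycle.
	fmt.Println("max hardware threads:", cores*smtWays)              // 8
	fmt.Println("peak issued instructions/cycle:", cores*issueWidth) // 16
}

In the question's scenario the two numbers happen to coincide (8 cores * 4 SMT ways = 32 threads, and if each core also issued 4 instructions per cycle, 32 instructions per cycle), which is why it is easy to conflate them.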

Related

Performance of dependent pre/post-incremented memory accesses

My question primarily applies to Firestorm/Icestorm (because that's the hardware I have), but I am curious about what other representative ARM cores do too. ARM has unusual pre- and post-incremented addressing modes. If I have (for instance) two post-incremented loads from the same register, will the second depend on the first, or is the CPU smart enough to perform them in parallel?
AFAIK the exact behaviour of the M1 execution units is mostly undocumented. Still, there is certainly a dependency chain in this case. In fact, it would be very hard to break, and the design of modern processors makes this even harder: the decoders, execution units, and schedulers are distinct units, and it would be insane to dynamically adapt the scheduling based on the instructions executed in parallel by the execution units so as to break the chain in this particular case. Not to mention that instructions are pipelined and it generally takes a few cycles for them to be committed. Furthermore, the latency of a load varies with the fetched memory location. Finally, even if this were the case, the Firestorm documentation does not mention such a feedback loop (see below for the links). Another possible way for a processor to optimize such a pattern is to fuse the micro-instructions so as to combine the increments and expose more parallelism, but this is quite complex to do for a relatively small improvement, and there is no evidence so far that Firestorm does it (see here for more information about Firestorm instruction fusion/elimination).
The M1 big cores (Apple's Firestorm) are designed to be massively parallel. They have 6 ALUs per core, so they can execute a lot of instructions in parallel on each core (possibly at the expense of a higher latency). However, this design tends to require a lot more transistors than the current mainstream x86 Intel/AMD alternatives (Alder Lake/*-Cove architectures aside). Thus, the cores operate at a significantly lower frequency so as to keep energy consumption low. This means dependency chains are significantly more expensive on such an architecture compared to others, unless there are enough independent instructions to execute in parallel along the critical path. For more information about how CPUs work, please read Modern Microprocessors - A 90-Minute Guide!. For more information about the M1 processors and especially the Firestorm architecture, please read this deep analysis.
Note that the Icestorm cores are designed to be energy efficient, so they are far less parallel; a dependency chain should thus be less critical on such a core. Still, having fewer dependencies is often a good idea.
As for other ARM processors, recent core architectures are not as parallel as Firestorm. For example, the Cortex-A77 and Neoverse V1 have "only" 4 ALUs (which is already quite good). One also needs to consider the latency of each instruction actually used in a given piece of code. This information is available on the ARM website, and AFAIK it has not yet been published for Apple processors (one needs to benchmark the instructions).
As for pre- vs. post-increment, I expect them to take the same time (same latency and throughput), especially on big cores like Firestorm (which try to reduce the latency of the most frequent instructions at the expense of more transistors). However, the actual scheduling of the instructions in a given piece of code can cause one to be slower than the other if the latency is not hidden by other instructions.
I received an answer to this on IRC: such usage will be fairly fast (which makes sense when you consider that it corresponds to typical looping patterns, so it is good that the loop-carried dependency doesn't hurt too much), but it is still better to avoid it if possible, as it takes up rename bandwidth.
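If you want to observe the effect of such a loop-carried address dependency from high-level code, a rough sketch like the following Go benchmark can help. This is Go rather than assembly, so the compiler picks the actual addressing modes and may restructure the chain itself; treat it as an illustration of the pattern, not a measurement of Firestorm's scheduler.

package chain

import "testing"

var buf = make([]int64, 1<<16)

var sink int64

// Serial: one index advanced between consecutive loads, mimicking two
// post-incremented loads through the same register: the second load's
// address depends on the first increment.
func BenchmarkSerialIncrement(b *testing.B) {
	var s, t int64
	for n := 0; n < b.N; n++ {
		i := 0
		for i+1 < len(buf) {
			s += buf[i]
			i++
			t += buf[i]
			i++
		}
	}
	sink = s + t
}

// Split: two independent indices striding by 2, so the address
// computations of the two streams form separate, shorter chains.
func BenchmarkSplitIncrement(b *testing.B) {
	var s, t int64
	for n := 0; n < b.N; n++ {
		for i, j := 0, 1; j < len(buf); i, j = i+2, j+2 {
			s += buf[i]
			t += buf[j]
		}
	}
	sink = s + t
}

Run with go test -bench .; if the two variants time the same on your machine, the out-of-order core (or the compiler) is already hiding the chain.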

Multi-threaded software (PFC3D, used for simulation) not using all the available cores

I'm using multi-threaded software (PFC3D, developed by Itasca Consulting) to run some simulations. After moving to a powerful computer, an Intel Xeon Gold 5120T CPU at 2.2 GHz, 2 processors (28 physical cores, 56 logical cores), running Windows 10, to get faster calculations, the software seems to use only a limited number of cores. Normally the software detects 56 cores and automatically uses the maximum number of cores.
I'm quite sure the problem is in the system, not in my software, because I'm running the same code on an Intel Core i9-9880H processor (16 logical cores) and it is using all the cores, with even better efficiency than the Xeon Gold.
The software is only using 22 to 30 of them, while 28 cores/56 logical processors are displayed on the Task Manager's CPU page. I have Windows 10 Pro.
I very much appreciate your help.
Thank you
Youssef
It's hard to say, because I do not have the code and you provide very little information.
You seem to have no I/O, since you say you use 100% of the CPU on the i9. That simplifies things a little, but...
There could be many reasons.
My feeling is that you have thread synchronization (such as a critical section) around one or more shared resources. Each such resource seems to be only lightly contended: each thread holds it so briefly that 16 threads can access it with few (or very few) collisions. I mean that threads mostly do not have to wait for the shared resource (it is mostly available, not locked). But adding more threads significantly increases the number of collisions (the resource being locked by another thread), so threads end up waiting for it. It really sounds like something like that. But it is only a guess.
A quick thing to try that could potentially improve performance (because I have the feeling the shared resource requires very quick access) is to use a spin lock instead of a regular critical section. But that is totally a guess based on very little information, and SpinLock is available in C# but perhaps not in your language.
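For what it's worth, here is a minimal Go sketch of that idea (Go rather than C#, since the asker's language is unknown): a test-and-set spin lock built on an atomic compare-and-swap, which you can swap against a regular mutex to compare behaviour under contention. Like the suggestion itself, it is only an experiment.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// SpinLock is a minimal test-and-set lock. A waiter burns CPU instead of
// sleeping, which only pays off when the critical section is very short.
type SpinLock struct{ state int32 }

func (l *SpinLock) Lock() {
	for !atomic.CompareAndSwapInt32(&l.state, 0, 1) {
		runtime.Gosched() // be polite to the Go scheduler while spinning
	}
}

func (l *SpinLock) Unlock() { atomic.StoreInt32(&l.state, 0) }

func main() {
	var (
		spin    SpinLock
		mu      sync.Mutex
		counter int64
		wg      sync.WaitGroup
	)
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100000; i++ {
				spin.Lock() // swap for mu.Lock()/mu.Unlock() to compare
				counter++
				spin.Unlock()
			}
		}()
	}
	wg.Wait()
	_ = mu
	fmt.Println("counter:", counter)
}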
As for the number of CPUs used, using only half could be normal, depending on how the program is written. Sometimes it is better not to use the hyperthreaded (logical) cores, and perhaps your program decides this itself. There could also be a bug, in the program itself, in C#, or in the BIOS, that tells the app there are only 28 CPUs instead of 56 (usually related to hyperthreading). This is still a guess.
There is some additional information in this Stack Overflow question that could potentially help you.
Good luck.

Go counts virtual cores, not physical?

I have some Go code I am benchmarking on my Macbook (Intel Core i5 processor with two physical cores).
Go's runtime.NumCPU() yields 4, because it counts "virtual cores"
I don't know much about virtual cores in this context, but my benchmarks seem to indicate a multiprocessing speedup of only 2x when I configure my code using
runtime.GOMAXPROCS(runtime.NumCPU())
I get the same performance if I use 2 instead of 4 cores. I would post the code, but I think it's largely irrelevant to my questions, which are:
1) is this normal?
2) why, if it is, do multiple virtual cores benefit a machine like my macbook?
Update:
In case it matters: my code spawns the same number of goroutines as whatever you set runtime.GOMAXPROCS() to. The tasks are fully parallel and have no interdependencies or shared state. It's running as a natively compiled binary.
1) is this normal?
If you mean the virtual cores showing up in runtime.NumCPU(), then yes, at least in the sense that programs written in C as well as those running on top of other runtimes like the JVM will see the same number of CPUs. If you mean the performance, see below.
2) why, if it is, do multiple virtual cores benefit a machine like my macbook?
It's a complicated matter that depends on the workload. The workloads where hyper-threading's benefits show the most are typically highly parallel ones, like 3D rendering and certain kinds of data compression. In other workloads the benefit may be absent, and the impact of HT on performance may even be negative (due to the communication and context-switching overhead of running more threads). Reading the Wikipedia article on hyper-threading can elucidate the matter further.
Here is a sample benchmark that compares the performance of the same CPU with and without HT. Note how the performance is not always improved by HT and in some cases, in fact, decreases.
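Since the original code was not posted, here is a hedged sketch of the kind of benchmark being described: a fully parallel, CPU-bound task with no shared state, timed under different GOMAXPROCS values. On a 2-core/4-thread machine you would typically see roughly 2x going from 1 to 2, and little or nothing going from 2 to 4.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// busyWork stands in for the asker's CPU-bound task: pure computation,
// no I/O, no shared state.
func busyWork(n int) float64 {
	x := 1.0
	for i := 0; i < n; i++ {
		x = x*1.0000001 + 0.000001
	}
	return x
}

// run times `goroutines` identical tasks under a given GOMAXPROCS.
func run(procs, goroutines int) time.Duration {
	runtime.GOMAXPROCS(procs)
	var wg sync.WaitGroup
	results := make([]float64, goroutines) // keep results so work isn't optimized away
	start := time.Now()
	for g := 0; g < goroutines; g++ {
		g := g
		wg.Add(1)
		go func() {
			defer wg.Done()
			results[g] = busyWork(50000000)
		}()
	}
	wg.Wait()
	_ = results
	return time.Since(start)
}

func main() {
	// Same total work each time; only the available parallelism changes.
	for _, p := range []int{1, 2, 4} {
		fmt.Printf("GOMAXPROCS=%d: %v\n", p, run(p, 4))
	}
}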

What is difference between 'Cores across processors' and 'Number of CPUs'?

E.g. consider the following processor configuration of my machine:
Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz (4 CPUs)
Then how should I find out how many 'cores across processors' my machine has?
Is it the 4 cores [i.e. the number of CPUs]?
I have referred to the following links, but I still do not get a clear idea:
http://www.ehow.com/how_6873203_do-number-core-processors-windows_.html
Can anyone please clear up my doubt?
"Cores across processors" means nothing, or at least nothing in particular; it's a generic, non-technical phrase with no exact meaning.
According to Intel, this CPU provides 2 physical cores with Hyper-Threading, which means you get 4 logical cores, or so-called hardware threads.
Hyper-Threading is an Intel technology that provides 2 threads for each core, so 2 cores * 2 = 4 threads.
I think that this is the closest answer to what you are asking here.
Let's first clarify what a CPU is and what a core is. A central processing unit (CPU) can have multiple cores; each of those cores is a processor by itself, capable of executing a program, but self-contained on the same chip.
In the past, one CPU was distributed among quite a few chips, but as Moore's law progressed they managed to fit a complete CPU inside one chip (die). Since the '90s, manufacturers have been fitting more cores into the same die; that's the concept of multi-core.
These days it is possible to have hundreds of cores on the same CPU (chip or die), e.g. GPUs and Intel Xeon parts. Another technique developed in the '90s was simultaneous multi-threading: essentially, it was found to be possible to run another thread on the same single-core CPU, since many of the resources, such as the ALUs and registers, were already duplicated.
So basically a CPU can have multiple cores, each of them capable of running one thread or more at the same time. We may expect to have more cores in the future, but it will be more difficult to program them efficiently.
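For a programmatic check, note that the operating system (and hence most runtimes) exposes logical processors; the physical-core count has to come from the OS topology APIs or the vendor's spec sheet. A minimal example (in Go, just because it is compact):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// On an i5 650 (2 physical cores, Hyper-Threading enabled) this
	// prints 4: the OS exposes logical processors, not physical cores.
	fmt.Println("logical CPUs:", runtime.NumCPU())
}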

Cores and hyperthreading

I'm writing extremely optimized, CPU-intensive multithreaded code in C which performs a task in a more or less bounded amount of time. During this time it does not venture out of its L1 cache except to load initial values and to store final results. So essentially this is parallelized code which scales linearly with every core added. That is what happens on non-HT cores.
On my 2-core i5 with HT (which the BIOS does not allow me to disable; disabling it would be an impractical solution anyway), I get an annoyingly dismal improvement when going from one core to two. My hypothesis is that the first thread runs alone on one core while the second shares a core with the first.
There are functions in the Windows API to retrieve information about the available cores and hyperthreads. But how do I make use of this information to ensure that I run only one thread per physical core?
This article might be able to help:
http://msdn.microsoft.com/en-us/magazine/cc300701.aspx#S11
See the "CPU Affinity" section and the "Detecting Hyper-Threading" section.
The OS will be using the HT logical cores whether or not you are, and the upshot of that is that the cache is effectively halved in size. You can pin a thread to a logical core, but I suspect it won't help you. Your problem is the mere presence of HT. You do need to turn it off.
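If you still want to experiment with pinning, here is a rough Go sketch of the approach the linked article describes for C: lock the goroutine to an OS thread, then call SetThreadAffinityMask from kernel32. The assumption that even-numbered logical CPUs map to distinct physical cores is common on Intel machines but must be verified (e.g. with GetLogicalProcessorInformation).

//go:build windows

package main

import (
	"fmt"
	"runtime"
	"sync"
	"syscall"
)

var (
	kernel32              = syscall.NewLazyDLL("kernel32.dll")
	procGetCurrentThread  = kernel32.NewProc("GetCurrentThread")
	procSetThreadAffinity = kernel32.NewProc("SetThreadAffinityMask")
)

// pinToLogicalCPU restricts the calling OS thread to one logical CPU.
func pinToLogicalCPU(cpu int) error {
	h, _, _ := procGetCurrentThread.Call() // pseudo-handle, no cleanup needed
	mask := uintptr(1) << uint(cpu)
	prev, _, err := procSetThreadAffinity.Call(h, mask)
	if prev == 0 { // the API returns the previous mask, or 0 on failure
		return err
	}
	return nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			runtime.LockOSThread() // keep this goroutine on one OS thread
			// Even logical CPUs are assumed to be the first hyperthread
			// of each physical core (verify on your machine!).
			if err := pinToLogicalCPU(worker * 2); err != nil {
				fmt.Println("pin failed:", err)
				return
			}
			// ... run the cache-sensitive work here ...
		}(i)
	}
	wg.Wait()
}

Even pinned this way, the OS can still schedule other processes onto the sibling logical CPUs, which is why disabling HT remains the only complete fix.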
