Does thread pinning in Linux throw OS scheduler off balance? - linux-kernel

If an application pins several high-CPU-consuming threads to a set of CPUs in Linux, does that throw the OS scheduler off balance? Is the scheduler aware of pinned threads? Does it take the total CPU time per core into account, so that it balances other, non-pinned threads onto the less "hot" CPUs?

Different execution speed with idle vs heavy-load CPU

Fellow colleagues,
I'm currently working on a PowerPC emulator written in C++. In order to evaluate its performance, I'm using std::chrono::high_resolution_clock to measure the execution time of a guest code block for which the number of CPU cycles is known. The corresponding code is located here: https://github.com/maximumspatium/dingusppc/commit/11b4e99376e23f46f4cd8ee6223c5788ab963a37
While doing the above tests, I noticed that my MacBook Pro reports different numbers depending on CPU load. That is, when I run the above code with an idle CPU I get about 230,000 ns execution time, while with a heavily loaded CPU (neural-net training, for example) I get much better performance (< 70,000 ns).
I suppose it's related to threads and scheduling in macOS. I'd like to utilize the full CPU power in my emulator. Is there a way to change thread performance to run at full speed, just like it does when the CPU is running under heavy load?
Thanks in advance!
P.S.: The machine in question is a MacBook Pro 17″ Mid 2010 with a 2.53 GHz Intel Core i5 and 8 GB RAM running macOS 10.13.6.

High bandwidth Networking and the Windows "System Interrupts" Process

I am writing a massive UDP network application.
It runs traffic at 10 gigabits per second.
I have a very high "System Interrupts" CPU usage in task manager.
Reading about what this means, I see:
What Is the “System Interrupts” Process?
System Interrupts is an official part of Windows and, while it does
appear as a process in Task Manager, it’s not really a process in the
traditional sense. Rather, it’s an aggregate placeholder used to
display the system resources used by all the hardware interrupts
happening on your PC.
However, most articles say that a high value indicates failing hardware.
Since the "System Interrupts" entry reflects time spent servicing hardware interrupts, though, perhaps a high value is simply to be expected given my heavy UDP traffic.
Also, is all of this really happening on one CPU core, or is it an aggregate of interrupt handling across all CPU cores?
If you have many individual datagrams being sent over UDP, that will certainly cause a lot of hardware interrupts and a lot of CPU usage. 10 Gbit/s is certainly in "lots of CPU" territory if your datagrams are relatively small.
Each CPU has its own hardware interrupts. You can see how spread out the load is over cores on the performance tab - the red line is the kernel CPU time, which includes hardware interrupts and other low-level socket handling by the OS.

Scheduling unit on linux

I hear that the Linux kernel treats a thread as a kernel thread, and a process as a group of threads sharing the same virtual memory space.
I know that on Windows, the scheduling unit is the thread.
Does that mean the scheduling unit of both the Windows and Linux kernels is the thread?
What is the minimum scheduling unit on Linux?
Generally, the scheduling unit on Linux is referred to as a KSE, a "kernel scheduling entity". On modern Linux systems, each thread is a KSE.

Erlang virtual machine map to which kernel thread?

In a multi-core system there are multiple schedulers to schedule the Erlang processes, with one scheduler mapped to each CPU. My doubt is: the Erlang virtual machine is itself a process running on some kernel thread, so to which CPU does it get mapped? Or does it share all CPUs according to availability, with the OS granting CPU time as it becomes available?
An Erlang Virtual Machine runs as a single OS process. Within that process, it runs multiple threads, one per scheduler (and possibly additional threads for asynchronous I/O etc.). By default, you get one scheduler thread per CPU core.
Erlang processes ("green threads") are executed by the scheduler threads, which do load balancing between them, so there could be a hundred thousand Erlang processes being executed by 4 scheduler threads (on a quad-core machine) within a single operating system process. Normally, the OS does the mapping of scheduler threads onto physical cores, but see also How, if at all, do Erlang Processes map to Kernel Threads?.

MS-Windows scheduler control (or otherwise) -- test application performance on slower CPU?

Is there some tool which allows one to control the MS-Windows (XP SP3 32-bit in my case) scheduler, such that a target application (which I'd like to test) operates as if it were running on a slower CPU? Say my physical host is a 2.4 GHz dual-core, but I'd like the application to run as if on an 800 MHz/1.0 GHz CPU.
I am aware of some such programs which allowed old DOS games to run slower, but AFAIK they take the approach of consuming CPU cycles to starve the application. I do not want such a thing, and would also like higher-precision control over the clock.
I don't believe you'll find software that directly emulates different CPUs, but something like Process Lasso would let you limit a program's CPU usage, simulating a slower clock speed in a rough way.
I also found this blog entry with many other ways to throttle your CPU: Windows CPU throttling techniques
Additionally, if you have access to VMWare you could setup a resource pool with a limited CPU reservation.
