Which kernel thread does the Erlang virtual machine map to? - parallel-processing

In a multi-core system there are multiple schedulers to schedule the Erlang processes; one scheduler is mapped to one CPU. My question is: the Erlang virtual machine is itself a process running on some kernel thread, so to which CPU does it get mapped? Or does it share all CPUs according to availability (with the OS providing CPU time as available)?

An Erlang Virtual Machine runs as a single OS process. Within that process, it runs multiple threads, one per scheduler (and possibly additional threads for asynchronous I/O, etc.). By default, you get one scheduler thread per CPU core.
Erlang processes ("green threads") are executed by the scheduler threads, which load-balance between them, so there could be a hundred thousand Erlang processes being executed by 4 scheduler threads (on a quad-core machine) within a single operating system process. Normally, the OS does the mapping of scheduler threads onto physical cores, but see also How, if at all, do Erlang Processes map to Kernel Threads?

Related

OpenStack: use CPU across multiple servers

I have a blade server and I want to know how it is possible to share CPU/RAM between blades.
I want to have a machine with 32 physical CPUs, and I want all the CPUs to work together.
Is it possible to share CPUs between servers?
No, it is not possible without explicit support from the software. You can't run a single-threaded program on several CPU cores, and you can't run a multi-threaded program on different unconnected (non-coherent) physical CPUs.
Different blades are different servers; each of them has its own OS instance. They have no memory coherence, only a network connection, so it is the task of your software (and of its programmer) to split the work between several processes and connect them over the network. In computer clusters there is the MPI interface to make programming such programs easier, as sketched below.
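For illustration, a minimal MPI program in C might look like this (a sketch assuming an MPI implementation such as Open MPI is installed; the partial-sum workload is made up):

    /* Minimal MPI sketch: each rank (possibly on a different blade)
     * computes a partial sum, which rank 0 then combines.
     * Compile: mpicc sum.c -o sum    Run: mpirun -np 4 ./sum */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process sums its own slice of 1..1000000. */
        long long local = 0, total = 0;
        for (long long i = rank + 1; i <= 1000000; i += size)
            local += i;

        /* Combine the partial results over the network. */
        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %lld\n", total);
        MPI_Finalize();
        return 0;
    }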
There were several projects to emulate a shared-memory system (or a single-OS-instance system) using a cluster of PCs without coherent memory, but they are abandoned and/or too slow: Intel Cluster OpenMP, https://en.wikipedia.org/wiki/Single_system_image (MOSIX/OpenMOSIX), ScaleMP, various software DSMs (https://en.wikipedia.org/wiki/Distributed_shared_memory#Software_DSM_implementation)...

Scheduling unit on Linux

I hear the Linux kernel sees a thread as a kernel thread, and a process as a group of threads sharing the same virtual memory space.
I know that on Windows, the scheduling unit is the thread.
Does that mean that both the Windows and Linux kernels' scheduling unit is the thread?
What is the minimum scheduling unit on Linux?
Generally, the scheduling unit on Linux is referred to as a KSE, a "kernel scheduling entity". On modern Linux systems, each thread is a KSE.
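You can observe this directly: every thread in a process shares the PID but gets its own kernel thread ID, which is what the scheduler actually dispatches. A small Linux sketch using the gettid syscall:

    /* Each thread is its own KSE: same PID, distinct kernel TIDs.
     * Compile with: gcc tids.c -lpthread */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        (void)arg;
        printf("worker: pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("main:   pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
        return 0;
    }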

Force Windows onto one CPU, and then take over the rest

I've seen various RTOSes that use this strategy: they have Windows boot on one or more CPUs and then run realtime programs on the rest of the CPUs. Any idea how this might be accomplished? Can I let the computer boot off two CPUs and then stop execution on the rest of the CPUs? What documentation should I start looking at? I have enough experience with the Linux kernel that I might be able to figure out how to do it under Linux, so if there's anything that maps well onto Linux that you could describe it in terms of, that'd be fantastic.
You can easily boot Windows on fewer CPUs than are available. Run msconfig.exe, go to the Boot tab, click the Advanced options... button, check the "Number of processors" box and set the desired number (this is for Windows 7; the exact location for Vista and XP might differ slightly).
But that's just a solution to a very small part of the problem.
You will need to implement a special kernel-mode driver to start those other CPUs (Windows won't let you do that sort of thing from non-kernel-mode code). And you will need to implement a thread scheduler for those CPUs and a bunch of other low-level things... You might also want to steal some physical memory (RAM) from Windows and implement a memory manager, and those two alone may be a very involved undertaking.
What to read? The Intel/AMD CPU documentation (specifically the APIC part), the x86 Multiprocessor specification from Intel, books on Windows drivers, Windows Internals books, MSDN, etc.
You can't turn off Windows on one CPU and expect to run your program as usual, because syscalls are serviced by the same CPU that the issuing thread is running on. A syscall relies on kernel-mode-accessible per-thread data to service it, and hence any thread (user-mode or kernel-mode) can only run once Windows has performed the per-core initialization of that CPU.
It seems likely that you're writing a super-double-mega-awesome app that really-definitely needs to run, like, super-fast and you want everyone else to get off the core, 'cos then, like, you'll be the totally fastest-est, but you're not really appreciating that if Windows isn't on your core, then you can't use ANY part of Windows on that core either.
If you really do want to do this, you'll have to run as a boot-driver. The boot-driver will be able to reserve one of the cores from being initialized during boot, preventing Windows from "seeing" that core. You can then manually construct your own thread of execution to run on that core, but you'll need to handle paging, memory allocation, scheduling, NUMA, NMI exceptions, page-faulting, and ACPI events yourself. You won't be able to call Windows from that core without bluescreening Windows. You'll be on your own.
What you probably want to do is to lock your thread to a single processor (via SetThreadAffinityMask) and then up the priority of your thread to the maximum value. When you do so, Windows is still running on your core to service things like pagefaults and hardware interrupts, but no lower-priority user-mode thread will run on that core (they'll all move to other cores unless they are also locked to your processor).
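A minimal sketch of that approach (these are real Win32 calls; error handling is kept to a minimum):

    /* Pin the current thread to CPU 0 and raise its priority.
     * Windows still services interrupts and pagefaults on that core,
     * but lower-priority user-mode threads will migrate elsewhere. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        if (!SetThreadAffinityMask(GetCurrentThread(), 0x1))
            printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
            printf("SetThreadPriority failed: %lu\n", GetLastError());

        /* ... time-critical work here ... */
        return 0;
    }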
I could not understand the question completely. But if you are asking about scheduling processes onto cores, then Linux can accomplish this using sched_setaffinity. Follow this page:
http://www.kernel.org/doc/man-pages/online/pages/man2/sched_setaffinity.2.html
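A minimal C example of the API that man page documents, pinning the calling process to core 0 (a sketch with minimal error handling):

    /* Pin the calling process to CPU 0 using sched_setaffinity(2). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                    /* allow only CPU 0 */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        /* From here on, the kernel will only schedule us on CPU 0. */
        return 0;
    }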

MS-Windows scheduler control (or otherwise) -- test application performance on slower CPU?

Is there some tool which allows one to control the MS-Windows (XP-SP3 32-bit in my case) scheduler, such that a target application (which I'd like to test) operates as if it is running on a slower CPU? Say my physical host is a 2.4GHz dual-core, but I'd like the application to run as if it is running on an 800MHz/1.0GHz CPU.
I am aware of some such programs which allowed old DOS games to run slower, but AFAIK they take the approach of consuming CPU cycles to starve the application. I do not want such a thing, and would also like higher-precision control over the clock.
I don't believe you'll find software that directly emulates the different CPUs. But something like ProcessLasso would let you control a program's CPU usage, thus simulating, in a way, a slower clock speed.
I also found this blog entry with many other ways to throttle your CPU: Windows CPU throttling techniques
Additionally, if you have access to VMWare you could setup a resource pool with a limited CPU reservation.
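For what it's worth, tools of this kind typically rely on duty-cycle throttling: repeatedly suspending and resuming the target's threads so the application only sees a fraction of wall-clock time. A hypothetical sketch (hTarget is assumed to be a thread handle obtained elsewhere, e.g. via OpenThread with THREAD_SUSPEND_RESUME access):

    /* Hypothetical duty-cycle throttler: suspends and resumes a target
     * thread so it only gets ~1/3 of wall-clock time, roughly emulating
     * an 800MHz core on a 2.4GHz machine. */
    #include <windows.h>

    void throttle_loop(HANDLE hTarget, DWORD run_ms, DWORD stop_ms)
    {
        for (;;) {
            ResumeThread(hTarget);    /* let the thread run... */
            Sleep(run_ms);
            SuspendThread(hTarget);   /* ...then starve it */
            Sleep(stop_ms);
        }
    }
    /* e.g. throttle_loop(hTarget, 10, 20); gives a ~33% duty cycle */

Note that this still isn't true clock-speed emulation: timing-sensitive code in the application will see bursts of full speed separated by gaps, not a uniformly slower clock.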

How can a kernel be non-preemptive and still have multiple control paths?

In an operating systems course I took a while ago we were working on an old, non-preemptive kernel of Linux (2.4.X). However, we were told that there could be multiple control paths in the kernel simultaneously. Doesn't that contradict the non-preemptive nature of the kernel?
EDIT: I mean, there is no context switch inside the kernel. Last time I tried asking this question I got the response "well, the Linux kernel is preemptive, so there's no problem".
Within the 2.4 kernel, although kernel code could not be arbitrarily pre-empted by other kernel code, kernel code could still voluntarily give up the CPU by sleeping (this is obviously quite a common case).
In addition, kernel code could always be pre-empted by interrupt handlers (unless it specifically disabled interrupts), and the 2.4 kernel also supported SMP, allowing multiple CPUs to be executing within the kernel simultaneously.
The Linux kernel can run in interrupt context or in process (user) context. Process context means it is running on behalf of a process, which has called a syscall. Interrupt context means it is running on behalf of some kind of interrupt (hardware interrupt, softirq, ...).
When you talk about preemptive multitasking, it means the kernel can decide to preempt some task to run another task. When you talk about preemptive kernels, it means the kernel can decide to preempt itself running to run some other kernel code.
Now, before Linux was a preemptive kernel, you could already run kernel code on several CPUs, and kernel code could be interrupted by hardware interrupts (which could end up running softirqs before returning, ...). A preemptive kernel means that kernel code running in process context can also be preempted, to avoid long latencies (preemptive Linux came from the Linux realtime tree).
Of course, all of this is better explained in Rusty Russell's Unreliable Guide to Kernel Hacking and Unreliable Guide to Kernel Locking.
EDIT:
Or, trying to explain it better: when a task makes a syscall on a non-preemptive kernel, that task cannot be preempted until the syscall ends (maybe with EINTR, but that could be a long time away). A preemptive kernel allows that task to be preempted, leading to lower average-case and worst-case latencies for other tasks waiting to run.
A non-preemptive kernel means that the kernel does not perform context switching on behalf of another process, or interrupt another running process. It can still be multi-processing by implementing cooperative multitasking where the actually running processes themselves yield control to the kernel or other processes. So yes, you can have multi-tasking and a non-preemptive kernel.
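As a toy illustration of that cooperative model in user space (using POSIX ucontext; the task/scheduler split here is illustrative), a task runs until it explicitly yields, much as kernel code on 2.4 ran until it slept or returned:

    /* Minimal sketch of cooperative multitasking: a "task" that
     * explicitly yields control back to a "scheduler", analogous to
     * kernel code voluntarily giving up the CPU. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    static void task(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("task step %d, yielding\n", i);
            swapcontext(&task_ctx, &main_ctx);   /* voluntary yield */
        }
    }

    int main(void)
    {
        char stack[64 * 1024];
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = stack;
        task_ctx.uc_stack.ss_size = sizeof stack;
        task_ctx.uc_link = &main_ctx;            /* return here when done */
        makecontext(&task_ctx, task, 0);

        for (int i = 0; i < 3; i++) {
            printf("scheduler resumes task\n");
            swapcontext(&main_ctx, &task_ctx);
        }
        return 0;
    }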
There is no context switching within the kernel for MONOLITHIC kernels, but of course there is still multitasking performed by the kernel... therefore you still have multi-tasking and non-preemptiveness.
The Linux kernel offloads a lot of work to kernel threads, which may be scheduled in and out alongside userspace tasks, independent of kernel preemption. Even your old 2.4 kernel has these kernel threads, albeit fewer of them than a modern 2.6 kernel. The 2.6 kernel now has several levels of preemption that can be chosen at compile time, but full preemption is not the default.
