Multicore CPU abilities

Is it possible for a multicore processor to run tasks from two different processes simultaneously (at the same time)?

Yes, if the operating system supports it. The popular ones do and have done so for years.
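A quick way to see this for yourself on a POSIX system is sketched below (the spin loop and iteration count are arbitrary): fork a second CPU-bound process and watch top or htop show both processes each using close to a full core at the same time.

    /* Minimal sketch (POSIX): fork a second CPU-bound process. On a
     * multicore machine with a modern OS, both the parent and the child
     * run at the same time, each on its own core. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static void burn_cpu(const char *who)
    {
        volatile unsigned long long x = 0;
        for (unsigned long long i = 0; i < 2000000000ULL; ++i)
            x += i;                      /* pure CPU work, no I/O */
        printf("%s (pid %d) done\n", who, (int)getpid());
    }

    int main(void)
    {
        pid_t pid = fork();              /* create a second, independent process */
        if (pid == 0) {
            burn_cpu("child");
            return 0;
        }
        burn_cpu("parent");
        waitpid(pid, NULL, 0);           /* reap the child */
        return 0;
    }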

Related

Any way to make a program run only on a performance core (P-core)?

I need to run a compiler, and people have previously found that it runs well on a single core.
Now that Intel's 12th generation consumer chips have separate P-cores and E-cores, can I somehow tell the compile worker to run specifically on a P-core, so that it gets the fastest core on my machine?
Unless someone restricts the affinity mask on purpose, Windows is free to move threads to different CPUs as it sees fit. This most likely includes putting CPU-intensive tasks on the faster cores.
This applies not just to different physical core types but also to the older core-parking feature and to the processor layout on NUMA systems.
How Windows uses cores is probably also influenced by battery status and by whether features such as Battery Saver are enabled.
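If you do want to restrict the affinity mask yourself, a minimal Win32 sketch is shown below. The mask value is an assumption: which logical-processor bits correspond to P-cores varies by machine, and pinning the process this way also stops the scheduler from ever using the E-cores for it.

    /* Hedged sketch (Win32, C): restrict the current process to a chosen
     * set of logical CPUs by shrinking its affinity mask. Bit N in the
     * mask means "logical processor N"; which of those are P-cores is
     * machine-specific and assumed to be known here. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumption: logical processors 0-7 are the P-cores on this machine. */
        DWORD_PTR pCoreMask = 0x00FF;

        if (!SetProcessAffinityMask(GetCurrentProcess(), pCoreMask)) {
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
            return 1;
        }
        /* Threads of this process (and, per the Win32 docs, child processes
         * it creates) are now only scheduled on the selected cores. */
        puts("Affinity restricted; launch the compiler from here.");
        return 0;
    }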

How to run multiple OS simultaneously on different cores of ARMv8

I have an ARM Cortex-A53-based embedded system with 4 cores. It does not implement ARM TrustZone.
Is it possible to run the following OSs simultaneously?
Core 0: some type of RTOS
Core 1: some type of RTOS
Cores 2 and 3: Linux
All of them use some shared memory space to exchange data.
The boot sequence, up to loading the images (a monolithic RTOS and the Linux kernel) into DDR, is handled by an external chip.
Do I need to use a hypervisor, or just treat all cores as independent logical CPUs?
I am not familiar with ARMv8; should I pay additional attention to setting up the MMU, GIC, etc. in my case?
That's a very vague question, so the answer is going to be of the same sort.
That's roughly how ARMv8 is organised: a stack of exception levels, EL0 up to EL3.
Is it possible to run the following OSs simultaneously?
Yes, there should be no restriction preventing that.
All of them use some shared memory space to exchange data.
Yes, you could map the same region of physical memory into all of them. How to synchronise access to that shared memory from the different OSs (i.e. environments isolated from each other) is the more important question, though.
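As a heavily hedged sketch of what that synchronisation could look like: one word of the shared region can act as a spinlock, assuming every OS maps the region with the same, coherent memory attributes and agrees on its layout. The structure and names below are purely illustrative.

    /* Sketch only: a spinlock living inside the shared DDR region, usable
     * from any OS that maps the region coherently (Cortex-A53 cores in one
     * cluster are cache-coherent, but the mapping attributes must match). */
    #include <stdatomic.h>
    #include <stdint.h>

    struct shared_region {
        _Atomic uint32_t lock;      /* 0 = free, 1 = held               */
        uint32_t         length;    /* bytes valid in payload           */
        uint8_t          payload[4096 - 8];
    };

    static void shared_lock(struct shared_region *r)
    {
        uint32_t expected = 0;
        /* spin until we atomically flip the lock word 0 -> 1 */
        while (!atomic_compare_exchange_weak_explicit(
                   &r->lock, &expected, 1,
                   memory_order_acquire, memory_order_relaxed))
            expected = 0;
    }

    static void shared_unlock(struct shared_region *r)
    {
        atomic_store_explicit(&r->lock, 0, memory_order_release);
    }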
The boot sequence, up to loading the images (a monolithic RTOS and the Linux kernel) into DDR, is handled by an external chip.
You certainly need the OS images in memory before passing control to a kernel entry point, so this should be done from EL3 or EL2.
Do I need to use a hypervisor, or just treat all cores as independent logical CPUs?
Yes, you do need a hypervisor. That's probably the best way to organise the interaction between the different OSs.
Should I pay additional attention to setting up the MMU, GIC, etc. in my case?
There is an MMU configuration for each EL, so the EL0 translations are totally independent. MMU-EL1 (OS/kernel) organises the interaction between applications within the same OS; MMU-EL2 (hypervisor) organises the interaction between the different OSs. All in all, though, probably nothing special.
The GIC: that depends on how you are going to organise interrupts. It is possible to route interrupts to all cores or only to a particular one, and to use them to change EL and select which OS handles them. So yes, the GIC might need quite a bit of attention.

How to run an OpenMP program on a cluster with multiple nodes? [duplicate]

I want to know if it would be possible to run an OpenMP program on multiple hosts. So far I have only heard of programs that can be executed on multiple threads, but all within the same physical computer. Is it possible to execute a program on two (or more) clients? I don't want to use MPI.
Yes, it is possible to run OpenMP programs on a distributed system, but I doubt it is within the reach of every user around. ScaleMP offers vSMP - an expensive commercial hypervisor software that allows one to create a virtual NUMA machine on top of many networked hosts, then run a regular OS (Linux or Windows) inside this VM. It requires a fast network interconnect (e.g. InfiniBand) and dedicated hosts (since it runs as a hypervisor beneath the normal OS). We have an operational vSMP cluster here and it runs unmodified OpenMP applications, but performance is strongly dependent on data hierarchy and access patterns.
NICTA used to develop a similar SSI hypervisor named vNUMA, but development has also stopped. Besides, their solution was IA64-specific (IA64 is Intel Itanium, not to be confused with Intel 64, which is their current generation of x86 CPUs).
Intel used to develop Cluster OpenMP (ClOMP; not to be confused with the similarly named project to bring OpenMP support to Clang), but it was abandoned due to "general lack of interest among customers and fewer cases than expected where it showed a benefit" (from here). ClOMP was an Intel extension to OpenMP built into the Intel compiler suite, i.e. you couldn't use it with GCC (this request to start ClOMP development for GCC went into limbo). If you have access to old versions of the Intel compilers (9.1 through 11.1), you would have to obtain a (trial) ClOMP license, which might be next to impossible given that the product is dead and old (trial) licenses have already expired. Then again, starting with version 12.0, the Intel compilers no longer support ClOMP.
Other research projects exist (just search for "distributed shared memory"), but only vSMP (the ScaleMP solution) seems to be mature enough for production HPC environments (and it is priced accordingly). It seems that most effort now goes into the development of PGAS languages (Co-Array Fortran, Unified Parallel C, etc.) instead. I would suggest that you have a look at Berkeley UPC, or invest some time in learning MPI, as it is definitely not going away in the years to come.
Earlier, there was Cluster OpenMP.
Cluster OpenMP was an implementation of OpenMP that could make use of multiple SMP machines without resorting to MPI. This approach had the advantage of eliminating the need to write explicit messaging code, as well as not mixing programming paradigms. The shared memory in Cluster OpenMP was maintained across all machines through a distributed shared-memory subsystem. Cluster OpenMP is based on the relaxed memory consistency of OpenMP, allowing shared variables to be made consistent only when absolutely necessary. (source)
Performance Considerations for Cluster OpenMP
Some memory operations are much more expensive than others. To achieve good performance with Cluster OpenMP, the number of accesses to unprotected pages must be as high as possible relative to the number of accesses to protected pages. This means that once a page is brought up to date on a given node, a large number of accesses should be made to it before the next synchronization. To accomplish this, a program should have as little synchronization as possible and re-use the data on a given page as much as possible. This translates to avoiding fine-grained synchronization, such as atomic constructs or locks, and to having high data locality. (source)
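To make the fine-grained versus coarse-grained point concrete, here is an illustrative plain-C OpenMP fragment (not ClOMP-specific): the same sum written with a per-iteration atomic and with a reduction clause, where the reduction synchronizes once per thread rather than once per element.

    /* Two ways to sum an array with OpenMP. The atomic version pays a
     * synchronization cost on every element; the reduction version keeps
     * thread-local partial sums and combines them once at the end. */
    #include <stddef.h>

    double sum_atomic(const double *a, size_t n)
    {
        double s = 0.0;
        #pragma omp parallel for
        for (size_t i = 0; i < n; ++i) {
            #pragma omp atomic        /* fine-grained: one sync per element */
            s += a[i];
        }
        return s;
    }

    double sum_reduction(const double *a, size_t n)
    {
        double s = 0.0;
        /* coarse-grained: per-thread partial sums, combined once */
        #pragma omp parallel for reduction(+:s)
        for (size_t i = 0; i < n; ++i)
            s += a[i];
        return s;
    }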
Another option for running OpenMP programs on multiple hosts is the remote offloading plugin in the LLVM OpenMP runtime.
https://openmp.llvm.org/design/Runtimes.html#remote-offloading-plugin
The big issue with running OpenMP programs on distributed memory is data movement. Coincidentally, that is also one of the major issues in programming GPUs. Extending OpenMP to handle GPU programming has given rise to OpenMP directives that describe data transfer. Programming GPUs has also forced programmers to think more carefully about building programs that take data movement into account.
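For illustration, here is a sketch of what such a directive looks like in practice. The map clause is standard OpenMP; where the region actually executes (a GPU, or another host via the remote offloading plugin linked above) depends entirely on how the runtime is configured, which is outside this fragment.

    /* Sketch: an OpenMP target region with an explicit data-movement
     * clause. map(tofrom: ...) names the data that must be copied to the
     * target device (or remote host) before the region and back after. */
    #include <stddef.h>

    void scale(float *x, size_t n, float alpha)
    {
        #pragma omp target teams distribute parallel for map(tofrom: x[0:n])
        for (size_t i = 0; i < n; ++i)
            x[i] *= alpha;
    }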

Difference between multi-process programming with fork and MPI

Is there a difference, in performance or otherwise, between creating a multi-process program using Linux's fork and using the functions available in the MPI library?
Or is it just easier to do it in MPI because of the ready-to-use functions?
They don't solve the same problem. Note the difference between shared-memory parallel programming and distributed-memory parallel programming.
The fork/join model you mentioned is usually used for parallel programming on the same physical machine. You generally don't distribute your work to other connected machines (with the exception of some of the models mentioned in the comments).
MPI is for distributed-memory parallel programming. Instead of using a single processor, you use a group of machines (even hundreds of thousands of processors) to solve a problem. While these are sometimes considered one large logical machine, they are usually made up of lots of separate nodes. The MPI functions are there to simplify communication between processes on distributed machines, so you avoid having to do things like manually open TCP sockets between all of your processes.
So there's not really a way to compare their performance unless you're only running your MPI program on a single machine, which isn't really what it's designed to do. Yes, you can run MPI on a single machine and people do that all the time for small test codes or small projects, but that's not the biggest use case.
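A minimal MPI sketch (plain C) of the pattern described above; the processes may live on one machine or on many, and are typically launched with something like mpirun -np 4 ./a.out.

    /* Each MPI rank is an independent process; rank 0 sends an integer to
     * every other rank through MPI instead of hand-rolled sockets. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many are running? */

        if (rank == 0) {
            int payload = 42;
            for (int dest = 1; dest < size; ++dest)
                MPI_Send(&payload, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        } else {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank %d of %d received %d\n", rank, size, payload);
        }

        MPI_Finalize();
        return 0;
    }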

How can a Ruby process limit its own CPU usage?

Let's say I want a Ruby process to use no more than 15% of the CPU. Is it possible? How?
You could try using Process.setrlimit from Ruby's core library:
Sets the resource limit of the process.
This looks like it is just a wrapper around setrlimit from the C library, so it may only be available on Unix-ish platforms. setrlimit doesn't support CPU-percentage limits, but it does support limiting total CPU time in seconds.
If you're just trying to keep your Ruby process from hogging the whole CPU, then you could try adjusting its priority with Process.setpriority, which is just a wrapper around libc's setpriority and offers some control over your process's scheduling priority. Again, availability will probably be limited by your platform, but it should work on Linux, OS X, or any other Unix-ish system.
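Since both Ruby methods are thin wrappers over libc, here is a C sketch of the calls underneath; the limit and priority values are purely illustrative, and neither call gives a "15% of one CPU" cap (RLIMIT_CPU caps total CPU seconds, and setpriority only lowers scheduling priority relative to other runnable processes).

    /* Sketch of the libc calls that Process.setrlimit / Process.setpriority
     * wrap on Unix-ish systems. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Cap total CPU time at 60 seconds: SIGXCPU at the soft limit,
         * SIGKILL at the hard limit. */
        struct rlimit cpu_limit = { .rlim_cur = 60, .rlim_max = 60 };
        if (setrlimit(RLIMIT_CPU, &cpu_limit) != 0)
            perror("setrlimit");

        /* "Nice" the current process so it yields the CPU to anything with
         * a higher (numerically lower) priority. */
        if (setpriority(PRIO_PROCESS, 0, 10) != 0)
            perror("setpriority");

        /* ... do the actual work here ... */
        return 0;
    }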
