How can I check if the Linux kernel is built with Real-Time Group Scheduling?

I am using a Toradex Apalis iMX8 together with their Linux distribution TorizonCore, and need to check whether it is built with Real-Time Group scheduling. (https://docs.kernel.org/scheduler/sched-rt-group.html)
I have checked whether /proc/sys/kernel/sched_rt_period_us and /proc/sys/kernel/sched_rt_runtime_us exist and are populated with values; they do, and they are.
But I do not know if this is evidence enough that the kernel is built with Real-Time Group Scheduling.
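According to the linked documentation, the two sysctls above control global RT throttling and should exist even without group scheduling; the per-cgroup cpu.rt_runtime_us knob, by contrast, should only appear when the kernel is built with CONFIG_RT_GROUP_SCHED=y. A minimal sketch of that probe in C (the cgroup-v1 mount point /sys/fs/cgroup/cpu is an assumption and may differ on TorizonCore):

    /* Probe for signs of CONFIG_RT_GROUP_SCHED. */
    #include <stdio.h>
    #include <unistd.h>

    static void probe(const char *label, const char *path)
    {
        printf("%-30s %s\n", label, access(path, F_OK) == 0 ? "present" : "absent");
    }

    int main(void)
    {
        /* Exists with plain RT throttling, so proves little by itself. */
        probe("global sched_rt_runtime_us:", "/proc/sys/kernel/sched_rt_runtime_us");
        /* Per-cgroup knob: only created when CONFIG_RT_GROUP_SCHED=y. */
        probe("per-cgroup cpu.rt_runtime_us:", "/sys/fs/cgroup/cpu/cpu.rt_runtime_us");
        return 0;
    }

If TorizonCore enables CONFIG_IKCONFIG_PROC, grepping /proc/config.gz (or /boot/config-$(uname -r), where present) for CONFIG_RT_GROUP_SCHED is the most direct check.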

Related

How to specify the physical CoreIDs used for "CLOSE" when specifying OMP_PROC_BIND?

We are trying to optimize HPC applications using OpenMP on a new hardware platform. These applications need precise placement/pinning of their threads to cores or performance falls by half. Currently, we provide the user a custom GOMP_CPU_AFFINITY map for each platform, but this is cumbersome because it's different on each hardware version, and even platforms with different firmware versions sometimes change their CoreID physical mappings - all things impossible for the user to detect on the fly.
It would be a great help if HPC applications could simply set GOMP_PROC_BIND to "close" and OpenMP would do the right thing for the given platform - but to make this possible, the hardware vendor would need to define what "close" means for each machine. We'd like to do this, but we can't tell how/where OpenMP gets CoreID lists to use for things like close, spread, etc. (For various external requirements, the CoreID spatial pattern on this machine would appear utterly random to a software writer.)
Any advice as to where/how OpenMP defines the CoreID lists for OMP_PROC_BIND so we could configure them? We are comfortable with the idea that we might need a custom version of OpenMP (with altered source code) for this platform if needed.
Thanks, everyone. :)
Jeff
Expanding on what @VictorEijkhout said...
You seem to have confused an environment variable that I can't find anywhere with Google (GOMP_PROC_BIND) with the OpenMP standard environment variable (OMP_PROC_BIND). If GOMP_PROC_BIND exists, the name suggests that it is a GNU feature. Note too that one of the two Google hits for GOMP_PROC_BIND says "Code that reads the setting is buggy. Setting is invalid and ignored at runtime." So if you are setting that, it is unsurprising that it has no effect!
I will therefore answer for the more general case of OMP_PROC_BIND.
The binding of OpenMP threads to logical CPUs clearly has to be done at runtime, since, beyond its ISA, the compiler has no knowledge of the hardware on which the compiled code will run. Therefore you need to be looking at the runtime library code.
I have not looked at GNU's libgomp, but, where it can, LLVM's libomp uses the hwloc library to explore the machine hardware. Since hwloc also includes other useful tools for machine exploration (such as lstopo) it is likely that your effort is best invested in ensuring good hwloc support on your machine, at which point there will be no need to delve inside the OpenMP runtime.
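As an illustration (a minimal sketch, assuming hwloc is installed; compile with -lhwloc), the logical-to-physical CPU mapping that a runtime's binding logic relies on can be dumped like this:

    /* Enumerate processing units and their OS (physical) indices via hwloc. */
    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        hwloc_topology_load(&topo);

        /* PUs are hwloc's smallest schedulable units (hardware threads). */
        int n = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
        for (int i = 0; i < n; i++) {
            hwloc_obj_t pu = hwloc_get_obj_by_type(topo, HWLOC_OBJ_PU, i);
            printf("PU L#%u -> OS index P#%u\n", pu->logical_index, pu->os_index);
        }

        hwloc_topology_destroy(topo);
        return 0;
    }

If hwloc's view of your machine is correct (check with lstopo), "close" placement should fall out of the logical ordering for free.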

How does a Linux distribution affect the kernel behavior

This might be obvious to some but not to me, so I'll ask =)
I'm having an issue: I have built an embedded Linux stack for a piece of hardware (NVidia TX2 + ConnectTech Astro carrier). I use a PCIe card from EPIX.
If I use Ubuntu's official distribution for Tegra, the PCIe card is properly detected.
With identical kernel and device tree blob, and the same HW unit, the detection fails with embedded Linux.
I thought that detecting PCIe devices would be the kernel's job and not be influenced by the distro, unless the drivers are built as kernel modules and inserted at different times. But in my case they are built into the kernel.
Could someone elaborate on why the detection would work with one distro but not the other?
Here is a link to what I tried to do to fix the detection
tx2-pcie-does-not-detect-endpoint-on-connecttech-carrier-board
Thanks!
A Linux distribution contains a kernel that usually differs from the vanilla kernel of the same release. Most of the time a distribution kernel contains lots of back ports of bug fixes that were discovered and fixed later in micro releases. There may be other features that a specific vendor includes and the vanilla kernel does not, like more recent versions of certain drivers, etc. What makes this even more confusing is that the sets of these back ports often differ between distributions from different vendors. As a side effect, this makes it difficult to depend on something like the KERNEL_VERSION() macro in custom kernel code or in custom device drivers.
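As a concrete illustration of that last point (a sketch of the usual pattern, not code from any particular driver), a version gate like the one below can make the wrong choice on a distro kernel, because the reported version may predate features the vendor has back-ported into it:

    /* Version-gated code in a custom driver or module. On a distribution
     * kernel, LINUX_VERSION_CODE reflects the base release, not the set of
     * back ports, so this test can mis-detect available functionality. */
    #include <linux/version.h>

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 9, 0)
    /* use the newer API */
    #else
    /* fall back to the older one */
    #endif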
I can't speak to the specific issue you're having. The topic is pretty generic, and I hope that this explanation helps.

How to run an OpenMP program on clusters with multiple nodes? [duplicate]

I want to know if it would be possible to run an OpenMP program on multiple hosts. So far I have only heard of programs that can be executed on multiple threads, but all within the same physical computer. Is it possible to execute a program on two (or more) clients? I don't want to use MPI.
Yes, it is possible to run OpenMP programs on a distributed system, but I doubt it is within the reach of every user around. ScaleMP offers vSMP - an expensive commercial hypervisor software that allows one to create a virtual NUMA machine on top of many networked hosts, then run a regular OS (Linux or Windows) inside this VM. It requires a fast network interconnect (e.g. InfiniBand) and dedicated hosts (since it runs as a hypervisor beneath the normal OS). We have an operational vSMP cluster here and it runs unmodified OpenMP applications, but performance is strongly dependent on data hierarchy and access patterns.
NICTA used to develop a similar SSI hypervisor named vNUMA, but development has also stopped. Besides, their solution was IA64-specific (IA64 is Intel Itanium, not to be confused with Intel64, which is their current generation of x86 CPUs).
Intel used to develop Cluster OpenMP (ClOMP; not to be confused with the similarly named project to bring OpenMP support to Clang), but it was abandoned due to "general lack of interest among customers and fewer cases than expected where it showed a benefit" (from here). ClOMP was an Intel extension to OpenMP and it was built into the Intel compiler suite, e.g. you couldn't use it with GCC (this request to start ClOMP development for GCC went into limbo). If you have access to old versions of Intel compilers (versions 9.1 to 11.1), you would have to obtain a (trial) ClOMP license, which might be next to impossible given that the product is dead and old (trial) licenses have already expired. Then again, starting with version 12.0, Intel compilers no longer support ClOMP.
Other research projects exist (just search for "distributed shared memory"), but only vSMP (the ScaleMP solution) seems to be mature enough for production HPC environments (and it's priced accordingly). It seems that most effort now goes into the development of PGAS languages (Co-Array Fortran, Unified Parallel C, etc.) instead. I would suggest that you have a look at Berkeley UPC, or invest some time in learning MPI as it is definitely not going away in the years to come.
Earlier, there was Cluster OpenMP.
Cluster OpenMP was an implementation of OpenMP that could make use of multiple SMP machines without resorting to MPI. It had the advantage of eliminating the need to write explicit messaging code, as well as not mixing programming paradigms. The shared memory in Cluster OpenMP was maintained across all machines through a distributed shared-memory subsystem. Cluster OpenMP was based on the relaxed memory consistency of OpenMP, allowing shared variables to be made consistent only when absolutely necessary. (source)
Performance Considerations for Cluster OpenMP
Some memory operations are much more expensive than others. To achieve good performance with Cluster OpenMP, the number of accesses to unprotected pages must be as high as possible, relative to the number of accesses to protected pages. This means that once a page is brought up to date on a given node, a large number of accesses should be made to it before the next synchronization. To accomplish this, a program should have as little synchronization as possible and re-use the data on a given page as much as possible. This translates to avoiding fine-grained synchronization, such as atomic constructs or locks, and having high data locality. (source)
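Cluster OpenMP itself is gone, but the advice translates directly into ordinary OpenMP. A minimal sketch of the contrast (generic OpenMP C, not actual Cluster OpenMP code):

    #include <omp.h>

    /* Fine-grained: every atomic update is a synchronization point, which
       on a software DSM like Cluster OpenMP meant page-level traffic. */
    double sum_atomic(const double *a, int n)
    {
        double s = 0.0;
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            #pragma omp atomic
            s += a[i];
        }
        return s;
    }

    /* Coarse-grained: each thread works on private data and the partial
       sums are combined only once at the end. */
    double sum_reduction(const double *a, int n)
    {
        double s = 0.0;
        #pragma omp parallel for reduction(+: s)
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }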
Another option for running OpenMP programs on multiple hosts is the remote offloading plugin in the LLVM OpenMP runtime.
https://openmp.llvm.org/design/Runtimes.html#remote-offloading-plugin
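For context, the plugin builds on the standard OpenMP target construct; whether a region like the sketch below actually runs on a remote host depends entirely on how the runtime and plugin are configured (without offloading configured, it simply falls back to host execution):

    #include <stdio.h>

    int main(void)
    {
        int sum = 0;
        /* A standard OpenMP 4.5+ target region; the remote offloading
           plugin can ship it to another host's runtime. */
        #pragma omp target map(tofrom: sum)
        for (int i = 1; i <= 100; i++)
            sum += i;

        printf("sum = %d\n", sum);  /* 5050 */
        return 0;
    }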
The big issue with running OpenMP programs on distributed memory is data movement. Coincidentally, that is also one of the major issues in programming GPUs. Extending OpenMP to handle GPU programming has given rise to OpenMP directives to describe data transfer. Programming GPUs has also forced programmers to think more carefully about building programs that consider data movement.

Possible to use OpenCL on multi-computers?

As far as I know, the answer is no. OpenCL is designed for multi-core systems.
But is there any way to use OpenCL on multiple computers (each computer being a multi-core system)? If not, are any additional tools or frameworks required?
I read some articles about Distributed computing, Cluster computing, Grid computing... but I can't find a satisfying answer.
Any ideas will be appreciated
Thank you :)
There are two frameworks for this purpose: VirtualCL and CLara. Both packages let you work transparently with remote machines as local devices. Unfortunately, VirtualCL is only available as pre-compiled binaries without sources and CLara is not actively developed anymore.
SnuCL uses MPI and OpenCL to transparently use the cluster through the OpenCL API. It also adds a few OpenCL extensions to effectively deal with the memory objects.
It is open source. See http://aces.snu.ac.kr/Center_for_Manycore_Programming/SnuCL.html
and http://tbex.twbbs.org/~tbex/pad/SunCL.pdf
There is one more solution not mentioned above: dOpenCL.
"dOpenCL (distributed OpenCL) is a novel, uniform approach to programming distributed heterogeneous systems with accelerators. It transparently integrates the nodes of a distributed system into a single OpenCL platform. Thus, dOpenCL allows the user to run unmodified existing OpenCL applications in a heterogeneous distributed environment. Besides, it extends the OpenCL programming model to deal with individual nodes of the distributed system."
I have used VirtualCL to form a GPU cluster with 3 AMD GPUs as compute nodes and my Ubuntu Intel desktop running as the broker node. I was able to start both the broker and compute nodes.
In addition to the various options already mentioned by other posters, here are two more open source projects that you may be interested in:
ocland (in beta stage): offers a server application and an ICD implementation that the clients can use to take advantage of local and remote devices that support OpenCL in a transparent fashion. The license is GPLv3.
COPRTHR SDK by Brown Deer Technology (currently version 1.6): this SDK, which offers an open source (GPLv3) OpenCL implementation for x86_64, ARM, Epiphany and Intel MIC, includes a "Compute Layer Remote Procedure Call" implementation. This consists of a client-side OpenCL implementation that supports RPC (libclrpc) and a server application (clrpcd). The website doesn't mention much about it, but the documentation contains a section about this CLRPC implementation.

OpenCL distribution

I'm currently developing an OpenCL-application for a very heterogeneous set of computers (using JavaCL to be specific). In order to maximize performance I want to use a GPU if it's available otherwise I want to fall back to the CPU and use SIMD-instructions. My plan is to implement the OpenCL-code using vector-types because my understanding is that this allows CPUs to vectorize the instructions and use SIMD-instructions.
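As an illustration of the vector-type idea (a minimal sketch; the kernel and its names are made up), writing the kernel in terms of float4 gives a CPU implementation an explicit SIMD width to map onto SSE:

    /* OpenCL C kernel: one float4 multiply should compile down to one
       128-bit SIMD operation on a CPU device that vectorizes this way. */
    __kernel void scale4(__global float4 *data, const float factor)
    {
        size_t i = get_global_id(0);
        data[i] *= factor;
    }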
My question however is regarding which OpenCL-implementation to use. E.g. if the computer has a Nvidia GPU I assume it's best to use Nvidia's library but if no GPU is available I want to use Intel's library to use the SIMD-instructions.
How do I achieve this? Is this handled automatically or do I have to include all libraries and implement some logic to pick the right one? It feels like this is a problem that more people than I are facing.
Update
After testing the different OpenCL-drivers this is my experience so far:
Intel: crashed the JVM when JavaCL tried to call it. After a restart it didn't crash the JVM, but it also didn't return any usable devices (I was using an Intel i7 CPU). When I compiled the OpenCL code offline it seemed to be able to do some auto-vectorization, so Intel's compiler seems quite nice.
Nvidia: refused to install their WHQL drivers because it claimed I didn't have an Nvidia card (that computer has a GeForce GT 330M). When I tried it on a different computer I managed to get all the way to creating a kernel, but the first execution crashed the drivers (the screen flickered for a while and Windows 7 said it had to restart the drivers). The second execution caused a blue screen of death.
AMD/ATI: refused to install the 32-bit SDK (I tried that since I will be using a 32-bit JVM), but the 64-bit SDK worked well. This is the only driver I've managed to execute the code on (after a restart, because at first it gave a cryptic error message when compiling). However, it doesn't seem to be able to do any implicit vectorization, and since I don't have an ATI GPU I didn't get any performance increase compared to the Java implementation. If I use vector types I might see some improvements though.
TL;DR None of the drivers seem ready for commercial use. I'm probably better off creating a JNI module with C code compiled to use SSE instructions.
First try to understand hosts & devices: http://www.streamcomputing.eu/blog/2011-07-14/basic-concept-hosts-and-devices/
Basically you can just do exactly what you described: check if a certain driver is available and if not, try the next one. What you choose first depends completely on your own preference. I would pick the device I have tested my kernel best on. In JavaCL you can pick the fastest device with JavaCL.createBestContext and CLPlatform.getBestDevice, check the host-code here: http://ochafik.com/blog/?p=501
Note that NVidia does not support CPUs via their driver; only AMD and Intel do. Targeting multiple devices (say 2 GPUs and a CPU) is also a bit more difficult.
There is no API providing what you want. However, you can do the following:
Iterate over clGetPlatformIDs and query the number of devices (clGetDeviceIDs) and the device type for each device,
and pick the platform which has both types.
Then build a map in your code that maps each device type to the list of platforms supporting it, ordered in some manner.
Finally, just get the first item in the list corresponding to CL_DEVICE_TYPE_CPU and the first item corresponding to CL_DEVICE_TYPE_GPU.
If both returned results are equal (platform_cpu == platform_gpu), then pick one of them and use it for both.
If there is a platform supporting both, you will get a match as before, since you have ordered lists. You can then also do load balancing on a single platform if you like, like what Intel has.
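A minimal sketch of that selection logic in plain C (assuming the OpenCL headers and an ICD loader are installed; link with -lOpenCL, error handling mostly omitted):

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_uint nplat = 0;
        cl_platform_id plats[16];
        clGetPlatformIDs(16, plats, &nplat);
        if (nplat > 16) nplat = 16;  /* nplat reports all available platforms */

        cl_platform_id platform_cpu = NULL, platform_gpu = NULL;
        for (cl_uint i = 0; i < nplat; i++) {
            cl_uint n = 0;
            /* Remember the first platform offering each device type. */
            if (!platform_cpu &&
                clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_CPU, 0, NULL, &n) == CL_SUCCESS && n > 0)
                platform_cpu = plats[i];
            if (!platform_gpu &&
                clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_GPU, 0, NULL, &n) == CL_SUCCESS && n > 0)
                platform_gpu = plats[i];
        }

        /* If one platform supports both types, use it for both devices. */
        printf("same platform for CPU and GPU: %s\n",
               (platform_cpu && platform_cpu == platform_gpu) ? "yes" : "no");
        return 0;
    }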
Sorry for being late to the party, but regarding Intel's implementation behaviour under JavaCL, I'm afraid you've been bitten by a JavaCL bug:
https://github.com/ochafik/nativelibs4java/issues/297
Fixed in JavaCL 1.0.0-RC2 !
Cheers
