Is Zircon still a microkernel?

Link1 says that "Zircon is composed of a kernel (source in /zircon/kernel) as well as a small set of userspace services, drivers, and libraries",
but an earlier version of the documentation (Link2) claimed that "Zircon is composed of a microkernel as well as a small set of userspace services, drivers, and libraries".
I am confused: is Zircon still a microkernel?

Zircon is inspired by microkernel architecture and applies many of those concepts, but strictly speaking it does not strive to be minimal like other microkernel implementations. For this reason, Zircon does not self-identify as a microkernel.
Zircon's architecture aligns with microkernels in that core subsystems such as device drivers, file systems, user permissions, and the network stack exist outside the kernel as modular services in user space. However, microkernels typically maintain only a small set of minimal system calls (syscalls), covering little more than memory/thread management and IPC; Zircon currently has over 150 syscalls covering a much wider functional surface area.
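
For a concrete taste of that surface, here is a minimal sketch of Zircon's channel IPC syscalls, the most microkernel-like corner of the API. It assumes the Fuchsia SDK headers and build environment, which are not shown; the handle names are illustrative.

    #include <zircon/syscalls.h>
    #include <zircon/types.h>

    int main(void) {
        zx_handle_t ours, theirs;
        /* A channel is a bidirectional message pipe with two endpoints. */
        zx_status_t status = zx_channel_create(0, &ours, &theirs);
        if (status != ZX_OK)
            return 1;

        /* Write a small message; "theirs" would normally be handed to
         * another process, which reads it with zx_channel_read(). */
        const char msg[] = "hello";
        status = zx_channel_write(ours, 0, msg, sizeof(msg), NULL, 0);

        zx_handle_close(ours);
        zx_handle_close(theirs);
        return status == ZX_OK ? 0 : 1;
    }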

Related

Why is the Windows NT kernel said to be a hybrid model?

According to Wikipedia, the Windows Kernel is a hybrid model, meaning it has both a monolithic and microkernel architecture.
But the two definitions seem like opposites: monolithic means that system services and core functionality share one space, while microkernel means they do not.
So I assume that means Windows has shared space for some system services and core functionality, while others are decoupled.
I'm trying my best to understand this, but it's very cryptic to me, even though I'm a professional software engineer.
Do you perhaps have a relatable example of where it is monolithic and where it is microkernel-like?
And to what extent is it similar to, say, Ubuntu, and to what extent is it totally different from the Ubuntu kernel, which is said to be fully monolithic?
Generally speaking, a microkernel has very few services provided by the kernel itself executing in kernel mode, while a monolithic kernel has the vast majority of services (especially drivers) running in kernel mode.
Many monolithic OSes are taking the approach of running some of their services and drivers at user level, and this is what they mean by hybrid. They might keep the network drivers completely in the kernel but run GPU drivers at user level, for example.

Possible to use OpenCL on multi-computers?

As far as I know, the answer is no. OpenCL is designed for multi-core systems.
But is there any way to use OpenCL on multiple computers (each computer being a multi-core system)? If not, are any additional tools or frameworks required?
I read some articles about distributed computing, cluster computing, grid computing... but I can't find a satisfying answer.
Any ideas will be appreciated
Thank you :)
There are two frameworks for this purpose: VirtualCL and CLara. Both packages let you work transparently with remote machines as local devices. Unfortunately, VirtualCL is only available as pre-compiled binaries without sources and CLara is not actively developed anymore.
SnuCL uses MPI and OpenCL to transparently use the cluster through the OpenCL API. It also adds a few OpenCL extensions to effectively deal with the memory objects.
It is open source. See http://aces.snu.ac.kr/Center_for_Manycore_Programming/SnuCL.html
and http://tbex.twbbs.org/~tbex/pad/SunCL.pdf
There is one more solution not mentioned above: dOpenCL.
"dOpenCL (distributed OpenCL) is a novel, uniform approach to programming distributed heterogeneous systems with accelerators. It transparently integrates the nodes of a distributed system into a single OpenCL platform. Thus, dOpenCL allows the user to run unmodified existing OpenCL applications in a heterogeneous distributed environment. Besides, it extends the OpenCL programming model to deal with individual nodes of the distributed system."
I have used VirtualCL to form a GPU cluster with 3 AMD GPUs as compute nodes and my Ubuntu Intel desktop running as the broker node. I was able to start both the broker and compute nodes.
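Whichever framework is used, the common thread is that remote devices are exposed through the standard OpenCL platform model, so ordinary enumeration code sees them as if they were local. A minimal sketch using the standard OpenCL C API (fixed-size arrays kept small just for brevity):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);

        for (cl_uint p = 0; p < num_platforms; ++p) {
            cl_device_id devices[16];
            cl_uint num_devices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           16, devices, &num_devices);
            for (cl_uint d = 0; d < num_devices; ++d) {
                char name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof(name), name, NULL);
                /* With VirtualCL, dOpenCL, etc., remote GPUs show up
                 * here exactly like local ones. */
                printf("platform %u, device %u: %s\n", p, d, name);
            }
        }
        return 0;
    }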
In addition to the various options already mentioned by other posters, here are two more open source projects that you may be interested in:
ocland (in beta stage): offers a server application and an ICD implementation that the clients can use to take advantage of local and remote devices that support OpenCL in a transparent fashion. The license is GPLv3.
COPRTHR SDK by Brown Deer Technology (currently version 1.6): this SDK, which offers an open source (GPLv3) OpenCL implementation for x86_64, ARM, Epiphany and Intel MIC, includes a "Compute Layer Remote Procedure Call" implementation. This consists of a client-side OpenCL implementation that supports RPC (libclrpc) and a server application (clrpcd). The website doesn't mention much about it, but the documentation contains a section about this CLRPC implementation.

Feasibility of using the same code on both embedded and Windows platforms

We have a program written in VBA that is running on Windows machines.
We have a very similar program written in ANSI C, using a Keil IDE and compiler that is running on an STR9x uP.
Our plans were to rewrite the VBA code in .NET using C#.
What is the feasibility of writing the shared code in C++ to be used on both systems? Obviously, the .NET framework would be off limits, but that isn't much of a concern. I'm wondering, specifically, about how labor intensive you think the compilation process might be.
This is kind of a theoretical question, I know, but thanks for any thoughts.
I do this as a general practice. I think a better question than "is it possible" is "how should I structure my code to be able to run on both an embedded system and also a PC?"
I prefer to write the code in C and structure each file like a C++ class, using static variables to make global variables private to the module. Create getter and setter functions to access the private variables. Also use function pointers, which I set at initialization of the module, for the methods the module needs to call outside of the module.
It is also easy to refactor from this structured C code to a class in C# or C++.
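A minimal sketch of that structure, using a hypothetical sensor module; all names are illustrative.

    /* sensor.h -- the module's public interface */
    typedef void (*report_fn)(int value);
    void sensor_init(report_fn report);
    void sensor_set_threshold(int t);
    int  sensor_get_threshold(void);
    void sensor_process(int raw);

    /* sensor.c -- file-scope statics act like private members */
    static int threshold;          /* private to this module        */
    static report_fn report_out;   /* outgoing call, bound at init  */

    void sensor_init(report_fn report) { report_out = report; }
    void sensor_set_threshold(int t)   { threshold = t; }
    int  sensor_get_threshold(void)    { return threshold; }

    void sensor_process(int raw) {
        /* The module never names its caller; it only calls through
         * the pointer it was given, which keeps it reusable. */
        if (report_out && raw > threshold)
            report_out(raw);
    }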
You can also use C++ directly, but using it incorrectly on an embedded system can cause problems.
You will need a hardware abstraction layer if you are accessing any hardware. I separate my code into two types: the first is code that has no reference to what it is running on; the other, which I refer to as drivers, talks to the hardware.
I use this structure for reusing modules for things like communication protocols. But more importantly, I use it for testing: I like to use gtest to unit test the modules. I can also rewrite the drivers to simulate the hardware, so the same modules run on a PC, as the sketch below shows.
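As an illustration of that split, here is a hedged sketch in which a hypothetical UART driver interface is bound at module initialization either to the real peripheral on the target or to a PC simulation that the unit tests use; all names are illustrative.

    /* The protocol module only ever sees this interface; which
     * implementation backs it is decided at initialization. */
    typedef struct {
        void (*send_byte)(unsigned char b);
        int  (*recv_byte)(unsigned char *b);   /* returns 0 if no data */
    } uart_driver_t;

    /* Target build: thin wrappers around the real peripheral registers,
     * e.g. void target_send(unsigned char b) { UART0->DR = b; } */

    /* PC build: a simulation backed by an in-memory ring buffer. */
    static unsigned char sim_buf[256];
    static unsigned sim_head, sim_tail;

    static void sim_send(unsigned char b)  { sim_buf[sim_tail++ % 256] = b; }
    static int  sim_recv(unsigned char *b) {
        if (sim_head == sim_tail) return 0;
        *b = sim_buf[sim_head++ % 256];
        return 1;
    }

    static const uart_driver_t sim_uart = { sim_send, sim_recv };
    /* protocol_init(&sim_uart);  -- the same protocol module code then
     * runs unchanged on the target and inside a PC test harness. */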
Obviously, the .NET framework would be off limits
Not necessarily true. Given sufficient ROM and RAM resources (256K/64K respectively), the .NET Micro Framework will run on your device. However, that is not necessarily a good reason to use it; there are already two other commonly used portable languages available for both your embedded target and Windows: C and C++. The target resources required for both C and C++ are minimal: C/C++ runtime start-up code can be well under 1K of code, so almost all available resources can be utilised by your application code rather than the run-time environment.
The trick to utilising common code on both platforms is abstraction. This will involve at least hardware abstraction and possibly OS abstraction if your target is using any sort of kernel or scheduler such as an RTOS or thread library.
I'd recommend designing your embedded target with a layered architecture, having at least a device layer and an application layer and, as mentioned already, possibly a system layer that deals with IPC, synchronisation and scheduling, if used. You may have other higher-layer interfaces such as networking or filesystem that would equally benefit from abstraction. Note that standard APIs such as BSD sockets or stdio already count as abstraction, so if your target uses these, you have less work to do on Windows (minor differences between BSD sockets and Winsock may still need some work).
The application layer will have no OS or hardware dependencies other than those accessible through the device and system layers. You must then implement the device and system layers on Windows as either a simulation or a remapping to services or devices available on Windows. Some RTOSes already include Windows simulators for test and development, but defining your own OS API layer that you can port between a number of native RTOSes and GPOSes will allow your application code to be ported to different targets for both simulation and real-time execution very quickly.
Where the platform differences are minor and localised, and may not justify an abstraction layer, target-specific conditional compilation may be appropriate. Compilers support predefined macros for architecture-, OS- or compiler-specific code that can be used both for this localised code and to make the abstraction-layer code itself common where there is significant similarity.
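As a hedged sketch of that kind of localised conditional compilation, the predefined _WIN32 macro can paper over the small BSD-sockets/Winsock differences mentioned above; sock_t and sock_close are illustrative names, not standard ones.

    #ifdef _WIN32
      #include <winsock2.h>               /* link with ws2_32 */
      typedef SOCKET sock_t;
      #define sock_close(s) closesocket(s)
    #else
      #include <sys/socket.h>
      #include <unistd.h>
      typedef int sock_t;
      #define sock_close(s) close(s)
    #endif

    /* Everything below is common code, identical on both platforms. */
    static void shutdown_link(sock_t s) {
        sock_close(s);
    }

    int main(void) {
    #ifdef _WIN32
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);  /* Winsock needs explicit init */
    #endif
        sock_t s = socket(AF_INET, SOCK_STREAM, 0);
        shutdown_link(s);
        return 0;
    }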

Sandboxing vs. Virtualisation

Maybe I am missing something, but aren't sandboxing and virtualisation exactly the same concept, i.e., separating the memory space for applications running in parallel? So I am wondering why they have different names; are there maybe differences in the way they are employed?
Many thanks,
Simon
These concepts address different problems: When we virtualize, we are hiding physical limitations of the machine. Sandboxing, on the other hand, sets artificial limits on access across a machine. Consider memory as a representative analogy.
Virtualization of memory is to allow every program to access every address in a 32- or 64-bit space, even when there isn't that much physical RAM.
Sandboxing of memory is to prevent one program from seeing another's data, even though they might occupy neighboring cells in memory.
The two concepts are certainly related in the common implementation of virtual memory. However, this is a convenient artifact of the implementation, since the hardware page table is only accessible by the kernel.
Consider how to implement them separately, on an x86 machine: You could isolate programs' memory using page tables without ever swapping to disk (sandboxing without virtualization). Alternatively, you could implement full virtual memory, but also give application-level access to the hardware page table so they could see whatever they wanted (virtualization without sandboxing).
There are actually three concepts that you are muddling up here. The first and foremost is provided by the OS: it separates the memory spaces of applications running in parallel, and it is called virtual memory.
In Virtual memory systems, the OS maps the memory address as seen by applications onto real physical memory. Thus memory space for applications can be separated so that they never collide.
The second is sandboxing. It is any technique you, the programmer, use to run untrusted code. If you, the programmer, are writing the OS, then from your point of view the virtual memory system you are writing is a sandboxing mechanism. If you, the programmer, are writing a web browser, then the virtual memory system, in itself, is not a sandboxing mechanism (different perspectives, you see). Instead it is a potential mechanism for you to implement your sandbox for browser plug-ins. Google Chrome is an example of a program that uses the OS's virtual memory mechanism to implement its sandboxing mechanism.
But virtual memory is not the only way to implement sandboxing. The Tcl programming language, for example, allows you to instantiate slave interpreters via the interp command. The slave interpreter is often used to implement a sandbox, since it runs in a separate global space. From the OS's point of view, the two interpreters run in the same memory space in a single process. But because, at the C level, the two interpreters never share data structures (unless explicitly programmed to), they are effectively separated.
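At that C level the separation looks like this; a hedged sketch assuming the Tcl 8.x headers and library (link with -ltcl).

    #include <stdio.h>
    #include <tcl.h>

    int main(void) {
        Tcl_Interp *master = Tcl_CreateInterp();

        /* isSafe = 1 hides file, exec and socket commands in the slave,
         * so untrusted code can be evaluated in the same process. */
        Tcl_Interp *sandbox = Tcl_CreateSlave(master, "jail", 1);

        if (Tcl_Eval(sandbox, "expr {6 * 7}") == TCL_OK)
            printf("sandboxed result: %s\n", Tcl_GetStringResult(sandbox));

        /* The sandbox cannot touch the filesystem; this fails cleanly. */
        if (Tcl_Eval(sandbox, "open /etc/passwd r") != TCL_OK)
            printf("blocked: %s\n", Tcl_GetStringResult(sandbox));

        Tcl_DeleteInterp(master);   /* also tears down its slaves */
        return 0;
    }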
Now, the third concept is virtualization, which is again separate from both virtual memory and sandboxing. Whereas virtual memory is a mechanism that, from the OS's perspective, sandboxes processes from each other, virtualisation is a mechanism that sandboxes operating systems from each other. Examples of software that does this include VMware, Parallels Desktop, Xen and the Kernel-based Virtual Machine (KVM).
Sandboxing means isolation only, while virtualization usually means simulating some sort of hardware (a virtual machine). Virtualization can happen with or without sandboxing.
Sandboxing is limiting access by a particular program. Virtualization is a mechanism that can be used to help do this, but sandboxing is achieved with other mechanisms as well, and likewise virtualization has uses besides sandboxing. Sandboxing is a "what"; virtualization is a "how".

Quick CPU ring mode protection question

I am very curious about messing with HW, but my top-level "messing" so far has been linked or inline assembler in a C program. If my understanding of the CPU and ring modes is right, I cannot directly access some low-level CPU features from a user-mode app, like disabling interrupts or changing protected-mode segments, so I must use system calls to do everything I want.
But, if I am right, drivers can run in ring 0. I actually don't know much about drivers, but this is what I am asking about. I just want to know: is learning how to write my own drivers, and then calling them, the way I should go to do what I described?
I know I could write a whole new OS (at least up to some point), but what I exactly want to do is access some low-level features of the HW from a standard Windows application. So, is a driver the way to go?
Short answer: yes.
Long answer: Managing access to low-level hardware features is exactly the job of the OS kernel, and if you only want access to a single feature there's no need to start your own OS from scratch. Most modern OSes, such as Windows, Linux, or the BSDs, allow you to add code to the kernel through kernel modules.
When writing a kernel module (or device driver), you write code that is going to be executed inside the OS kernel and will thus be running in CPU ring 0. Great power comes with great responsibility, which in this case means that you should really know what you're doing, as there will be no pre-configured OS interface to prevent you from doing the wrong things. You should therefore study the manuals of your hardware (e.g., Intel's x86 software developer's manuals, device specs, ...) as well as standard operating systems development literature, of which you will also find plenty on the web (OSDev, OSDever, OSR, Linux Device Drivers).
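For a concrete picture, here is a minimal Linux kernel module; the question is about Windows, where the driver APIs (WDM/KMDF) differ, but the idea of loading code into ring 0 is the same. This is a sketch, built against the kernel headers with a standard Kbuild makefile (obj-m := hello.o).

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    /* Everything in this module executes in kernel mode (ring 0). */
    static int __init hello_init(void)
    {
        pr_info("hello: loaded into the kernel\n");
        return 0;               /* nonzero would abort the load */
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal ring-0 example module");

Load it with insmod and watch dmesg; from there a real driver would go on to register device nodes or touch hardware registers.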
If you want to play with HW, write some programs for 16-bit real mode (or even with your own transition to protected mode). There you have to deal with ASM, BIOS interrupts, segments, video memory and a lot of other low-level stuff.
