Why is the Windows NT kernel said to be a hybrid model?

According to Wikipedia, the Windows kernel is a hybrid model, meaning it combines monolithic and microkernel architecture.
But the two definitions are opposites: in a monolithic kernel, system services and core functionality share one address space; in a microkernel, they do not.
So I assume Windows keeps some system services and core functionality in shared kernel space, while others are decoupled from it.
I'm trying my best to understand this, but it's very cryptic to me, even though I'm a professional software engineer.
Could you give a relatable example of where it is monolithic and where it is microkernel?
And to what extent is it similar to, say, Ubuntu, and to what extent is it totally different from Ubuntu's kernel (Linux), which is said to be fully monolithic?

Generally speaking, a microkernel has very few services provided by the kernel itself, executing in kernel mode, while a monolithic kernel has the vast majority of services (especially drivers) running in kernel mode.
Many monolithic OSes are taking the approach of running some of their services and drivers at user level, and this is what they mean by hybrid. They might keep the network drivers completely in the kernel but run GPU drivers at user level, for example.

Related

Is Zircon still a microkernel?

Link1 says that "Zircon is composed of a kernel (source in /zircon/kernel) as well as a small set of userspace services, drivers, and libraries",
but an earlier version, Link2, claimed that "Zircon is composed of a microkernel as well as a small set of userspace services, drivers, and libraries".
I am confused: is Zircon still a microkernel?
Zircon is inspired by microkernel architecture and applies many of those concepts, but strictly speaking it does not strive to be minimal like other microkernel implementations. For this reason, Zircon does not self-identify as a microkernel.
Zircon's architecture aligns with microkernels in that core subsystems such as device drivers, file systems, user permissions, or the network stack exist outside the kernel as modular services in user space. However, microkernels often maintain a handful of minimal system calls (syscalls) covering memory/thread management and IPC. Zircon currently has over 150 syscalls covering a wider functional surface area.
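To make the "wider surface area" point concrete, here is a small C sketch of Zircon's syscall style. It only builds against the Fuchsia SDK headers; the point is that IPC channels are kernel objects reached directly via syscalls, in line with microkernel thinking, even though the overall syscall set is far larger than a classic microkernel's.

```c
/* Sketch only: requires the Fuchsia SDK headers; not buildable elsewhere. */
#include <zircon/syscalls.h>
#include <zircon/types.h>

void channel_demo(void)
{
    zx_handle_t ours, theirs;
    /* Channels, the basic IPC primitive, are kernel objects
       created and used directly through syscalls. */
    if (zx_channel_create(0, &ours, &theirs) == ZX_OK) {
        const char msg[] = "hello";
        zx_channel_write(ours, 0, msg, sizeof msg, NULL, 0);
        zx_handle_close(ours);
        zx_handle_close(theirs);
    }
}
```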

How can I access my system resources without operating system intermediation?

I want to access system resources such as the CPU without using OS system calls.
Is there any way to make this possible?
The only way to access the hardware directly on most modern operating systems, Linux and Windows included, is via kernel code. Linux Device Drivers is an excellent starting point for writing such code on Linux, even if it is a bit dated.
Otherwise, the OS provides various I/O facilities and controls the allocation of resources to user applications through the system call interface. The system call interface is, in its basic concept, omnipresent among all operating systems that have some sort of separation between kernel and user code. Software interrupts, or dedicated instructions such as syscall/sysenter, are the standard way to implement system calls on current hardware.
You need a system call to allocate the slightest amount of memory and even to read or write a single character. Not to mention that even a program that does absolutely nothing generally needs a few system calls just to be loaded.
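As a concrete illustration (Linux-specific, using glibc's syscall(2) wrapper on x86-64), even a program that bypasses stdio entirely still ends up in the kernel through the syscall interface:

```c
/* Minimal Linux x86-64 example: invoking the write and exit_group
   system calls directly via the syscall(2) wrapper, bypassing stdio.
   Even this "direct" route still goes through the kernel's syscall
   interface; there is no way around it from user mode. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello via raw syscall\n";
    /* SYS_write: file descriptor 1 is stdout */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    /* SYS_exit_group terminates the whole process */
    syscall(SYS_exit_group, 0);
    return 0; /* not reached */
}
```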
You could gain more direct access to the hardware if you used DOS or an exokernel design.
But why would you want to do that anyway? Modern hardware is far from trivial to work with directly.

High-performance runtime

It’s the first time I’ve submitted a question in this forum.
I’m posting a general question; I don’t have to develop an application for a specific purpose.
After a lot of googling I still haven’t found a language/runtime/script engine/virtual machine that matches these 5 requirements:
memory allocation of variables/values or objects cleaned up at run time (e.g. à la C++ delete or C free)
the language (and consequently the program) is a script or is pseudo-compiled à la byte code, portable across the main operating systems (Windows, Linux, *BSD, Solaris) and platforms (32/64-bit)
native use of multicore (engine/runtime)
no limit on heap usage
a library for networking
The programming language for building applications that run on this engine can be anything (the paradigm is not important).
I hope that this post won’t stir up a holy war, but I'd like to focus on engine behavior during program execution.
Sorry for my bad English.
Luke
I think Erlang might fit your requirements:
most data is either allocated in local scopes, and therefore deleted immediately after use, or contained in library-powered permanent storage such as ETS, DETS or Mnesia. There is garbage collection, though, but the paradigm of the language makes it much less critical.
the Erlang compiler compiles source code to byte code for the BEAM virtual machine, which, unlike Java's, is register-based and thus much faster. The VM is available for:
Solaris (including 64 bit)
BSD
Linux
OSX
TRU64
Windows NT/2000/2003/XP/Vista/7
VxWorks
Erlang has been designed for distributed systems, concurrency and reliability from day one.
Erlang's heap grows with your demand for it; it is initially limited and expanded automatically (there are numerous tweaks you can use to configure this on a per-VM basis).
Erlang comes from a networking background and provides tons of libraries, from IP up to higher-level protocols.

Feasibility of using the same code on both embedded and Windows platforms

We have a program written in VBA that is running on Windows machines.
We have a very similar program written in ANSI C, using the Keil IDE and compiler, that is running on an STR9x microprocessor.
Our plans were to rewrite the VBA code in .NET using C#.
What is the feasibility of writing the shared code in C++ to be used on both systems? Obviously, the .NET framework would be off limits, but that isn't much of a concern. I'm wondering, specifically, about how labor intensive you think the compilation process might be.
This is kind of a theoretical question, I know, but thanks for any thoughts.
I do this as a general practice. I think a better question than "is it possible" is "how should I structure my code so that it can run on both an embedded system and a PC?"
I prefer to write the code in C and structure each file like a C++ class, using static variables to make would-be global variables private to the module. Create getter and setter functions to access the private variables. Also use function pointers, set when the module is initialized, for the methods the module needs to call outside of itself.
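A minimal sketch of that structure; the sensor module and all of its names are invented for illustration:

```c
/* sensor.c -- a hypothetical module structured like a class:
   static file-scope variables act as private members, and a
   function pointer injected at initialization decouples the
   module from whatever it needs to call outside itself. */
#include <stdint.h>
#include <stddef.h>

/* "private member": invisible outside this translation unit */
static int32_t offset = 0;

/* "method" the module calls out through; set at initialization */
static void (*report)(int32_t value) = NULL;

void sensor_init(void (*report_fn)(int32_t value))
{
    report = report_fn;
}

/* getter and setter instead of a global variable */
void sensor_set_offset(int32_t value) { offset = value; }
int32_t sensor_get_offset(void)       { return offset; }

void sensor_process(int32_t raw)
{
    if (report != NULL)
        report(raw + offset);   /* call out via the injected pointer */
}
```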
It is also easy to refactor from the above structured C code to a class in C# or C++.
You can also use C++ directly, but using it incorrectly on an embedded system can cause problems.
You will need a hardware abstraction layer if you are accessing any hardware. I separate my code into two types: code that has no reference to what it is running on, and code I refer to as drivers.
I use this approach to reuse modules for things like communication protocols. But more importantly, I use it for testing: I like to unit test the modules with gtest, and I can rewrite the drivers to simulate the hardware so the same modules run on a PC.
"Obviously, the .NET framework would be off limits"
Not necessarily true. Given sufficient ROM and RAM resources (256K/64K respectively), the .NET Micro Framework will run on your device. However, that is not necessarily a good reason to use it; there are already two other commonly used portable languages available for both your embedded target and Windows: C and C++. The target resources required for both are minimal: C/C++ runtime start-up code can be well under 1K, so almost all available resources can be used by your application code rather than the run-time environment.
The trick to utilising common code on both platforms is abstraction. This will involve at least hardware abstraction and possibly OS abstraction if your target is using any sort of kernel or scheduler such as an RTOS or thread library.
I'd recommend designing your embedded target with a layer architecture, having at least a device layer and an application layer and, as mentioned already, possibly a system layer that deals with IPC, synchronisation and scheduling, if used. You may have other higher-layer interfaces, such as networking or a filesystem, that would equally benefit from abstraction. Note that standard APIs such as BSD sockets or stdio already count as abstraction, so if your target uses these, you have less work to do on Windows (minor differences between BSD sockets and Winsock may still need some work).
The application layer will have no OS or hardware dependencies other than those accessible through the device and system layers. You must then implement the device and system layers on Windows as either a simulation or a remapping to services or devices available on Windows. Some RTOSes already include Windows simulators for test and development, but defining your own OS API layer that you can port between a number of RTOSes and general-purpose OSes will allow your application code to be ported to different targets, for both simulation and real-time execution, very quickly.
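As an illustration of such a device-layer interface, here is a hypothetical UART abstraction. The names are invented: on the target it would be implemented against the STR9 peripheral registers, while on Windows it could be remapped to a COM port or an in-memory fake for testing.

```c
/* uart_hal.h -- hypothetical device-layer interface (names invented).
   The application layer includes only this header; the implementation
   behind it differs per platform:
     - embedded target: STR9 peripheral registers
     - Windows:         COM port, pipe, or in-memory fake for tests */
#ifndef UART_HAL_H
#define UART_HAL_H

#include <stddef.h>

int  uart_open(int port);                               /* 0 on success  */
int  uart_write(int port, const void *buf, size_t len); /* bytes written */
int  uart_read(int port, void *buf, size_t len);        /* bytes read    */
void uart_close(int port);

#endif /* UART_HAL_H */
```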
Where the platform differences are minor and localised, and may not justify an abstraction layer, target-specific conditional compilation may be appropriate. Compilers provide predefined macros identifying the architecture, OS or compiler; these can be used both for this localised code and to make the abstraction-layer code itself common where there is significant similarity.
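For instance (macro names vary by toolchain, so treat these as illustrative):

```c
/* Localised target-specific code selected via predefined macros.
   _WIN32 and __arm__ are common, but check your compiler's actual
   predefined set (e.g. the Keil toolchain defines its own). */
#if defined(_WIN32)
    /* Windows build: back the device layer with Win32 calls */
#elif defined(__arm__)
    /* embedded build: talk to the STR9 peripheral registers */
#else
#   error "unsupported target"
#endif
```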

Sandboxing vs. Virtualisation

Maybe I am missing something, but aren't sandboxing and virtualisation exactly the same concept, i.e., separating the memory space for applications running in parallel? So I am wondering why they have different names; are there perhaps differences in the way they are employed?
Many thanks,
Simon
These concepts address different problems: When we virtualize, we are hiding physical limitations of the machine. Sandboxing, on the other hand, sets artificial limits on access across a machine. Consider memory as a representative analogy.
Virtualization of memory is to allow every program to access every address in a 32- or 64-bit space, even when there isn't that much physical RAM.
Sandboxing of memory is to prevent one program from seeing another's data, even though they might occupy neighboring cells in memory.
The two concepts are certainly related in the common implementation of virtual memory. However, this is a convenient artifact of the implementation, since the hardware page table is only accessible by the kernel.
Consider how to implement them separately, on an x86 machine: You could isolate programs' memory using page tables without ever swapping to disk (sandboxing without virtualization). Alternatively, you could implement full virtual memory, but also give application-level access to the hardware page table so they could see whatever they wanted (virtualization without sandboxing).
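One way to see both halves from user space is the following Linux-flavoured sketch (compile with -no-pie so ASLR does not move the address between runs):

```c
/* Run this twice concurrently: both processes can report the same
   virtual address for `secret` (virtualization: each one sees its
   own full address space), yet neither can read the other's value
   (sandboxing: the page tables map the same virtual address to
   different physical frames). */
#include <stdio.h>
#include <unistd.h>

static int secret;

int main(void)
{
    secret = (int)getpid();  /* per-process private value */
    printf("pid %d: &secret = %p, secret = %d\n",
           (int)getpid(), (void *)&secret, secret);
    return 0;
}
```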
There are actually three concepts that you are muddling up here. The first and foremost is provided by the OS: it separates the memory spaces of applications running in parallel, and it is called virtual memory.
In Virtual memory systems, the OS maps the memory address as seen by applications onto real physical memory. Thus memory space for applications can be separated so that they never collide.
The second is sandboxing. It is any technique you, the programmer, use to run untrusted code. If you, the programmer, are writing the OS, then from your point of view the virtual memory system you are writing is a sandboxing mechanism. If you, the programmer, are writing a web browser, then the virtual memory system, in itself, is not a sandboxing mechanism (different perspectives, you see). Instead it is a potential mechanism for you to implement your sandbox for browser plug-ins. Google Chrome is an example of a program that uses the OS's virtual memory mechanism to implement its sandboxing mechanism.
But virtual memory is not the only way to implement sandboxing. The Tcl programming language, for example, allows you to instantiate slave interpreters via the interp command. A slave interpreter is often used to implement a sandbox, since it runs in a separate global scope. From the OS's point of view the two interpreters run in the same memory space in a single process. But because, at the C level, the two interpreters never share data structures (unless explicitly programmed to), they are effectively separated.
Now, the third concept is virtualization, which is again separate from both virtual memory and sandboxing. Whereas virtual memory is a mechanism that, from the OS's perspective, sandboxes processes from each other, virtualisation is a mechanism that sandboxes operating systems from each other. Examples of software that do this include VMware, Parallels Desktop, Xen and the Kernel-based Virtual Machine (KVM).
Sandboxing means isolation only, while virtualization usually means simulating some sort of hardware (a virtual machine). Virtualization can happen with or without sandboxing.
Sandboxing is limiting access by a particular program. Virtualization is a mechanism that can be used to help do this, but sandboxing is achieved with other mechanisms as well, and likewise virtualization has uses besides sandboxing. Sandboxing is a "what"; virtualization is a "how".
