What's the difference between a kernel object and an event object in Windows?

As far as I know, both of them are referred to by a HANDLE that can be manipulated by the user.
What is the difference?

Most of the APIs used to create, synchronize, and monitor threads in a multithreaded application rely on kernel objects, which are also used to manage memory and files. Kernel objects are OS resources such as processes, threads, events, mutexes, semaphores, shared memory, and files.
Except when creating or opening a kernel object, you refer to it by a HANDLE rather than by name. A HANDLE is an opaque value (32 bits on 32-bit Windows) that uniquely identifies the kernel object in your process's handle table.
"Kernel object" is the general term; an event is one specific kind of kernel object.
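For illustration, a minimal user-mode sketch (names and parameters are just an example) that creates an event kernel object, signals it, and waits on it through its HANDLE:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* CreateEventW returns a HANDLE to a brand-new event kernel object. */
        HANDLE hEvent = CreateEventW(NULL,   /* default security attributes */
                                     TRUE,   /* manual-reset */
                                     FALSE,  /* initially non-signaled */
                                     NULL);  /* unnamed */
        if (hEvent == NULL) {
            fprintf(stderr, "CreateEventW failed: %lu\n", GetLastError());
            return 1;
        }

        SetEvent(hEvent);                      /* signal the event */
        WaitForSingleObject(hEvent, INFINITE); /* returns at once: already signaled */

        CloseHandle(hEvent);                   /* release our reference */
        return 0;
    }

The same HANDLE-based pattern (create/open, use, CloseHandle) applies to every other kernel object type: processes, threads, mutexes, semaphores, file mappings, and so on.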
Refer to the MSDN documentation on Kernel Objects and Events.

Related

Is there a workqueue feature in the XNU kernel?

I need a workqueue-like facility on Mac OS X (in a kernel-mode driver) and am looking for a way to add work to a queue to be processed by a kernel thread later. Conceptually this is the same as the workqueue feature available in the Linux kernel. Is there something similar in the XNU kernel as well?
I don't think there's a direct equivalent as such, although I admit I'm not intimately familiar with the Linux side, so I'll avoid comparing and just tell you about what's available on macOS/xnu.
I/O Kit IOWorkLoops
If you're building an I/O Kit driver, and especially if you're writing a secondary interrupt handler, you'll be using IOWorkLoops. Interrupts are abstracted by IOEventSource objects, which schedule secondary interrupt handlers to run on the driver's IOWorkLoop.
Each IOWorkLoop wraps one kernel thread and also provides a serialisation/locking mechanism for resources shared with that thread. All jobs submitted to a workloop, whether explicitly through an IOCommandGate or directly on the workloop object, or as a result of an IOEventSource event, will be serialised. Note that IOCommandGate jobs run synchronously on the calling thread, not the workloop thread.
As always with macOS/OSX internals, you will want to look at the header file comments and possibly the implementation in the xnu source for details. I personally find IOWorkLoops a bit clumsy for some tasks, but if you're dealing with PCI devices, etc. you don't really have a choice.
thread_call
A more lightweight background-work mechanism is the thread_call API. It's defined in <kern/thread_call.h> and supports running functions on an OS-managed background thread, optionally after a delay or with a specific priority. This is probably closer to what you know from Linux and has a fairly straightforward API, but it is not suitable for secondary interrupt handlers.
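A minimal sketch of deferring work with thread_call (the 100 ms delay, my_work, g_call, and the lack of error handling are illustrative, not prescriptive):

    #include <kern/thread_call.h>
    #include <kern/clock.h>
    #include <mach/clock_types.h>

    static thread_call_t g_call;  /* module-scope handle to the pending call */

    /* Runs later on an OS-managed background thread. */
    static void my_work(thread_call_param_t param0, thread_call_param_t param1)
    {
        /* ... deferred work goes here; param0 was bound at allocation time ... */
    }

    static void schedule_work(void)
    {
        uint64_t deadline;

        g_call = thread_call_allocate(my_work, /* param0 */ NULL);
        if (g_call == NULL)
            return;

        /* Run roughly 100 ms from now. */
        clock_interval_to_deadline(100, NSEC_PER_MSEC, &deadline);
        thread_call_enter1_delayed(g_call, /* param1 */ NULL, deadline);
    }

    static void teardown(void)
    {
        if (g_call != NULL) {
            thread_call_cancel(g_call);  /* cancel if still pending */
            thread_call_free(g_call);
            g_call = NULL;
        }
    }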

Is it possible to allow a particular user-level application to run in kernel-mode?

This is a hypothetical question. Suppose there is an application (which typically executes in user mode) that wants to access kernel data structures, read register values, and perform some kernel-level functions.
Is there a way for the kernel and/or the CPU to allow this application to perform its functions while maintaining the normal user-level/kernel-level isolation for every application except this one?
To put your app in kernel space (kernel memory), or to run it in ring 0 CPU mode, you would need to do so from kernel code. In the normal state of operation you can't launch an app from the kernel with those privileges (at least there is no existing API for it). It's probably possible to implement kernel code capable of this, but it would be tricky, it would undermine the whole concept of kernel-space/user-space separation, and if the app used any non-trivial user-space API it wouldn't work anyway.
If you are thinking of simply giving your app ring 0 privileges, that won't work either: the kernel has its own stack, and because of kernel-space/user-space memory separation you would not be able to call the internal kernel API.
Basically, you can achieve the same thing by writing a kernel module instead. And to run kernel code on behalf of a user-space app, you can use the system call interface.
So, answering your question: no, it's not possible to run a user-space app in kernel mode so that it can use the internal kernel API.
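To illustrate the kernel-module route, here is a minimal sketch of a loadable Linux module (module name and messages are arbitrary); everything inside it already runs in kernel mode, which is the supported way to get the access the question asks about:

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    static int __init demo_init(void)
    {
        pr_info("demo: loaded; this code runs in kernel mode\n");
        return 0;
    }

    static void __exit demo_exit(void)
    {
        pr_info("demo: unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

A user-space application would then talk to such a module through the system call interface (e.g. a char device plus ioctl) rather than running in ring 0 itself.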

Avoid application (process) switching for a shared resource in Linux

A shared resource is used by two applications, process A and process B. To avoid a race condition, I decided to disable context switching while executing the portion of code that deals with the shared resource, and to re-enable process switching after leaving that portion.
But I don't know how to prevent the system from switching to another process while the shared-resource code is executing, and how to re-enable process switching afterwards.
Or is there a better method to avoid the race condition?
But I don't know how to prevent the system from switching to another process while the shared-resource code is executing, and how to re-enable process switching afterwards.
You can't do this directly, but you can get what you want with the kernel's help: for example, by waiting on a mutex, or by using one of the other IPC (interprocess communication) mechanisms.
If that's not "good enough", you could even make your own kernel driver that has the semantics you want. The kernel can move processes between "sleeping" and "running". But you should have good reasons why existing methods don't work before thinking about writing your own kernel driver.
Or is there a better method to avoid the race condition?
Avoiding race conditions is all about trade-offs. The kernel has many different IPC methods, each with different characteristics. Get a good book on IPC, and look into how things like Postgres scale to many processors.
For all user-space applications, and for the vast majority of kernel code, you simply can't disable context switching: context switching is the responsibility of the operating system, not the application.
In the scenario you mention, you should use a mutex. All processes must follow the convention that they acquire the mutex before accessing the shared resource and release it once they are done.
Let's say an application has acquired the mutex and is processing the shared resource when the operating system performs a context switch, suspending it mid-way. The OS may schedule other processes that want the shared resource, but they will sit in a waiting state, blocked on the mutex, and none of them will touch the resource. After some number of context switches the OS will schedule the original application again, and it will continue where it left off. This repeats until the original application finally releases the mutex, at which point some other process can start accessing the shared resource in an orderly fashion, as designed.
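To make that convention concrete, here is a minimal sketch of a mutex that two unrelated processes can share; the shared-memory name "/demo_shm" is arbitrary, and a real design must decide which process performs the one-time initialisation:

    #include <pthread.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Map a pthread mutex into shared memory so both A and B can lock it.
       Compile with -pthread; link with -lrt on older glibc. */
    static pthread_mutex_t *setup_shared_mutex(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            return NULL;
        if (ftruncate(fd, sizeof(pthread_mutex_t)) != 0) {
            close(fd);
            return NULL;
        }

        pthread_mutex_t *m = mmap(NULL, sizeof(*m), PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
        close(fd);
        if (m == MAP_FAILED)
            return NULL;

        /* First process only: mark the mutex as usable across processes. */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return m;
    }

Each process then brackets its shared-resource code with pthread_mutex_lock(m) and pthread_mutex_unlock(m).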
If you want a more authoritative and detailed explanation of the whats and whys of similar scenarios, you can watch this MIT lesson, for example.
Hope this helps.
I would suggest looking into named semaphores; see sem_overview(7). They will allow you to ensure mutual exclusion in your critical sections.
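A minimal sketch with a named semaphore used as a cross-process mutex (the name "/demo_lock" is arbitrary; error handling abbreviated):

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    int main(void)
    {
        /* O_CREAT with an initial value of 1 makes this a binary
           (mutex-like) semaphore; every cooperating process opens
           it by name. */
        sem_t *lock = sem_open("/demo_lock", O_CREAT, 0600, 1);
        if (lock == SEM_FAILED) {
            perror("sem_open");
            return 1;
        }

        sem_wait(lock);   /* enter critical section */
        /* ... access the shared resource here ... */
        sem_post(lock);   /* leave critical section */

        sem_close(lock);
        /* sem_unlink("/demo_lock") once no process needs it any more. */
        return 0;
    }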

READ/WRITE and RELEASE handling in Linux device driver against multithreaded application

While writing a driver, I came across the issue described below.
Consider a multithreaded application accessing the same device file through the same FD. Between the calls to OPEN and RELEASE, some resources (say, a mutex) are held jointly by the thread group. These resources are used during the READ/WRITE calls and are eventually given up or destroyed during RELEASE.
If one thread is using the resource inside READ/WRITE while another thread simultaneously invokes RELEASE by calling close, how does the VFS ensure that RELEASE is not called while at least one thread is still inside READ, WRITE, or the like? What mechanism provides this protection?
The kernel layer above the device drivers keeps track of how many references to an open file exist and does not call the release function until all of those references have been closed. This is somewhat documented in LDD3: http://tjworld.net/books/ldd3/#TheReleaseMethod
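As an illustration (demo_open/demo_release and struct demo_state are hypothetical names), a driver can therefore tear down per-open state in its release handler without racing against in-flight reads and writes:

    #include <linux/fs.h>
    #include <linux/slab.h>
    #include <linux/mutex.h>

    struct demo_state {          /* per-open state shared by the thread group */
        struct mutex lock;
    };

    static int demo_open(struct inode *inode, struct file *filp)
    {
        struct demo_state *st = kzalloc(sizeof(*st), GFP_KERNEL);
        if (!st)
            return -ENOMEM;
        mutex_init(&st->lock);
        filp->private_data = st; /* visible to every thread using this fd */
        return 0;
    }

    static int demo_release(struct inode *inode, struct file *filp)
    {
        /* The VFS calls this only after the file's reference count drops
           to zero, so no thread can still be inside read/write here. */
        kfree(filp->private_data);
        return 0;
    }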

Windows kernel ReadProcessMemory() / WriteProcessMemory()?

It's simple and straightforward in user mode thanks to those APIs.
How do you read/write a specified process's user-space memory from a Windows kernel module?
The driver's target platform is Windows XP/2003.
Use NtWriteVirtualMemory / NtReadVirtualMemory to read/write other processes' memory - you will need to open a handle to the process first.
Note that if you're already in the process, you can just write directly - for example if you're responding to a DeviceIoControl request from a process you can write directly to user-mode addresses and they will be in the address space of the process that called you.
I'm also starting out in the world of Windows drivers, and from what I've read, the XxxProcessMemory APIs call NtXxxVirtualMemory in ntdll (R3, user mode), which then traps into the kernel-mode (R0) implementation of the same call.
I believe that from kernel mode you should use ZwXxxVirtualMemory.
In the kernel, ZwXxx routines are just wrappers around the NtXxx ones; they tell the kernel that the caller is a kernel-mode component rather than a user application. When a call comes from user mode, the kernel performs additional security checks.
So, use ZwXxx when in the kernel.
An alternative approach for reading/writing memory from/to another process is:
1. obtain the address of its process object (PsLookupProcessByProcessId),
2. switch the current thread into its address space (KeStackAttachProcess),
3. perform the operation (read/write...),
4. switch the address space back (KeUnstackDetachProcess),
5. drop the reference taken in step 1 (ObDereferenceObject).
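A sketch of that sequence in a WDM driver (the function name and parameters are hypothetical; a real driver must validate the user address with ProbeForRead inside a __try/__except block and handle partial copies):

    #include <ntddk.h>

    NTSTATUS ReadOtherProcess(HANDLE Pid, PVOID UserAddress,
                              PVOID KernelBuffer, SIZE_T Size)
    {
        PEPROCESS process;
        KAPC_STATE apc;
        NTSTATUS status;

        /* Step 1: resolve the PID to a process object; takes a reference. */
        status = PsLookupProcessByProcessId(Pid, &process);
        if (!NT_SUCCESS(status))
            return status;

        /* Step 2: switch the current thread into the target address space. */
        KeStackAttachProcess((PRKPROCESS)process, &apc);

        /* Step 3: perform the operation; KernelBuffer must be nonpaged. */
        RtlCopyMemory(KernelBuffer, UserAddress, Size);

        /* Step 4: switch the address space back. */
        KeUnstackDetachProcess(&apc);

        /* Step 5: drop the reference taken in step 1. */
        ObDereferenceObject(process);
        return STATUS_SUCCESS;
    }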
