While writing a driver, I came across the issue described below.
Consider a multithreaded application accessing the same device file through the same file descriptor. Between the calls to open and release, the driver holds some resources (say, a mutex) shared by the thread group. These resources are used during the read/write calls and are eventually given up or destroyed during release.
If one thread is using the resource inside read/write and another thread simultaneously invokes release by calling close, how does the VFS guarantee that release is not called while at least one thread is still inside read, write, or a similar call? What mechanism provides this protection?
The kernel layer above the device drivers keeps track of how many references to an open file exist and does not call the release function until all of those references have been closed. This is somewhat documented in LDD3: http://tjworld.net/books/ldd3/#TheReleaseMethod
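To illustrate, here is a minimal driver fragment (device registration omitted, all names made up, not taken from LDD3) showing why that guarantee matters: because the kernel keeps the open file referenced while a read/write on it is in progress, release runs only after the last close and after any in-flight read/write has returned.

```c
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct my_ctx {
	struct mutex lock;              /* shared by all threads using this FD */
};

static int my_open(struct inode *inode, struct file *filp)
{
	/* Per-open resource, created once for this struct file. */
	struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return -ENOMEM;
	mutex_init(&ctx->lock);
	filp->private_data = ctx;
	return 0;
}

static ssize_t my_read(struct file *filp, char __user *buf,
		       size_t len, loff_t *off)
{
	struct my_ctx *ctx = filp->private_data;

	mutex_lock(&ctx->lock);
	/* ... use the shared resource; release() cannot run concurrently ... */
	mutex_unlock(&ctx->lock);
	return 0;
}

static int my_release(struct inode *inode, struct file *filp)
{
	/* Called once, after every descriptor referring to this open file has
	 * been closed and no read()/write() on it is still executing. */
	kfree(filp->private_data);
	return 0;
}

static const struct file_operations my_fops = {
	.owner   = THIS_MODULE,
	.open    = my_open,
	.read    = my_read,
	.release = my_release,
};
```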
The Windows Antimalware Scan Interface (AMSI) provides abstractions that can be used to call the currently active virus scanner on Windows:
https://learn.microsoft.com/en-us/windows/desktop/amsi/antimalware-scan-interface-functions
There are 2 methods related to initialization:
AmsiInitialize
AmsiUninitialize
AmsiInitialize returns "a handle of type HAMSICONTEXT that must be passed to all subsequent calls to the AMSI API".
After initialization is complete, I can use AmsiScanBuffer to scan a buffer for malware.
My question:
Can I use the same context concurrently from many threads in my application, or do I need to create one per thread from which I'm going to call the methods?
Reading the documentation for AmsiUninitialize, it tells me that "when the app is finished with the AMSI API it must call AmsiUninitialize". This suggests that the context can be used for many calls, but it doesn't say anything about thread safety or concurrency.
Generally, API calls that are not specifically marked as thread-safe are not (this is usually true for any library). The easiest solution is to open an AMSI handle per thread.
(P.S. As far as I've tested, this only works with Windows Defender.)
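As a rough sketch of the per-thread approach (the application name, buffer contents, and error handling below are placeholders, not anything the API requires), each worker thread can own its own context:

```c
#include <windows.h>
#include <amsi.h>
#pragma comment(lib, "amsi.lib")

static DWORD WINAPI scan_worker(LPVOID arg)
{
    HAMSICONTEXT ctx = NULL;
    AMSI_RESULT  res = AMSI_RESULT_CLEAN;
    const char   sample[] = "buffer to scan";      /* placeholder payload */

    (void)arg;

    /* Each worker creates and destroys its own context, so the context is
     * never shared between threads. */
    if (FAILED(AmsiInitialize(L"MyApp", &ctx)))
        return 1;

    if (SUCCEEDED(AmsiScanBuffer(ctx, (PVOID)sample, sizeof(sample),
                                 L"sample.txt", NULL, &res)) &&
        AmsiResultIsMalware(res)) {
        /* handle detection */
    }

    AmsiUninitialize(ctx);
    return 0;
}
```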
I have a multi process Windows application that uses SQLite locally on the disk.
When one process creates contention on the DB, the other processes are starved: they poll for the DB and are not "notified" by the OS scheduler when the lock is released (the way they would be with a mutex).
I found a built-in VFS called unix-namedsem for VxWorks; perhaps that's what I'm looking for, only for Windows.
Is there an existing solution for my problem? Or must I implement my own VFS?
The Windows VFS does not implement Mutex locking, and there is no other such VFS for Windows.
You have to implement that VFS yourself, or simply modify your own DB functions to lock a mutex around all transactions.
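As a sketch of the second option (names and error handling are illustrative only), you could serialize all transactions across processes with a named Windows mutex:

```c
#include <windows.h>
#include <sqlite3.h>

static HANDLE g_db_mutex;               /* created once per process */

void db_lock_init(void)
{
    /* Same name in every process => one shared mutex.  Prefix with
     * "Global\\" if the processes run in different sessions. */
    g_db_mutex = CreateMutexW(NULL, FALSE, L"MyAppSqliteLock");
}

int db_exec_locked(sqlite3 *db, const char *sql)
{
    int rc;

    /* Block until the lock is free instead of polling on SQLITE_BUSY. */
    WaitForSingleObject(g_db_mutex, INFINITE);
    rc = sqlite3_exec(db, sql, NULL, NULL, NULL);
    ReleaseMutex(g_db_mutex);
    return rc;
}
```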
This is a hypothetical question. Suppose there is an application (which typically executes in user mode) that wants to access kernel data structures, read register values, and perform some kernel-level functions.
Is there a way for the kernel and/or the CPU to allow this application to perform its functions while maintaining the normal user-level/kernel-level isolation for all other applications?
To put your app in kernel space (kernel memory), or to run it in ring 0 CPU mode, you would need to do that from kernel code. In the normal state of operation you can't run an app from the kernel with those privileges (at least there is no existing API for it). It's probably possible to implement kernel code capable of this, but it would be tricky, it would undermine the whole concept of kernel-space/user-space separation, and if the app used any advanced user-space API it wouldn't work anyway.
If you are thinking about just giving your app ring 0 privileges, that won't work either: the kernel has its own stack, and because of kernel-space/user-space memory separation you wouldn't be able to call the internal kernel API.
Basically, you can achieve the same thing by writing a kernel module instead, and to run kernel code on behalf of a user-space app you can use the system call interface.
So, answering your question: no, it's not possible to run a user-space app in kernel mode so that it can use the internal kernel API.
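If the kernel-module route sounds interesting, here is a minimal, hypothetical sketch of that pattern (all names are made up): the module exposes a device node, and the application reaches kernel-level functionality through an ordinary system call (ioctl).

```c
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

static long myapp_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	/* Runs in kernel mode on behalf of the calling process: it can read
	 * kernel data structures, registers, etc., and copy results back to
	 * user space with copy_to_user(). */
	return 0;
}

static const struct file_operations myapp_fops = {
	.owner          = THIS_MODULE,
	.unlocked_ioctl = myapp_ioctl,
};

static struct miscdevice myapp_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "myapp_gate",          /* user space opens /dev/myapp_gate */
	.fops  = &myapp_fops,
};

module_misc_device(myapp_dev);
MODULE_LICENSE("GPL");
```

Access can then be limited to that one application by setting the permissions on the device node.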
A shared resource is used by two application processes, A and B. To avoid a race condition, I decided to disable context switching while executing the portion of code that deals with the shared resource, and to re-enable process switching after leaving that portion.
But I don't know how to prevent the switch to another process while the shared-resource code is executing, and how to re-enable process switching afterwards.
Or is there a better method to avoid the race condition?
But I don't know how to prevent the switch to another process while the shared-resource code is executing, and how to re-enable process switching afterwards.
You can't do this directly, but you can get what you want with kernel help: for example, by waiting on a mutex, or by using one of the other forms of IPC (interprocess communication).
If that's not "good enough", you could even make your own kernel driver that has the semantics you want. The kernel can move processes between "sleeping" and "running". But you should have good reasons why existing methods don't work before thinking about writing your own kernel driver.
Or is there a better method to avoid the race condition?
Avoiding race conditions is all about trade-offs. The kernel has many different IPC methods, each with different characteristics. Get a good book on IPC, and look into how things like Postgres scale to many processors.
For all user-space applications, and for the vast majority of kernel code, you cannot disable context switching. The reason is that context switching is not the responsibility of the application but of the operating system.
In the scenario you mention, you should use a mutex. All processes must follow the convention that before accessing the shared resource they acquire the mutex, and after they are done with the shared resource they release the mutex.
Let's say an application accessing the shared resource has acquired the mutex and is in the middle of processing it when the operating system performs a context switch, stopping the application. The OS may schedule other processes that want the shared resource, but they will sit in the waiting state until the mutex is released, and none of them will touch the shared resource. After some number of context switches the OS will again schedule the original application, which continues its processing; this repeats until the original application finally releases the mutex. Only then will some other process start accessing the shared resource, in an orderly fashion, as designed.
If you want a more authoritative and detailed explanation of the whats and whys of similar scenarios, you can watch this MIT lecture, for example.
Hope this helps.
I would suggest looking into named semaphores; see sem_overview(7). They will allow you to ensure mutual exclusion in your critical sections.
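A small sketch of that idea (the semaphore name is arbitrary): both process A and process B run the same pattern, and the kernel puts a waiting process to sleep instead of letting it spin.

```c
#include <fcntl.h>          /* O_CREAT */
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    /* Created with an initial count of 1 by whichever process gets here
     * first; later callers just open the existing semaphore. */
    sem_t *sem = sem_open("/myapp_shared_resource", O_CREAT, 0600, 1);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }

    sem_wait(sem);                      /* enter critical section */
    /* ... access the shared resource ... */
    sem_post(sem);                      /* leave critical section */

    sem_close(sem);
    return 0;
}
```

On Linux, compile with -pthread.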
My service needs to store a few bits of information (at least 20 bits or so, but I can easily make use of more) such that
it persists across service restarts, even if the service crashed or was otherwise terminated abnormally
it does not persist across a reboot
can be read and updated with very little overhead
If I store this information in the registry or in a file, it will not get automatically emptied when the system reboots.
Now, if I were on a modern POSIX system, I would use shm_open, which would create a shared memory segment which persists across process restarts but not system reboots, and I could use shm_unlink to clean it up if the persistent data somehow got corrupted.
I found MSDN: Creating Named Shared Memory and started reimplementing pieces of it within my service; this basically uses CreateFileMapping(INVALID_HANDLE_VALUE, ..., PAGE_READWRITE, ..., "Global\\my_service") instead of shm_open("/my_service", O_CREAT | O_RDWR, 0600).
However, I have a few concerns, especially centered around the lifetime of this pagefile-backed mapping. I haven't found answers to these questions in the MSDN documentation:
Does the mapping persist across reboots?
If not, does the mapping disappear when all open handles to it are closed?
If not, is there a way to remove or clear the mapping? Doesn't need to be while it's in use.
If it persists across reboots, or disappears when all references to it are gone, or cannot be reset manually, this method is useless to me.
Can you verify or find faults in these points, and/or recommend a different approach?
If there were a directory guaranteed to be cleaned out on reboot, I could save the data in a temporary file there, but that still wouldn't be ideal: under certain system loads we encounter file open/write failures (rare, under 0.01% of the time, but they do happen), and this functionality is to be used in the logging path. I would prefer not to introduce any more file operations here.
The shared memory mapping would not persist across reboots, and it will disappear when all of its handles are closed. A memory-mapping object is a kernel object, and kernel objects are always deleted when the last reference to them goes away, either explicitly via CloseHandle or when the process holding the reference exits.
Try creating a registry key with RegCreateKeyEx and REG_OPTION_VOLATILE: the data will not be preserved when the corresponding hive is unloaded, which happens at system shutdown for HKLM and at user logoff for HKCU.
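A rough sketch of that suggestion (the key path and value name are placeholders, and error handling is reduced to the essentials):

```c
#include <windows.h>

BOOL save_state(DWORD value)
{
    HKEY key;
    LONG rc = RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                              L"SOFTWARE\\MyService\\BootState",
                              0, NULL,
                              REG_OPTION_VOLATILE,  /* gone after reboot */
                              KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS)
        return FALSE;

    rc = RegSetValueExW(key, L"State", 0, REG_DWORD,
                        (const BYTE *)&value, sizeof(value));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}
```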
Sounds like maybe you want serialization instead of shared memory? If that is indeed appropriate for your application, the way you serialize will depend on your language. If you're using C++, check out Boost.Serialization; C# undoubtedly has lots of serialization options (as does Java), if that's what you're using.