A shared resource is used by two application processes, A and B. To avoid a race condition, I decided to disable context switching while executing the portion of code that deals with the shared resource, and to enable process switching again after exiting that shared portion.
But I don't know how to prevent a switch to another process while executing the shared-resource part, and how to enable process switching again after exiting that portion.
Or is there a better method to avoid the race condition?
Regards,
Learner
But I don't know how to prevent a switch to another process while executing the shared-resource part, and how to enable process switching again after exiting that portion.
You can't do this directly. You can get what you want with kernel help, for example by waiting on a mutex, or by using one of the other ways to do IPC (interprocess communication).
If that's not "good enough", you could even make your own kernel driver that has the semantics you want. The kernel can move processes between "sleeping" and "running". But you should have good reasons why existing methods don't work before thinking about writing your own kernel driver.
Or is there a better method to avoid the race condition?
Avoiding race conditions is all about trade-offs. The kernel has many different IPC methods, each with different characteristics. Get a good book on IPC, and look into how things like Postgres scale to many processors.
For all user-space applications, and the vast majority of kernel code, you cannot disable context switching. The reason is that context switching is not the responsibility of the application but of the operating system.
In the scenario you describe, you should use a mutex. All processes must follow the convention that before accessing the shared resource they acquire the mutex, and after they are done with the shared resource they release it.
Let's say an application has acquired the mutex and is doing some processing of the shared resource, and the operating system performs a context switch, stopping that application mid-way. The OS can schedule other processes that want to access the shared resource, but they will sit in the waiting state until the mutex is released, and none of them will touch the shared resource. After some number of context switches, the OS will again schedule the original application, which will continue processing the shared resource; this repeats until the original application finally releases the mutex. Only then will some other process start accessing the shared resource, in an orderly fashion, as designed.
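As a concrete illustration (not taken from your code), here is a minimal sketch of that convention using a POSIX mutex placed in shared memory so both processes can lock it; the segment name /my_shared_lock, the single-initializer shortcut, and the missing error handling are all simplifications:

    #include <fcntl.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Both processes map the same named shared memory segment. */
        int fd = shm_open("/my_shared_lock", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(pthread_mutex_t));
        pthread_mutex_t *mtx = mmap(NULL, sizeof(*mtx), PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0);

        /* In a real program, only the process that created the segment
         * would initialize the mutex. */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(mtx, &attr);

        pthread_mutex_lock(mtx);    /* acquire before touching the shared resource */
        /* ... work on the shared resource; context switches are harmless here ... */
        pthread_mutex_unlock(mtx);  /* release so the other process can proceed */
        return 0;
    }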
If you want a more authoritative and detailed explanation of the whats and whys of similar scenarios, you can watch this MIT lecture, for example.
Hope this helps.
I would suggest looking into named semaphores; see sem_overview(7). This will allow you to ensure mutual exclusion in your critical sections.
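For example, a minimal sketch of a named semaphore used as a cross-process lock; the name /my_service_lock is an assumption and error handling is omitted:

    #include <fcntl.h>
    #include <semaphore.h>

    /* compile with -pthread on Linux */
    int main(void)
    {
        /* Initial value 1 makes the named semaphore behave like a mutex. */
        sem_t *sem = sem_open("/my_service_lock", O_CREAT, 0600, 1);

        sem_wait(sem);              /* enter the critical section */
        /* ... access the shared resource ... */
        sem_post(sem);              /* leave the critical section */

        sem_close(sem);
        return 0;
    }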
I work on a module for IPsec in Linux. Consider two different situations in which code from my module will be executed.
Executing from process context: an application generates some traffic to transmit over the network and calls some syscall to transfer the data; the process then switches to kernel space and the packet goes through the Linux network subsystem, where somewhere along the way my module is executed, and everything finishes once the work has been handed off to the network card. All these steps are performed in process context, and at any moment the scheduler can switch from one process to another. This is the first case in which my module is used: from process context.
Executing from softirq context: when the network card receives a packet it generates a hardware interrupt, which "prepares" the appropriate softirq to run. The packet then goes through the Linux network subsystem (including my module) until some application receives it. These steps are performed in softirq context and can be interrupted only by a hardware interrupt, not by the scheduler.
The question is: how can I programmatically determine, inside the module, which context it is executing in? It could be some field of struct task_struct, some syscall, or something else. I couldn't find it by myself.
It is considered bad practice to make a function's control flow depend on whether it is executed in interrupt context or not.
A quote from Linux kernel developer Andrew Morton:
The consistent pattern we use in the kernel is that callers keep track of whether they are running in a schedulable context and, if necessary, they will inform callees about that. Callees don't work it out for themselves.
However, there are several functions (macros) defined in linux/preempt.h for detecting the current scheduling context: in_atomic() and in_interrupt(). But see that LWN article about their usage.
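With that caveat in mind, a minimal sketch of how such a check could look inside a module (illustrative only; note that in_softirq() also reports true when bottom halves are merely disabled, which is one reason this approach is discouraged):

    #include <linux/interrupt.h>
    #include <linux/preempt.h>
    #include <linux/printk.h>

    static void my_module_log_context(void)
    {
            if (in_softirq())
                    pr_debug("softirq (bottom-half) context\n");
            else if (in_interrupt())
                    pr_debug("hard-irq or other atomic interrupt context\n");
            else
                    pr_debug("process context - sleeping is allowed\n");
    }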
While writing a driver, I came across the issue described below.
Consider a multithreaded application accessing the same device file through the same FD. Between the calls to OPEN and RELEASE, there are some resources (say, a mutex) held jointly by the thread group. These resources are used during the READ/WRITE calls and are eventually given up or destroyed during RELEASE.
If one thread is accessing the resource during READ/WRITE and another thread simultaneously invokes RELEASE by calling close, how does the VFS ensure that RELEASE is not called while at least one thread is still in READ, WRITE, or the like? What mechanism provides this protection?
The kernel layer above the device drivers keeps track of how many references to an open file exist and does not call the release function until all of those references have been closed. This is somewhat documented in LDD3: http://tjworld.net/books/ldd3/#TheReleaseMethod
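To make that reference counting concrete, here is a hedged sketch of the open/release pair in a driver (the names are illustrative, not taken from your code): the VFS only invokes .release after every reference to the struct file, including those held by in-flight read/write calls, has been dropped.

    #include <linux/fs.h>
    #include <linux/module.h>

    static int my_open(struct inode *inode, struct file *filp)
    {
            /* called once per open(); dup()/fork() merely add references */
            return 0;
    }

    static int my_release(struct inode *inode, struct file *filp)
    {
            /* called only after the last reference to filp has been dropped,
             * so it is safe to tear down per-open resources here */
            return 0;
    }

    static const struct file_operations my_fops = {
            .owner   = THIS_MODULE,
            .open    = my_open,
            .release = my_release,
    };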
Before the application terminates its execution, COM must be shut down again. (Failure to shut down COM could result in execution errors when another program attempts to use COM services.)
The above quote implies that, right?
No, it doesn't.
If you fail to properly release all references to an out-of-process COM server and to correctly shut down COM, that instance of the service can be left in an odd state (everything should be OK after releasing all references, but sometimes COM might cache part of the out-of-process marshalling layer).
An out of process COM service can be designed to have separate component instances for each client (within or across services) that are completely independent (even if hosted in the same process), in which case it is hard to see how a failure of one client would affect other instances (other than wasting memory on instances until COM finally times them out). If the instances share state they can of course interfere even if the clients operate perfectly to the rules.
It is rather important that you quote the source of that quote so we can get the context. As near as I can see, you got that from a book about DirectShow programming. What it actually refers to is the need to call CoUninitialize().
Yes, that's kinda important. A thread should call CoInitializeEx() to initialize the COM infrastructure before it starts using any of the COM API functions. You really should call CoUninitialize() when that thread ends so everything is properly cleaned up, typically at the end of your program's main() function. Failure to do so may make another app fail when it finds a registered class factory that is in fact dead.
This otherwise has nothing to do with a COM out-of-process server having to restrict itself in any way. You specify the sharing mode with the REGCLS argument to CoRegisterClassObject(). Of course, a server should not exit and call CoUninitialize() until all of its objects have been released.
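For reference, a minimal sketch of the per-thread initialize/uninitialize pairing described above (apartment model and error handling kept trivial):

    #include <windows.h>
    #include <objbase.h>   /* CoInitializeEx / CoUninitialize; link with ole32.lib */

    int main(void)
    {
        HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
        if (FAILED(hr))
            return 1;

        /* ... create COM objects, use them, and Release() every interface pointer ... */

        CoUninitialize();  /* balances the successful CoInitializeEx on this thread */
        return 0;
    }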
Here: http://download.oracle.com/docs/html/A95907_01/diff_uni.htm#1077398
I found that on Windows Oracle is thread-based, while on Unix it is process-based. Why is that?
What's more, there are many Oracle processes http://www.adp-gmbh.ch/ora/concepts/processes/index.html regardless of the system.
Why are the log writer and DB writer implemented as processes... while query execution is done using threads (Windows) or processes (Unix)?
Oracle makes use of an SGA shared memory area to store information that is (and has to be) accessible to all sessions/transactions. For example, when a row is locked, that lock is in memory (as an attribute of the row) and all the other transactions need to see that it is locked.
On Windows, a thread cannot access another process's memory:
threads cannot access memory that belongs to another process, which protects a process from being corrupted by another process.
As such, on Windows Oracle must be a single process with multiple threads.
On operating systems that support sharing memory between processes, it is less work for Oracle to use a multi-process architecture and leave process management to the OS.
Oracle runs a number of background threads/processes to do work that is (or can be) asynchronous to the other processes. That way those can continue even when other processes/threads are blocked or busy.
See this answer I posted earlier in a similar vein to this question, 'What is process and thread?'. Windows makes extensive use of threads in this fashion, unlike *nix/Linux-based systems, which are based on processes. And see here also; this is a direct link (embedded in the first link I gave) to the explanation I gave of how Linux time-slices threads and processes.
Hope this helps,
Best regards,
Tom.
My service needs to store a few bits of information (at minimum, at least 20 bits or so, but I can easily make use of more) such that
it persists across service restarts, even if the service crashed or was otherwise terminated abnormally
it does not persist across a reboot
can be read and updated with very little overhead
If I store this information in the registry or in a file, it will not get automatically emptied when the system reboots.
Now, if I were on a modern POSIX system, I would use shm_open, which would create a shared memory segment which persists across process restarts but not system reboots, and I could use shm_unlink to clean it up if the persistent data somehow got corrupted.
I found MSDN: Creating Named Shared Memory and started reimplementing pieces of it within my service; this basically uses CreateFileMapping(INVALID_HANDLE_VALUE, ..., PAGE_READWRITE, ..., "Global\\my_service") instead of shm_open("/my_service", O_RDWR | O_CREAT, 0600).
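For concreteness, a sketch of the Windows call sequence just described; the 4 KiB size, the access rights, and the missing error handling are simplifications:

    #include <windows.h>

    /* Pagefile-backed named mapping, as in the MSDN sample. */
    static DWORD *open_shared_state(void)
    {
        HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE,  /* no file: use the pagefile */
                                         NULL, PAGE_READWRITE,
                                         0, 4096,               /* plenty for ~20 bits of state */
                                         L"Global\\my_service");
        if (hMap == NULL)
            return NULL;
        return (DWORD *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    }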
However, I have a few concerns, especially centered around the lifetime of this pagefile-backed mapping. I haven't found answers to these questions in the MSDN documentation:
Does the mapping persist across reboots?
If not, does the mapping disappear when all open handles to it are closed?
If not, is there a way to remove or clear the mapping? Doesn't need to be while it's in use.
If it persists across reboots, or disappears when unreferenced, or cannot be reset manually, this method is useless to me.
Can you verify or find faults in these points, and/or recommend a different approach?
If there were a directory that were guaranteed to be cleaned out upon reboot, I could save data in a temporary file there, but it still wouldn't be ideal: under certain system loads, we are encountering file open/write failures (rare, under 0.01% of the time, but still happening), and this functionality is to be used in the logging path. I would like not to introduce any more file operations here.
The shared memory mapping will not persist across reboots, and it will disappear when all of its handles are closed. A memory-mapping object is a kernel object, and kernel objects are always deleted when the last reference to them goes away, either explicitly via CloseHandle or when the process holding the reference exits.
Try creating a registry key with RegCreateKeyEx with REG_OPTION_VOLATILE; the data will not be preserved when the corresponding hive is unloaded, which happens at system shutdown for HKLM or at user logoff for HKCU.
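A minimal sketch of that approach, assuming a hypothetical key name SOFTWARE\MyService\Volatile and omitting most error handling:

    #include <windows.h>

    /* Store ~20 bits under a volatile key. The value survives service restarts
     * but is discarded when the HKLM hive is unloaded at shutdown. */
    static void save_state(DWORD state)
    {
        HKEY hKey;
        if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\MyService\\Volatile",
                            0, NULL, REG_OPTION_VOLATILE, KEY_READ | KEY_WRITE,
                            NULL, &hKey, NULL) == ERROR_SUCCESS) {
            RegSetValueExW(hKey, L"State", 0, REG_DWORD,
                           (const BYTE *)&state, sizeof(state));
            RegCloseKey(hKey);
        }
    }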
Sounds like maybe you want serialization instead of shared memory? If that is indeed appropriate for your application, the way you serialize will depend on your language. If you're using C++, check out Boost.Serialization. C# undoubtedly has lots of serialization options (like Java), if that's what you're using.