I am creating a kernel module for Linux. I was wondering, how can I stop a process from being scheduled for a specified time? Is there a function in sched.c that can do this? Is it possible to add a specific task_struct to a wait queue for a certain defined period of time, or to use something like schedule_timeout for a specific process?
Thanks
Delaying process scheduling for a time is equivalent to letting the process sleep. In drivers this is often done with msleep() (common in work-queue tasks), or, for processes, by putting the task into interruptible sleep with
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(x*HZ);
The kernel will not schedule the task again until the timeout has expired or a signal is received.
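For example, a minimal sketch of that pattern inside a module (the helper name delay_current_task() is made up for illustration):

#include <linux/sched.h>

/* Illustrative helper: take the calling task off the run queue for
 * `secs` seconds without burning CPU time. */
static void delay_current_task(unsigned long secs)
{
    /* Mark the task as sleeping, but wakeable by signals. */
    set_current_state(TASK_INTERRUPTIBLE);

    /* Yield the CPU; the scheduler will not pick this task again
     * until the timeout expires or a signal arrives. */
    schedule_timeout(secs * HZ);
}

For delays in the millisecond range, msleep() from <linux/delay.h> wraps essentially the same mechanism for you.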
When using a globally named mutex to synchronize across two processes, and one of the two processes is killed (say in Task Manager, or due to a fault), the other process returns from WaitForSingleObject() with the appropriate error code and can continue.
When using a globally named semaphore, it does not release the waiting process if the other process is killed or terminated. WaitForSingleObject() will wait until it times out (which may be INFINITE or hours).
How do I stop WaitForSingleObject() from waiting when the other process is killed or terminated?
In this case, there is a single count on the semaphore used to control read/write requests of a shared buffer. The Requester signals the Provider to provide certain data, the Provider updates the buffer and signals back to the Requester that it can now read the buffer.
I suggest that you switch to using WaitForMultipleObjects and wait for the handle of the process that might get terminated (or thread if you want to do this within a single process) in addition to your semaphore handle. That way you can continue to use INFINITE timeouts. Just have to check the return value to see which object was signalled.
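A rough sketch of that approach (the handle names are placeholders; hPeerProcess would come from OpenProcess() with at least SYNCHRONIZE access):

#include <windows.h>

// Wait on the data semaphore and the peer's process handle at the same time.
// Returns TRUE if the semaphore was signalled (data ready), FALSE if the peer
// process terminated or the wait failed.
BOOL WaitForDataOrPeerExit(HANDLE hSemaphore, HANDLE hPeerProcess)
{
    HANDLE handles[2] = { hSemaphore, hPeerProcess };
    DWORD r = WaitForMultipleObjects(2, handles, FALSE, INFINITE);

    if (r == WAIT_OBJECT_0)         // handles[0]: semaphore signalled
        return TRUE;
    if (r == WAIT_OBJECT_0 + 1)     // handles[1]: peer process exited
        return FALSE;
    return FALSE;                   // WAIT_FAILED or other unexpected result
}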
Also, I would consider a process terminating while holding a semaphore somewhat of a bug, particularly a semaphore used for actual inter-process communication.
Adding to the accepted answer.
I added logic so that, if the wait time waitms was going to be longer than some value maxwaitms, the requester and provider exchange the provider's process ID (GetCurrentProcessId()) before the long wait. The requester then opens a handle to the provider process (OpenProcess()) and waits on both the semaphore and the process handle, so it knows when writing is done (or the process has terminated).
I am writing a kernel module that performs timing functions using an external clock. Basically, the module counts pulses from the clock, rolling over the count every so often. User processes can use an ioctl to ask to be woken up at a specific count; they then perform some task and invoke the same ioctl to wait until the next time the same count comes up. In this way they can execute periodically using this external timing.
I have created an array of wait_queue_head_ts, one for each available schedule slot (i.e. each "count", as described above). When a user process invokes the ioctl, I simply call sleep_on() with the ioctl argument specifying the schedule slot and thus the wait queue. When the kernel module receives a clock pulse and increments the count, it wakes up the wait queue corresponding to that count.
I know that it is considered bad practice to use sleep_on(), because there is potential for state to change between a test to see if a process should sleep, and the corresponding call to sleep_on(). But in this case I do not perform such a test before sleeping because the waking event is periodic. It doesn't matter if I "just miss" a waking event because another will come shortly (in fact, if the ioctl is invoked very close to the specified schedule slot, then something went wrong and I would prefer to wait until the next slot anyway).
I have looked at using wait_event_interruptible(), which is considered safer, but I do not know what to put for the condition argument that wait_event_interruptible requires. wait_event_interruptible will check this condition before sleeping, but I want it to always sleep when the ioctl is invoked. I could use a flag that I clear before sleeping and set before waking up, but I'm worried this might not work in the case that there are multiple processes in the wait queue - one process might finish and clear the flag before the next is woken up.
Am I right to be worried about this? Or are all processes in a wait_queue guaranteed to be woken up before any of them run (and could therefore clear the flag)? Is there a better way to go about implementing a system such as this one? Is it actually okay to just use sleep_on()? (If so, is there a version of sleep_on() that is interruptible?)
The interruptible version of sleep_on() is interruptible_sleep_on(). Note that the sleep_on() family of functions has been removed since kernel 3.15.
As for wait_event_interruptible(), the requirement that it should always sleep when the ioctl is invoked is unusual for it. You may use a flag, but the flag should be per-process (or per-schedule-slot). Alternatively, make the condition wait for the count to reach at least current_count + 1.
In such an uncommon scenario, instead of the wait_event_interruptible() macro you may use the building blocks it consists of and arrange them in whatever way you need. Generally, any waiting scheme can be achieved that way.
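As a rough sketch of the counter-based condition (all names here are illustrative, not from the original module): the ioctl path records the count it saw on entry and sleeps until the interrupt handler has advanced it.

#include <linux/wait.h>

#define NUM_SLOTS 64                         /* illustrative slot count */

static wait_queue_head_t slot_wq[NUM_SLOTS]; /* one queue per schedule slot */
static unsigned long current_count;          /* advanced by the clock interrupt */

/* ioctl path: always sleeps, because the condition compares against the
 * count observed on entry rather than a shared flag. */
static int wait_for_slot(unsigned int slot)
{
    unsigned long seen = current_count;

    return wait_event_interruptible(slot_wq[slot], current_count != seen);
}

/* Interrupt path (sketch):
 *     current_count++;
 *     wake_up_interruptible(&slot_wq[current_count % NUM_SLOTS]);
 */

Because each waiter compares against its own saved value rather than a shared flag, one woken process cannot "consume" the event for the others, which is exactly the worry with a single flag.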
I am developing an application on Windows which should run some code just before a target process terminates. I am okay with writing a kernel module to achieve this, but what are the functions that I should hook into?
To get the notification about the termination of the process I am doing this:
// Open a handle to the target process (PID 1234) so it can be waited on.
HANDLE handle = OpenProcess(PROCESS_ALL_ACCESS, FALSE, 1234);
// Returns when the target process terminates and its handle becomes signalled.
DWORD wait = WaitForSingleObject(handle, INFINITE);
// Some block of code here that does the business logic.
handleProcessTermination();
My problem is that the target process exits before my function handleProcessTermination() completes. I want a way to hold up the exit of the process and run my logic.
You should be able to create a kernel driver that calls PsSetCreateProcessNotifyRoutineEx to create a callback routine for when processes start/end. Your callback will be called "just before the last thread to exit the process is destroyed."
This won't allow you to "stop" the process termination permanently, but does allow you to inject some code just prior to the process ending.
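A bare-bones sketch of such a driver (error handling and unload cleanup omitted; note that the driver image must be linked with /INTEGRITYCHECK for the registration to succeed):

#include <ntddk.h>

// Called for every process create and exit. CreateInfo is NULL on exit,
// and at that point the last thread of the process has not yet been destroyed.
VOID MyProcessNotify(PEPROCESS Process, HANDLE ProcessId,
                     PPS_CREATE_NOTIFY_INFO CreateInfo)
{
    UNREFERENCED_PARAMETER(Process);

    if (CreateInfo == NULL) {
        // The process identified by ProcessId is terminating:
        // run the "just before exit" logic here.
        DbgPrint("Process %p is exiting\n", ProcessId);
    }
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);

    // FALSE = register the callback; pass TRUE to remove it on unload.
    return PsSetCreateProcessNotifyRoutineEx(MyProcessNotify, FALSE);
}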
I think there is no way to postpone the termination of a process. Even stopping all threads of that process will not help, since the killing of the process is done by the kernel.
Based on my own experience, I assume that Windows does the following on process termination:
Mark the process to be terminated
Terminate all threads of the process
Clean up (free memory, release handles, ...)
Terminate process
Once step 1 is done, the process is doomed, since the scheduler will not activate any of its threads. Activating one of the threads could make it go berserk, because the process is in a partly destroyed state (e.g. memory may already be freed, handles destroyed, ...), which could cause serious trouble!
I don't think there is a way to change that behavior without changing parts of the kernel.
Side note: it would be an interesting experiment to test whether WaitForSingleObject(thread, ...) is signalled before WaitForSingleObject(process, ...).
I have a scenario where one process should wait for a signal from another process; the wait should be a blocking wait, and as soon as the signal arrives the process should wake up.
However, with mechanisms like kill() or raise(), the first process goes into a wait state but periodically checks, after a specified amount of time, whether the event/signal has occurred, and then decides whether to keep waiting or go on.
My requirement is a bit stringent: I want the process to wake up at the very instant the signal is received.
Please suggest something.
This can be achieved with a semaphore, a mutex, or a condition variable. Or you can write your own wait and signal functions and control their behaviour as needed. For reference, see these examples: IPC concepts and examples, mutexes and condition variables.
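On POSIX systems, for instance, a named semaphore gives exactly this blocking behaviour; a minimal sketch (the name "/my_event" is made up, error handling omitted, link with -pthread):

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

/* Waiting process: blocks inside sem_wait() without polling and is woken
 * the moment the other process posts the semaphore. */
void wait_for_event(void)
{
    sem_t *sem = sem_open("/my_event", O_CREAT, 0644, 0);
    sem_wait(sem);                 /* sleeps here, no periodic checking */
    printf("event received\n");
    sem_close(sem);
}

/* Signalling process: */
void raise_event(void)
{
    sem_t *sem = sem_open("/my_event", O_CREAT, 0644, 0);
    sem_post(sem);                 /* wakes the waiter immediately */
    sem_close(sem);
}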
I'm not sure about something.
When I use a critical section/mutex/semaphore in C++, for example, how is the busy-wait problem prevented?
What I mean is: when a thread reaches a critical section that is occupied by another thread, what prevents the thread from wasting CPU cycles waiting for nothing?
For example,
should I call TryEnterCriticalSection, check whether the thread obtained ownership, and otherwise call Sleep(0)?
I'm a bit perplexed.
Thanks
This is Windows specific, but Linux will be similar.
Windows has the concept of a ready queue of threads. These are threads that are ready to run, and will be run at some point on an available processor. Which threads are selected to run immediately is a bit complicated - threads can have different priorities, their priorities can be temporarily boosted, etc.
When a thread waits on a synchronization primitive like a CRITICAL_SECTION or mutex, it is not placed on the ready queue - Windows will not even attempt to run the thread and will run other threads if possible. At some point the thread will be moved back to the ready queue, for instance when the thread owning the CS or mutex releases it.
The thread will not consume any CPU while it waits, because it is marked as waiting. As soon as the thread occupying the critical region releases it, the wait object is signalled and the waiting thread is moved back to the ready queue.
These constructs prevent the thread that cannot enter from busy-waiting by letting it sleep until it is woken when the thread inside the critical section finishes. Because the waiting thread is asleep, it uses no processor cycles, so there is no busy wait.
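In other words, a plain EnterCriticalSection() already gives you the sleeping wait; there is no need to spin on TryEnterCriticalSection() with Sleep(0). A minimal sketch:

#include <windows.h>

CRITICAL_SECTION cs;   // initialize once with InitializeCriticalSection(&cs)

void worker(void)
{
    // If another thread owns the critical section, this thread is removed
    // from the ready queue and sleeps here, consuming no CPU while it waits.
    EnterCriticalSection(&cs);

    // ... touch the shared data ...

    // Leaving the section makes a waiting thread ready to run again.
    LeaveCriticalSection(&cs);
}

TryEnterCriticalSection() is only useful when the thread has something else to do if the lock happens to be busy.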