I am spawning a few threads inside an ioctl call to my driver. I am also assigning CPU affinity to my driver. I want to ensure one of the threads does not get scheduled out until a particular event is flagged by the other thread. Is there any way to prevent the Windows scheduler from switching my thread out? Using _disable() may hang the system, as the event may take a couple of seconds.
Environment is Windows 7, 64-bit.
Thanks,
What you are probably after is a spin lock. However, this is probably a bad idea unless you can guarantee that your driver/application always runs on a multiprocessor system, and even then it is still very bad practice. On a single-processor system, if a thread spins on a lock, the other thread that is supposed to signal it will never be scheduled, and so it can never signal your event. Spin locks are meant to be used sparingly, and only when the lock is held for a very short time, never a couple of seconds.
It sounds like you need to use an event or other signalling mechanism to synchronise your threads and let the Windows scheduler do its job. If you need to respond to an event very quickly, then an interrupt or a Deferred Procedure Call (DPC) could be used instead.
Related
I know that when a user thread tries to acquire a lock (such as an event or semaphore), the kernel changes the thread's state to waiting, so the thread will not be scheduled to run until the kernel finds that the lock is available.
My question is: how does the kernel track the state of these locks? By polling or by notification?
By notifying. Before the thread goes to sleep, it adds itself to the wakeup list for whatever kernel object corresponds to the thing it's waiting for.
This works precisely the same way all other waits work. Say, for example, the process does a blocking read on a file and the process has to sleep until the read completes. Or say the process accesses some code that hasn't been read in from disk yet. In all of these cases, the process is added to the appropriate wakeup notification scheme when it puts itself to sleep.
What you are asking is highly system- and lock-specific. For example, quality operating systems have lock-management facilities that will detect deadlocks.
Some locks might be implemented as spin locks where there is no process hibernation and no operating system notification at all.
In the case where waiting suspends a process, all the operating system needs to keep track of is the lock itself. If a process releases the lock, the operating system can send a notification to all the waiting processes; no polling necessary.
I wanted to know how I can make the io do something like a thread.join(), i.e. wait for all tasks to finish.
io_type->post(strand->wrap(boost::bind(&somemethod, ptr, parameter)));
In the above code, if 4 threads were initially launched, this would give work to the next available thread. However, I want to know how I could actually wait for all the threads to finish their work, like we do with thread.join().
If this really needs to be done, then you could set up a mutex or critical section to stop your io handlers from processing messages off of the socket. This would need to be activated from another thread. But, more importantly...
Perhaps you should rethink your design. The problem with having the io wait for other threads to finish is that the io would then be unresponsive. In general, not a good idea. I suspect that most developers working on networking software would not even consider it. If you are receiving messages that are not ready to be processed yet due to other processing that is going on, then consider storing them in a queue and process them on a different thread when the other threads have signaled that they have completed their work.
What system process is responsible for executing a system call, when a user process makes a 'system call' and the CPU switches to supervisor mode?
Are system calls scheduled by the thread scheduler (can the CPU switch to executing another system call after getting an interrupt)?
What system process is responsible for executing system call?
The system call wrapper (the function you call to perform the system call; yeah, it's just a wrapper, not the actual system call) takes the parameters and passes them in the appropriate registers (or on the stack, depending on the implementation). Next it puts the number of the system call you are requesting in eax (assuming x86), and finally issues the INT 0x80 assembly instruction. This tells the OS that it received an interrupt, that this interrupt is a system call that needs to be served, that the number of the system call to serve is in eax, and that the parameters are in the registers.
(Modern implementations stopped using INT because it is expensive in performance, and use SYSENTER and SYSEXIT instead; the above is still almost the same, though.)
From the perspective of the scheduler, it makes no difference whether you perform a system call or not. The thing is, once you ask the OS for a service (via the x86 INT instruction, or SYSENTER/SYSEXIT), the CPU's mode flag changes to a privileged setting; the kernel then performs the task you asked for on behalf of your process, and once done, it sets the flag back and returns execution to the next instruction.
So, from a scheduler point of view, the OS will see no difference when you execute a system call or anything else.
A few notes:
- What I mentioned above is a general description. I am not sure whether Windows does exactly this, but if it doesn't, it does something of a similar fashion.
- Many system calls perform blocking tasks (like I/O handling). For better CPU utilization, if your process asks for a blocking system call, the scheduler lets your process wait in the wait queue until what it requested is ready, and meanwhile other processes run on the CPU. But do not confuse this with anything else: the OS did not 'schedule system calls'.
The scheduler's task is to organize tasks, and from its perspective the system call is just a routine that the process is executing.
A final note: some system calls are atomic, which means they must be performed without any interruption to their execution. If interrupted, these system calls will be asked to restart execution once the interrupt's cause is over; still, this is far from the scheduling concept.
First question: it depends. Some system calls go to services that are already running as processes (say, a network call). Some system calls result in a new process getting created and then getting scheduled for execution.
Last question: yes, Windows is a multiprocessing system. The process scheduler decides when a thread runs and for how long, and hardware interrupts can cause the running process to release the CPU, or cause an idle process that the hardware is now ready for to get the CPU.
In Windows (at least Windows 7, but I think it was true in the past too) a lot of the system services run in processes called svchost. A good application for seeing what is running where is Process Explorer from Sysinternals. It is like Task Manager on steroids and will show you all the threads that a given process owns. For finer-grained "I called this DOS command, what happened?" details, you would probably want to use a debugging tool where you can step through your call. Generally, though, you don't have to concern yourself with these things: you make a system call, and the system knows you aren't ready to continue processing until whatever process is handling that request has returned. Your request might get the CPU right after your process releases it, or it might get the CPU two days from now; as far as the OS is concerned (or your program should be concerned) it doesn't matter. Execution stops and waits for a result, unless you are running multithreaded, and then it gets really complicated.
I'm new to writing Linux device drivers and I'm trying to make a device driver that handles a UART chip. For this I decided to use work queues for my bottom-half processing, because I have to use some semaphores when handling the data that I get from the UART chip.
A work queue handler that was scheduled earlier from an interrupt now gets executed, and during its execution it sleeps on a semaphore. During this time the interrupt handler is called again and schedules the same work queue handler. Will the work queue handler be executed again before its first execution finishes?
Thanks.
The default behavior of work queues is to allow concurrent execution on different CPUs. There is a flag, WQ_NON_REENTRANT, that changes this behavior. More information can be found in this post: http://lwn.net/Articles/403891/
But it seems that in recent kernels work queues are non-reentrant by default; see http://lwn.net/Articles/511190
Every device driver book talks about not using functions that sleep in interrupt routines.
What issues occur by calling these functions from ISRs ?
A total lockup of the kernel is the issue here. The kernel is in interrupt context when executing interrupt handlers; that is, the interrupt handler is not associated with any process (the current macro cannot be used).
If you were able to sleep, you would never get back to the interrupted code, since the scheduler would not know how to return to it.
Holding a lock in the interrupt handler, then sleeping, allowing another process to run, then entering the interrupt handler again and trying to re-acquire the lock would deadlock the kernel.
If you read more about how scheduling in the kernel works, you will soon realize why sleeping is a no-go in certain contexts.