I read the book Linux Kernel Development and have a question about the example it gives in the section on sleeping and waking up.
DEFINE_WAIT(wait);
add_wait_queue(q, &wait);
while (!condition) {
    prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
    if (signal_pending(current))
        /* handle signal */
    schedule();
}
finish_wait(&q, &wait);
What will happen if a wake_up() arrives after the while condition is checked but just before prepare_to_wait()? Will the wake_up be lost?
Yes, the wakeup will be lost. At that point the task is still in the TASK_RUNNING state, so the wake_up() has no effect on it; the task then marks itself TASK_INTERRUPTIBLE and calls schedule(), going to sleep even though the condition is already true.
That is why prepare_to_wait() must be called before the condition is checked.
(This is what you will see in real code.)
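For reference, here is a minimal sketch of that pattern; q and condition stand for the caller's wait queue head and wakeup condition. Note that add_wait_queue() is not needed, because prepare_to_wait() puts the entry on the queue itself:

DEFINE_WAIT(wait);

for (;;) {
    prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
    if (condition)                /* checked only after the task state is set */
        break;
    if (signal_pending(current))
        break;                    /* or handle the signal */
    schedule();
}
finish_wait(&q, &wait);

With this ordering no wakeup can be lost (assuming the waker sets the condition before calling wake_up()): if wake_up() runs before prepare_to_wait(), the subsequent condition check sees the condition as true and breaks out; if it runs after, it sets the task back to TASK_RUNNING, so schedule() returns promptly and the loop re-checks the condition.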
I got stuck in the interrupt part while learning AVR.
The datasheet says this about the RXCn flag:
"This flag bit is set when there are unread data in the receive buffer and cleared when the receive buffer is empty
(i.e., does not contain any unread data)."
and there is an example of receiving a character over the UART:
while ( !(UCSRnA & (1<<RXCn)) );
/* Get and return received data from buffer */
return UDRn;
Will it wait here forever until data arrives from the UART? And will the MCU be unable to do any other work because of "while(1);"?
I know this method is polling, and I also know that there is an interrupt-driven method, but will the MCU be locked up because of this?
As AterLux already said, the program will halt until data is received. There are other possibilities to read the data in a non-blocking way, e.g.:
char uart_get(char *data)
{
    if (UCSRnA & (1<<RXCn))      /* has a byte been received? */
    {
        *data = UDRn;            /* reading UDRn clears the flag */
        return 1;
    }
    return 0;
}
If no data has been received you will get 0 and can continue with the program. Whether you should use interrupt handling or polling depends on your problem. With interrupt handling you can, for example, use a circular buffer to store received data and consume it when you need it, as in the sketch below. If you are just waiting for a single value, polling is also an option.
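Here is a minimal sketch of that interrupt-driven circular buffer, written for an ATmega328P-style USART0 (UCSR0B/RXCIE0, UDR0, USART_RX_vect); the register and vector names will differ on other AVR parts, the USART is assumed to be initialized elsewhere, and the buffered_uart_get() helper is just an illustrative name:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define BUF_SIZE 32                         /* must be a power of two */

static volatile uint8_t buf[BUF_SIZE];
static volatile uint8_t head, tail;

/* Receive-complete ISR: store the incoming byte in the circular buffer. */
ISR(USART_RX_vect)
{
    uint8_t data = UDR0;                    /* reading UDR0 clears the RX flag */
    uint8_t next = (head + 1) & (BUF_SIZE - 1);
    if (next != tail) {                     /* drop the byte if the buffer is full */
        buf[head] = data;
        head = next;
    }
}

/* Non-blocking read: returns 1 and fills *data if a byte is buffered, 0 otherwise. */
uint8_t buffered_uart_get(uint8_t *data)
{
    if (head == tail)
        return 0;
    *data = buf[tail];
    tail = (tail + 1) & (BUF_SIZE - 1);
    return 1;
}

/* During initialization, enable the receiver, its interrupt and global interrupts:
 *   UCSR0B |= (1 << RXEN0) | (1 << RXCIE0);
 *   sei();
 */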
Yes. It will wait forever while the condition (!(UCSRnA & (1<<RXCn))) is fulfilled, i.e. it will wait until UCSRnA has the RXCn bit set.
As long as the Global Interrupt Flag (the I flag in the SREG register) has not been cleared (by calling cli(), or by entering an interrupt handler), interrupts are still able to run, and all the peripherals (counters, SPI, TWI, etc.) keep working while the loop is spinning. Of course, the program below the loop will not execute.
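To illustrate, here is a minimal sketch (assuming an ATmega328P-style device with the USART already initialized elsewhere) in which a timer overflow ISR keeps toggling an LED even while main() is stuck in the polling loop:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

/* Timer 0 overflow ISR: keeps running while main() busy-waits below. */
ISR(TIMER0_OVF_vect)
{
    PORTB ^= (1 << PB5);                 /* toggle LED on PB5 */
}

int main(void)
{
    DDRB  |= (1 << PB5);                 /* LED pin as output */
    TCCR0B = (1 << CS02) | (1 << CS00);  /* timer 0, clk/1024 prescaler */
    TIMSK0 = (1 << TOIE0);               /* enable timer 0 overflow interrupt */
    sei();                               /* set the I flag in SREG */

    for (;;) {
        while (!(UCSR0A & (1 << RXC0)))
            ;                            /* main() spins here...        */
        uint8_t c = UDR0;                /* ...but the ISR still fires  */
        (void)c;                         /* do something with the byte  */
    }
}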
I'm reading "Linux Kernel Development" by Robert Love and found the code below to wait for an event.
DEFINE_WAIT(wait);
add_wait_queue(q, &wait);
while (!condition) {
    // What happens if condition is changed and wake_up() is called here?
    prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
    if (signal_pending(current))
        /* handle signal */
    schedule();
}
finish_wait(&q, &wait);
My question is as in the code above: what happens if condition is changed and wake_up() is called after the condition check but before prepare_to_wait()?
My (probably wrong) interpretation is that, because prepare_to_wait() puts the thread into TASK_INTERRUPTIBLE and schedule() is called after the condition has already changed, it sleeps forever (unless it gets signaled or another wake_up() is called).
Yes, this code is indeed racy: there should be a check for the condition between the prepare_to_wait() and schedule() calls (breaking out of the loop if it is satisfied).
Funnily enough, in the following discussion the book refers (on page 60) to the implementation of the inotify_read() function in the fs/notify/inotify/inotify_user.c file:
DEFINE_WAIT(wait);
...
while (1) {
    prepare_to_wait(&group->notification_waitq,
                    &wait,
                    TASK_INTERRUPTIBLE);
    if (<condition>) // very simplified form of checks
        break;
    if (signal_pending(current))
        break;
    schedule();
}
finish_wait(&group->notification_waitq, &wait);
...
which according to the author "follows the pattern":
This function follows the pattern laid out in our example. The main difference is that
it checks for the condition in the body of the while() loop, instead of in the while()
statement itself. This is because checking the condition is complicated and requires grabbing locks. The loop is terminated via break.
However, that code actually follows a different pattern, which checks the condition between the prepare_to_wait() and schedule() calls, and that pattern is correct (race-free).
Also, that code doesn't use add_wait_queue(), which is superfluous in the presence of prepare_to_wait().
In another book, Linux Device Drivers (3rd edition), the usage of wait queues is treated more carefully. See e.g. chapter 6, Advanced Char Driver Operations.
Recently I ran into a problem where my program seems to be waiting on an event handle and never returns.
I call IcmpSendEcho2 to send a ping request and have set the event parameter correctly. Normally, when the ping response arrives or the timeout expires, the event handle should be signaled by the system. This works fine most of the time, but one day I found that my program (some of its worker threads) hangs after my computer wakes up from sleep. My guess at the root cause is that the ping request was sent, and before the system signaled the corresponding event handle, the machine went to sleep (no activity for a long time, so the laptop suspended). After I woke the system up, the earlier event handle never got a chance to be signaled.
And the pseudo code is:
// a worker thread
while (isWorking()) {
    ...
    IcmpSendEcho2(hIcmpFile, hEvent, NULL, NULL, ..., 5000 /* timeout is 5 sec */);
    ...
    WaitForSingleObject(hEvent, INFINITE); // hEvent should be signaled after the
                                           // response arrives or the timeout
                                           // (5 sec) elapses
    ...
}
I don't know if I'm wrong. I'm still investigating this problem and am posting it here in the hope that somebody who has experienced the same problem can help me. Thank you in advance!
Let's consider a single-processor scenario.
wait_event_interruptible() (and the other wait APIs) waits in a loop until a certain condition is met.
Now, since Linux has threads implemented as separate processes, I believe a false wake (where the wait_event* call is woken up with the condition not met) is indicative of an error in the program/driver.
Am I wrong? Is there any valid scenario where such false wakes can happen and are expected? In other words, why does the wait_event* implementation wait on the condition in a loop?
A common use case for wait queues is with interrupts. Perhaps your kernel driver is currently waiting on three different conditions, each of which will be satisfied by an interrupt.
This allows your kernel interrupt handler to simply wake up all of the listeners, who can then determine among themselves whether their particular condition has occurred, or whether they should go back to sleep.
Also, you can get spurious interrupts, because interrupt lines can be shared and because interrupts may be deferred and coalesced.
Adding some code to try to be more explicit.
I've written some code below that could be part of a kernel driver. The interrupt handler simply wakes up all of the listeners; however, not all of them may actually be done. Both threads will be woken by the same interrupt, but each checks whether its particular condition has been met before continuing.
// Registered interrupt handler
static irqreturn_t interrupt_handler(int irq, void *private)
{
    struct device_handle *handle = private;
    wake_up_all(&handle->wait);
    return IRQ_HANDLED;
}

// Some kernel thread
void program_a(struct device_handle *handle)
{
    wait_event_interruptible(handle->wait, hardware_register & 0x1);
    printk("program_a finished\n");
}

// Some other kernel thread
void program_b(struct device_handle *handle)
{
    wait_event_interruptible(handle->wait, hardware_register & 0x2);
    printk("program_b finished\n");
}
Code:
#define __wait_event(wq, condition)                              \
do {                                                             \
    DEFINE_WAIT(__wait);                                         \
                                                                 \
    for (;;) {                                                   \
        prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);     \
        if (condition)                                           \
            break;                                               \
        schedule();                                              \
    }                                                            \
    finish_wait(&wq, &__wait);                                   \
} while (0)
(Besides the fact that the kernel is preemptive...)
I assume you're referring to the infinite 'for' loop above?
If so, the main reason it's there is this:
The code does not make any assumption about state once it's awoken. Just being awoken does not mean the event you were waiting upon has actually occurred; you must recheck. That's exactly what the loop achieves. If the 'if' condition comes true (which it should, in the normal case), it exits the loop, else (spurious wakeup) it puts itself to sleep again by calling schedule().
Even in the single-processor scenario, the kernel is preemptive, i.e. control can pass to another thread/process at any moment, so the behavior is the same as in the multi-processor case.
A good discussion of the lost wake-up problem is here:
http://www.linuxjournal.com/node/8144/print
In my C++ program I have a class CEvent with trigger and wait member functions based on pthreads (running on Linux). The implementation is quite obvious (i.e. there are many examples online) if there is one waiting thread. However, now I need to satisfy the requirement that multiple threads are waiting on the event and should ALL wake up reliably when trigger() is called. As a second condition, only threads that were already waiting when trigger() was called should wake up.
My current code:
void CEvent::trigger() {
    pthread_mutex_lock(&mutex);
    wakeUp = true;
    pthread_cond_broadcast(&condition);
    pthread_mutex_unlock(&mutex);
    wakeUp = false;
}

void CEvent::wait() {
    pthread_mutex_lock(&mutex);
    while (!wakeUp)
        pthread_cond_wait(&condition, &mutex);
    pthread_mutex_unlock(&mutex);
}
This seems to almost work, in that all waiting threads do wake up before I set wakeUp back to false. However, between the broadcast and the reset of wakeUp, other (or the same) threads calling wait() will also return right away, which is not acceptable. Moving wakeUp = false to before the mutex unlock prevents the threads from waking up at all (they see wakeUp == false again and keep waiting).
My questions:
* When does pthread_cond_broadcast return? I.e. is there a guarantee that it will only return after all waiting threads have woken up, or could it return before that?
* Are there any recommended solutions to this problem?
Please disregard my previous bogus answer. There is a race between the moment the trigger thread unlocks the mutex (and thus releases the waiting threads) and the moment it sets wakeUp back to false. This means that another thread (one that was not waiting) can come in, grab the mutex, see a true value in wakeUp, and return without ever waiting. Another bug is that a thread which was waiting may only run after wakeUp has been reset, so it immediately resumes waiting and misses the trigger.
One way to resolve this is to use a counter: each thread that waits increments the count, and the trigger then waits until that many threads have woken up before resetting wakeUp. You also have to ensure that threads which were not yet waiting are not allowed to start waiting until this has happened.
// wake up "waiters" count of waiting threads
void CEvent::trigger()
{
    pthread_mutex_lock(&mutex);
    // wakey wakey
    wakeUp = true;
    pthread_cond_broadcast(&condition);
    // wait for them to awake
    while (waiters > 0)
        pthread_cond_wait(&condition, &mutex);
    // stop waking threads up
    wakeUp = false;
    // let any "other" threads which were ready to start waiting, do so
    pthread_cond_broadcast(&condition);
    pthread_mutex_unlock(&mutex);
}

// wait for the condition to be notified for us
void CEvent::wait()
{
    pthread_mutex_lock(&mutex);
    // wait until we are allowed to start waiting:
    // any threads currently being woken must finish first
    while (wakeUp)
        pthread_cond_wait(&condition, &mutex);
    // our turn to start waiting
    waiters++;
    // waiting
    while (!wakeUp)
        pthread_cond_wait(&condition, &mutex);
    // finished waiting, we were triggered
    waiters--;
    // let the trigger thread know we're done
    pthread_cond_broadcast(&condition);
    pthread_mutex_unlock(&mutex);
}