I'm reading "Linux Kernel Development" by Robert Love and found the code below to wait for an event.
DEFINE_WAIT(wait);
add_wait_queue(q, &wait);
while (!condition) {
    // What happens if condition is changed and wake_up() is called here?
    prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
    if (signal_pending(current))
        /* handle signal */
    schedule();
}
finish_wait(&q, &wait);
My question is as in the comment in the code above: what happens if the condition is changed and wake_up() is called after the condition check but before prepare_to_wait()?
My (probably wrong) interpretation is that because prepare_to_wait() sets the thread to TASK_INTERRUPTIBLE and schedule() is only called after the condition has already changed, the thread sleeps forever (unless it gets a signal or another wake_up() is called).
Yes, this code is indeed racy: between the prepare_to_wait() and schedule() calls there should be a check of the condition (breaking out of the loop if it is satisfied).
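For illustration, a minimal sketch of the race-free variant (same placeholders q and condition as in the book's snippet):

DEFINE_WAIT(wait);
while (1) {
    /*
     * prepare_to_wait() puts the task on the queue and sets it to
     * TASK_INTERRUPTIBLE *before* the condition is checked, so a wake_up()
     * that fires between the check and schedule() simply sets the task back
     * to TASK_RUNNING and schedule() returns promptly instead of sleeping.
     */
    prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
    if (condition)
        break;
    if (signal_pending(current))
        break;          /* handle the signal after finish_wait() */
    schedule();
}
finish_wait(&q, &wait);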
Funnily enough, in the following description the book refers (on page 60) to the implementation of the inotify_read() function in the fs/notify/inotify/inotify_user.c file:
DEFINE_WAIT(wait);
...
while (1) {
    prepare_to_wait(&group->notification_waitq,
                    &wait,
                    TASK_INTERRUPTIBLE);
    if (<condition>) // very simplified form of checks
        break;
    if (signal_pending(current))
        break;
    schedule();
}
finish_wait(&group->notification_waitq, &wait);
...
which according to the author "follows the pattern":
This function follows the pattern laid out in our example. The main difference is that it checks for the condition in the body of the while() loop, instead of in the while() statement itself. This is because checking the condition is complicated and requires grabbing locks. The loop is terminated via break.
However, that code exhibits another pattern, which checks the condition between the prepare_to_wait() and schedule() calls. And that code is actually correct (race-free).
Also, that code doesn't use add_wait_queue(), which is superfluous in the presence of prepare_to_wait().
In another book, Linux Device Drivers (3rd edition), usage of wait queues is treated more carefully; see e.g. Chapter 6, Advanced Char Driver Operations.
I have an ISR that's fired from a button press. The handler looks like this...
void IRAM_ATTR buttonIsrHandler(void *arg) {
    xTaskResumeFromISR(buttonTaskHandle);
}

// `buttonTaskHandle` is set up as the handle for this task function...
void buttonTask(void *pvParameter) {
    while (1) {
        vTaskSuspend(NULL);
        // ... my task code goes here...
    }
}
When I'm in an ISR, I can't do certain things. For instance, calling ESP_LOGI() results in an error relating to disallowed memory access.
I was expecting those limitations to exist only within the buttonIsrHandler() function, but they also exist within buttonTask() given that I woke it up from an ISR.
How do I get out of an ISR so that I can do all my normal stuff? I could use something like a queue to do this, but that seems heavyweight. Is there an easier way? Would sending a task notification from the ISR handler be any different? Any other suggestions?
As you can see in the documentation of xTaskResumeFromISR, such a use case is not recommended. Task notifications are designed and optimized for this exact use case. In your case, you'd want to use vTaskNotifyGiveFromISR.
As for "leaving the ISR", FreeRTOS will not call your task function from the ISR context. xTaskResumeFromISR and other functions simply update the state of the task so that it can run when its turn comes.
I am having very strange behavior in my Go code. The overall gist is that when I have
for {
    if messagesRecieved == l {
        break
    }
    select {
    case result := <-results:
        newWords[result.index] = result.word
        messagesRecieved += 1
    default:
        // fmt.Printf("messagesRecieved: %v\n", messagesRecieved)
        if i != l {
            request := Request{word: words[i], index: i, thesaurus_word: results}
            requests <- request
            i += 1
        }
    }
}
the program freezes and fails to advance, but when I uncomment the fmt.Printf call, the program works fine. You can see the entire code here. Does anyone know what's causing this behavior?
Go 1.1.2 (the current release) still has only the original cooperative scheduling of goroutines, unchanged since the initial release. The compiler improves the behavior by inserting scheduling points; as can be inferred from the memory model, they sit next to channel operations, and additionally in some well-known but intentionally undocumented places, such as where I/O occurs. The last point explains why uncommenting fmt.Printf changes the behavior of your program. And, by the way, the Go tip version now sports a preemptive scheduler.
Your code keeps one of your goroutines busy spinning through the default select case. As there are no other scheduling points without the print, no other goroutine gets a chance to make progress (assuming the default GOMAXPROCS=1).
I recommend rewriting the logic of the program in a way that avoids spinning (busy waiting). One possible approach is to use a channel send in the default case; as a perhaps nice side effect of using a buffered channel for that, you get a simple limiter for free.
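One way to restructure it so that nothing spins, as a rough sketch rather than the asker's actual program: the request sends are moved into their own goroutine, with the buffered channel acting as the limiter mentioned above. The buffer size 4 and the loop shape are made up; the other names come from the snippet in the question.

// Feed the requests from a separate goroutine; every channel operation is a
// scheduling point, so the worker goroutines always get a chance to run.
requests := make(chan Request, 4)
go func() {
    for i := 0; i < l; i++ {
        requests <- Request{word: words[i], index: i, thesaurus_word: results}
    }
}()

// The main loop now blocks on the results channel instead of spinning
// through a default case.
for messagesRecieved := 0; messagesRecieved < l; messagesRecieved++ {
    result := <-results
    newWords[result.index] = result.word
}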
Let's consider a single-processor scenario.
wait_event_interruptible() (and the other wait APIs) waits in a loop until a certain condition is met.
Now, since Linux implements threads as separate processes, I believe a false wake (where the wait_event* call is woken up with the condition not met) is indicative of an error in the program/driver.
Am I wrong? Is there any valid scenario where such false wakes can happen and are used? In other words, why does the wait_event* implementation wait on the condition in a loop?
A common use case for wait queues is with interrupts. Perhaps your kernel driver is currently waiting on three different conditions, each of which will be awoken with an interrupt.
This allows your kernel interrupt handler to just wake up all of the listeners, each of which can then determine for itself whether its particular condition has occurred or whether it should go back to sleep.
Also, you can get spurious interrupts, because interrupt lines can be shared and because interrupts can be deferred and coalesced.
Adding some code to try to be more explicit.
I've written some code below that could be part of a kernel driver. The interrupt handler simply wakes up all of the listeners, but not all of the listeners may actually be done. Both will be woken up by the same interrupt, yet each checks whether its particular condition is met before continuing.
// Registered interrupt handler
static irqreturn_t interrupt_handler(int irq, void *private) {
    struct device_handle *handle = private;

    // Wake every waiter; each one re-checks its own condition.
    wake_up_all(&handle->wait);
    return IRQ_HANDLED;
}

// Some kernel thread
void program_a(struct device_handle *handle) {
    // Sleeps until woken up *and* bit 0 of hardware_register is set.
    wait_event_interruptible(handle->wait, hardware_register & 0x1);
    printk("program_a finished\n");
}

// Some other kernel thread
void program_b(struct device_handle *handle) {
    // Sleeps until woken up *and* bit 1 of hardware_register is set.
    wait_event_interruptible(handle->wait, hardware_register & 0x2);
    printk("program_b finished\n");
}
Code:
#define __wait_event(wq, condition)                                     \
do {                                                                    \
        DEFINE_WAIT(__wait);                                            \
                                                                        \
        for (;;) {                                                      \
                prepare_to_wait(&wq, &__wait, TASK_UNINTERRUPTIBLE);    \
                if (condition)                                          \
                        break;                                          \
                schedule();                                             \
        }                                                               \
        finish_wait(&wq, &__wait);                                      \
} while (0)
(Besides the fact that the kernel is preemptive...)
I assume you're referring to the infinite 'for' loop above?
If so, the main reason it's there is this:
The code does not make any assumption about state once it's awoken. Just being awoken does not mean the event you were waiting upon has actually occurred; you must recheck. That's exactly what the loop achieves. If the 'if' condition comes true (which it should, in the normal case), it exits the loop, else (spurious wakeup) it puts itself to sleep again by calling schedule().
Even in the single-processor scenario the kernel is preemptive, i.e. control can pass to another thread/process at any moment, so the behavior is the same as on a multi-processor system.
Good discussion on the lost-wait problem is here:
http://www.linuxjournal.com/node/8144/print
Is it a good idea to use MPI_Barrier() to synchronize data between iteration steps? Please see the pseudocode below.
while (numberIterations < MaxIterations)
{
    MPI_Iprobe()                        -- check for incoming data
    while (flagprobe != 0)
    {
        MPI_Recv()                      -- receive data
        MPI_Iprobe()                    -- loop if more data
    }

    updateData()                        -- update myData

    for (i = 0; i < N; i++) MPI_Bsend_init(request[i])   -- set up requests
    for (i = 0; i < N; i++) MPI_Start(request[i])        -- send data to all other N processors

    if (numberIterations == MaxIterations/2)
        MPI_Barrier()                   -- wait for all processors -- CAN I DO THIS?

    numberIterations++
}
Barriers should only be used if the correctness of the program depends on them. From your pseudocode I can't tell whether that's the case, but one barrier halfway through the loop looks very suspect.
Your code will deadlock, with or without a barrier. You receive in every rank before sending any data, so none of the ranks will ever get to a send call. Most applications will have a call such as MPI_Allreduce instead of a barrier after each iteration so all ranks can decide whether an error level is small enough, a task queue is empty, etc. and thus decide whether to terminate.
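For example, the end-of-iteration check could look roughly like this (a sketch: computeLocalError() and tolerance are made-up names, not part of the question's code), letting all ranks decide together without a barrier:

/* Inside the iteration loop: every rank contributes its local error and
 * receives the global maximum, so all ranks take the same decision. */
double localError = computeLocalError();
double globalError;
MPI_Allreduce(&localError, &globalError, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
if (globalError < tolerance)
    break;   /* all ranks terminate the loop in the same iteration */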
In this article, http://static.msi.umn.edu/rreports/2008/87.pdf, it says that you have to call MPI_Request_free() before MPI_Bsend_init().
In the last hours I've struggled with delegates and accessing Windows Forms controls (C++/CLI), for which I've used this tutorial (the first thread-safe method): http://msdn.microsoft.com/en-us/library/ms171728.aspx#Y190
Changing TextBoxes and Labels works perfectly, but when I want to show or hide the whole GUI from another thread, this fails.
I use the following method (which is part of the GUI class):
System::Void UI::showUI(boolean value) {
    if (this->InvokeRequired) {
        SetTextDelegate^ d = gcnew SetTextDelegate(this, &UI::showUI);
        this->Invoke(d, gcnew array<Object^> { value });
    } else {
        if (value == true)
            this->Show();
        else
            this->Hide();
    }
}
On the first call the if-clause is true, so Invoke is called. Normally showUI should then be called a second time automatically, with the if-clause returning false, but this is not happening. So the GUI is neither shown nor hidden.
Is it necessary to show/hide the GUI with a delegate, or can I do it from any thread? If a delegate is necessary, why is showUI not executed a second time?
Thanks,
Martin
edit: okay, the name SetTextDelegate is not appropriate, but that is not the point...
This is a pretty standard case of deadlock, not uncommon with Control::Invoke(). It can only proceed if the UI thread is not busy. Use Debug + Windows + Threads and double-click the Main thread. Look at the call stack to see what it is doing. The typical case is that it is blocking, waiting for the thread to finish the job. That will never happen since the thread can't complete until the Invoke() call returns.
Don't block the UI thread.
Consider using BackgroundWorker; its RunWorkerCompleted event is a convenient place to do work after the thread completes, removing the need to block.
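A minimal sketch of that approach (the method names StartBackgroundWork/OnDoWork/OnWorkCompleted are made up; BackgroundWorker and the event handler types come from System::ComponentModel):

void UI::StartBackgroundWork() {
    BackgroundWorker^ worker = gcnew BackgroundWorker();
    worker->DoWork += gcnew DoWorkEventHandler(this, &UI::OnDoWork);
    worker->RunWorkerCompleted +=
        gcnew RunWorkerCompletedEventHandler(this, &UI::OnWorkCompleted);
    worker->RunWorkerAsync();   // returns immediately, the UI thread stays free
}

void UI::OnDoWork(Object^ sender, DoWorkEventArgs^ e) {
    // Runs on a thread-pool thread: do the long-running work here,
    // but do not touch any controls.
}

void UI::OnWorkCompleted(Object^ sender, RunWorkerCompletedEventArgs^ e) {
    // Runs back on the UI thread: calling Show()/Hide() here is safe.
    this->Show();
}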