I'm using epoll in a kernel module.
I'm creating a kthread on module_init, and I want to stop it on module_exit.
However, this thread is almost always waiting in epoll_wait.
In my exit function, how can I stop the created thread even if it's waiting in epoll_wait?
This is my attempt, but every now and then it hangs; I assume the signal doesn't reach the thread before kthread_stop is called.
struct task_struct *thread;
bool should_stop = false;
static int __init init(void) {
thread = kthread_run(run, NULL, "hora");
return 0;
}
static void __exit exit(void) {
l("end");
should_stop = true;
force_sig(SIGUSR1, thread);
l("kthread_stop");
kthread_stop(thread);
l("ended");
}
int run(void *data) {
...
while (true) {
int nfd = epoll_wait(epfd, events, MAX_EVENTS, -1);
l("got event");
if (should_stop) {
l("should_stop");
break;
}
...
}
...
l("closed");
set_current_state(TASK_INTERRUPTIBLE);
while (!kthread_should_stop()) {
schedule();
set_current_state(TASK_INTERRUPTIBLE);
}
set_current_state(TASK_RUNNING);
l("exiting");
return 0;
}
The output is:
end
kthread_stop
got event
should_stop
closed
exiting
ended
I've looked into eventfd as an alternative to sending a signal, but it's quite a bit of work (only a couple of eventfd functions are exported) and I don't know for sure whether it'll solve this.
Any insights and help much appreciated.
Related
I'm implementing flow control in a custom protocol in the Linux kernel. When I receive an ACK, I want to remove the acked packets from the write queue. Here's some code:
for (i = (ack->sequence - qp->first_unack); i > 0 && sk->sk_write_queue.qlen > 0; i++) {
    skb_del = skb_dequeue(&sk->sk_write_queue);
    qp->first_unack++;
    kfree_skb(skb_del);
}
I get a kernel freeze from this code. Everything works well, however, when I comment out the kfree_skb(skb_del). Any ideas why this is happening? How else can I free up the memory?
As the skbs are queued to the socket, you can use the already provided socket APIs:
sk_eat_skb(struct sock *sk, struct sk_buff *skb, bool copied_early) // copied_early = 0
For more details you can track tcp_recvmsg; there you will probably find the implementation flow.
Moreover, why are you doing the queuing/dequeuing in your own loop with custom APIs? Just go through include/net/sock.h; I hope you will find the necessary details there.
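For illustration only, here is a minimal sketch of that receive-path idiom; the function name drain_receive_queue is invented here, and it assumes the older three-argument sk_eat_skb() quoted above (newer kernels take just the socket and the skb). Note that sk_eat_skb(), as defined in include/net/sock.h, unlinks the skb from sk->sk_receive_queue.
#include <linux/skbuff.h>
#include <net/sock.h>
/* Sketch of the tcp_recvmsg-style consumption loop: peek the next skb on
 * the socket's receive queue, use its payload, then let sk_eat_skb()
 * unlink and free it while keeping the socket accounting consistent. */
static void drain_receive_queue(struct sock *sk)
{
    struct sk_buff *skb;
    lock_sock(sk);
    while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
        /* ... copy the payload out of skb here ... */
        sk_eat_skb(sk, skb, 0); /* copied_early = 0; omit on newer kernels */
    }
    release_sock(sk);
}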
This is probably because of double freeing skb_del.
Theoretically, before calling kfree_skb(skb_del) you can check the value of skb_del->users by doing refcount_read(&skb_del->users), and if skb_del->users is 0, then it means that skb_del has already been freed.
In practice, the kfree_skb() function doesn't set skb_del->users to 0 when skb_del is finally released (due to some optimization considerations), so after skb_del is released it stays at 1, and you won't be able to tell whether skb_del has been released or not.
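Just to make that theoretical check concrete (keeping in mind the caveat above that in practice users stays at 1 after the final release, so this will rarely fire):
/* Theoretical double-free check before freeing; skb->users is a refcount_t
 * in recent kernels and is not set to 0 on release, so this is only a
 * best-effort hint. */
if (refcount_read(&skb_del->users) == 0)
    pr_warn("skb %p appears to have been freed already\n", skb_del);
else
    kfree_skb(skb_del);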
If you are still curious whether this is a double-free issue, and you are fine with making some changes to the skbuff infrastructure (just for this investigation), then we need to modify some skbuff functions.
WARNING: It's very easy to crash the kernel when playing with these functions, so be careful. But these modifications work (this is how I once found a double-free of an skb). Keep in mind that this is a suggestion only for investigating the double-free issue, and I have no idea how these modifications will affect your system in the long run.
We'll modify the following functions (based on kernel v5.9.1):
skb_unref() // from include/linux/skbuff.h
__kfree_skb() // from net/core/skbuff.c
kfree_skb() // from net/core/skbuff.c
consume_skb() // from net/core/skbuff.c
Original skb_unref()
static inline bool skb_unref(struct sk_buff *skb)
{
if (unlikely(!skb))
return false;
if (likely(refcount_read(&skb->users) == 1))
smp_rmb();
else if (likely(!refcount_dec_and_test(&skb->users)))
return false;
return true;
}
Modified skb_unref()
static inline bool skb_unref(struct sk_buff *skb)
{
if (unlikely(!skb))
return false;
if (likely(refcount_read(&skb->users) == 1)) {
smp_rmb();
refcount_set(&skb->users, 0);
} else if (likely(!refcount_dec_and_test(&skb->users))) {
return false;
}
return true;
}
Original __kfree_skb()
void __kfree_skb(struct sk_buff *skb)
{
skb_release_all(skb);
kfree_skbmem(skb);
}
Modified __kfree_skb()
void __kfree_skb(struct sk_buff *skb)
{
if (!skb_unref(skb))
return;
skb_release_all(skb);
kfree_skbmem(skb);
}
Original kfree_skb()
void kfree_skb(struct sk_buff *skb)
{
if (!skb_unref(skb))
return;
trace_kfree_skb(skb, __builtin_return_address(0));
__kfree_skb(skb);
}
Modified kfree_skb()
void kfree_skb(struct sk_buff *skb)
{
//if (!skb_unref(skb))
// return;
trace_kfree_skb(skb, __builtin_return_address(0));
__kfree_skb(skb);
}
Original consume_skb()
void consume_skb(struct sk_buff *skb)
{
if (!skb_unref(skb))
return;
trace_consume_skb(skb);
__kfree_skb(skb);
}
Modified consume_skb()
void consume_skb(struct sk_buff *skb)
{
//if (!skb_unref(skb))
// return;
trace_consume_skb(skb);
__kfree_skb(skb);
}
Good luck in the investigation.
May God be with you.
Got a problem with the Atmel SAMB11 on an Xplained Pro dev board. I've loaded a quite simple example from Atmel, where a 32 kHz timer is initialized to wake the µC from sleep and turn on an LED. The problem is, the controller doesn't sleep at all. It just activates the LED immediately and doesn't wait for an interrupt.
#include <asf.h>
// Callback Func to enable LED
static void aon_sleep_timer_callback(void)
{
gpio_pin_set_output_level(LED_0_PIN, LED_0_ACTIVE);
}
//Configure LED
static void configure_gpio_pins(void)
{
struct gpio_config config_gpio_pin;
gpio_get_config_defaults(&config_gpio_pin);
config_gpio_pin.direction = GPIO_PIN_DIR_OUTPUT;
gpio_pin_set_config(LED_0_PIN, &config_gpio_pin);
gpio_pin_set_output_level(LED_0_PIN, LED_0_INACTIVE);
}
// Configure Timer with 10sec to overflow
static void configure_aon_sleep_timer(void)
{
struct aon_sleep_timer_config config_aon_sleep_timer;
aon_sleep_timer_get_config_defaults(&config_aon_sleep_timer);
config_aon_sleep_timer.counter = 320000; // Wait about 10sec
aon_sleep_timer_init(&config_aon_sleep_timer);
}
// Configure Callback and enable Interrupt
static void configure_aon_sleep_timer_callback(void)
{
aon_sleep_timer_register_callback(aon_sleep_timer_callback);
NVIC_EnableIRQ(AON_SLEEP_TIMER_IRQn);
}
int main(void)
{
// Setup Clock, LED and Timer
system_clock_config(CLOCK_RESOURCE_XO_26_MHZ, CLOCK_FREQ_26_MHZ);
configure_gpio_pins();
configure_aon_sleep_timer();
configure_aon_sleep_timer_callback();
// wait for timer to be active
while(!aon_sleep_timer_sleep_timer_active());
// Go to sleep
asm volatile ("wfi");
asm volatile ("nop");
// Enable LED immediately if sleep doesn't work
gpio_pin_set_output_level(LED_0_PIN, LED_0_ACTIVE);
while (true) {}
}
The code seems self-explanatory, but the WFI instruction doesn't work here. Can anyone help?
Just to add to the answer by Prestige Worldwide.
Make sure AO_GPIO0/1/2 are low (pulldown advised) and that no AON Sleep Timer interrupt is occurring, as these will wake the SAMB11 from ULP.
Also note that the ULP mode does not seem to work as expected while running a debug session over SWD.
I had all kinds of weird behaviour when debugging while sleeping/waking up, but no problems at all when running the same code without the debugger. Note this was using an Atmel-ICE. The Xplained board contains EDBG, and that debugger seems to work OK with ULP.
The resume callback has never fired for me, maybe a bug in the ASF. But I do not need it, as I can set up all GPIO/devices after the platform wait.
The WFI call works; it just receives an interrupt almost immediately after it is called, which causes the WFI call to stop blocking, and execution then continues to the next line.
You could safely remove everything below // Go to sleep which would allow the main function to return. The AON timer would still execute its callback. However, there are a couple of potential downsides to this approach:
This would not allow the SAMB11 to transition to a lower power mode.
This removes your while-loop at the end of main. In its current state, the while-loop isn't needed, but you might have plans to add code to it later.
Here is an example that configures the AON, configures the SAMB11 to use low power modes, and then loops waiting for platform and/or BLE events. Currently there are no events for the loop to receive. If you want the loop to receive events then you could modify the AON callback to post an event with the at_ble_event_user_defined_post function or modify main() to configure the BLE module before entering the loop.
Use the ASF Wizard to add any of the BLE modules to your project in order to compile this example.
#include <asf.h>
#include "platform.h"
// Configure LED
static void configure_gpio_pins(void)
{
struct gpio_config config_gpio_pin;
gpio_get_config_defaults(&config_gpio_pin);
config_gpio_pin.direction = GPIO_PIN_DIR_OUTPUT;
gpio_pin_set_config(LED_0_PIN, &config_gpio_pin);
gpio_pin_set_output_level(LED_0_PIN, LED_0_INACTIVE);
}
// Callback Func to toggle LED
static bool led_is_on = false;
static void aon_sleep_timer_callback(void)
{
configure_gpio_pins();
if(led_is_on) {
gpio_pin_set_output_level(LED_0_PIN, LED_0_INACTIVE);
led_is_on = false;
} else {
gpio_pin_set_output_level(LED_0_PIN, LED_0_ACTIVE);
led_is_on = true;
}
}
// Configure Timer to fire periodically
static void configure_aon_sleep_timer(void)
{
struct aon_sleep_timer_config config_aon_sleep_timer;
aon_sleep_timer_get_config_defaults(&config_aon_sleep_timer);
config_aon_sleep_timer.counter = 32000; // Wait about 1 sec
config_aon_sleep_timer.mode = AON_SLEEP_TIMER_RELOAD_MODE;
aon_sleep_timer_init(&config_aon_sleep_timer);
}
// Configure Callback and enable Interrupt
static void configure_aon_sleep_timer_callback(void)
{
aon_sleep_timer_register_callback(aon_sleep_timer_callback);
NVIC_EnableIRQ(AON_SLEEP_TIMER0_IRQn);
}
int main(void)
{
// Setup Clock
system_clock_config(CLOCK_RESOURCE_XO_26_MHZ, CLOCK_FREQ_26_MHZ);
plf_drv_status plf_status;
if((plf_status = platform_driver_init()) == STATUS_SUCCESS) {
// Setup LED and Timer
configure_gpio_pins();
configure_aon_sleep_timer();
configure_aon_sleep_timer_callback();
// wait for timer to be active
while(!aon_sleep_timer_sleep_timer_active());
// Go to sleep
release_sleep_lock();
while(true) {
// Replace platform_event_wait with at_ble_event_get if you would like to read the received event.
plf_status = platform_event_wait(0);
}
}
}
Regarding WFI, the following example shows how to turn off most of the interrupts and use WFI to block main(). The LED will toggle every time an interrupt is received. I don't recommend using this as I'm not sure why these interrupts are enabled initially. This is just intended to show how WFI can block on a SAMB11.
#include <asf.h>
#include "platform.h"
// Configure LED
static void configure_gpio_pins(void)
{
struct gpio_config config_gpio_pin;
gpio_get_config_defaults(&config_gpio_pin);
config_gpio_pin.direction = GPIO_PIN_DIR_OUTPUT;
gpio_pin_set_config(LED_0_PIN, &config_gpio_pin);
gpio_pin_set_output_level(LED_0_PIN, LED_0_INACTIVE);
}
// Callback Func to toggle LED
static bool led_is_on = false;
static void toggle_led(void)
{
configure_gpio_pins();
if(led_is_on) {
gpio_pin_set_output_level(LED_0_PIN, LED_0_INACTIVE);
led_is_on = false;
} else {
gpio_pin_set_output_level(LED_0_PIN, LED_0_ACTIVE);
led_is_on = true;
}
}
int main(void)
{
// Setup Clock
system_clock_config(CLOCK_RESOURCE_XO_26_MHZ, CLOCK_FREQ_26_MHZ);
// Disable all interrupts in the NVIC.
NVIC->ICER[0] = 0xFFFFFFFF;
// During testing, interrupts were still received about once per second; they stopped (the LED stopped flashing) after about 2 minutes.
while(true) {
__WFI();
toggle_led();
}
}
I am trying to mix Boost signals with Asio to do dispatch-based handler invocation. When the post method is invoked from a thread, io_service::run exits immediately and the callback handed to post is never invoked; the callback is a C++11 lambda. I am pasting the code for more analysis.
#include<iostream>
#include<thread>
#include<boost/signals2/signal.hpp>
#include<boost/asio.hpp>
static boost::asio::io_service svc;
static boost::signals2::signal<void(std::string)> textEntered;
static void
handleInputText(std::string text)
{
std::cout<<"handleInputText()"<<" text provided: "<<text;
return;
}
static void
worker()
{
sleep(2);
svc.post([](){
std::cout<<"\nRaising signal.";
std::string hello("hello world");
textEntered(hello);
});
return;
}
int main(int ac, char **av)
{
try
{
textEntered.connect(&handleInputText);
std::thread w(std::bind(&worker));
svc.run();
w.join();
}
catch(std::exception &ex)
{
std::cerr<<"main() exited with exception:"<<ex.what();
}
return 0;
}
You don't actually post any work to the service.
You start a thread that may eventually post work, but by that time run() on the main thread has already returned, because the service had no work queued.
Either run the io_service on a dedicated thread, or make sure it has an io_service::work object so run() doesn't return early.
Here's a fix with a dedicated service thread and a work item:
Live On Coliru
#include<boost/asio.hpp>
#include<iostream>
#include<boost/signals2.hpp>
#include<boost/thread.hpp>
#include<boost/make_shared.hpp>
static boost::asio::io_service svc;
static boost::shared_ptr<boost::asio::io_service::work> work_lock;
static boost::signals2::signal<void(std::string)> textEntered;
static void
handleInputText(std::string text)
{
std::cout<<"handleInputText()"<<" text provided: "<<text;
return;
}
static void
worker()
{
sleep(2);
svc.post([](){
std::cout<<"\nRaising signal.";
std::string hello("hello world");
textEntered(hello);
});
return;
}
int main()
{
try
{
work_lock = boost::make_shared<boost::asio::io_service::work>(svc);
textEntered.connect(&handleInputText);
boost::thread_group tg;
tg.create_thread(boost::bind(&boost::asio::io_service::run, &svc));
tg.create_thread(&worker);
boost::this_thread::sleep_for(boost::chrono::seconds(3));
work_lock.reset();
tg.join_all();
}
catch(std::exception &ex)
{
std::cerr<<"main() exited with exception:"<<ex.what();
}
return 0;
}
Prints:
Raising signal.handleInputText() text provided: hello world
I have to write a program that processes OpenCV Mat images with concurrent threads. Every thread picks an image from a queue, processes it and puts the result into another queue. I use a thread-safe template queue of Mat images (as you can see in the code).
But the threads behave strangely: if I launch the program multiple times, each time I obtain a different number of elaborations per thread (the counter "add" that I've inserted to monitor the number of images processed by each thread).
The first thread (zero) always does all its elaborations (in this example, 10) but the rest of the threads don't. Sometimes every thread does 10 elaborations, sometimes 3, sometimes 5 or 2... With the new changes (condition variables and critical sections) the threads do only 1 operation.
I don't know where the problem is or why this happens.
I paste my code here and ask you to check it and tell me, in your opinion, what the problem is... I'm desperate.
This is the code:
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <opencv2\highgui\highgui.hpp>
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <process.h>
#include <queue>
using namespace std;
using namespace cv;
/*thread safe queue*/
template<typename T>
class coda_concorr
{
private:
std::queue<T> la_coda;
HANDLE mutex;
public:
bool elemento;
coda_concorr()
{
mutex = CreateMutex(NULL,FALSE,NULL);
}
~coda_concorr()
{}
void push(T& data)
{
WaitForSingleObject(mutex,INFINITE);
la_coda.push(data);
ReleaseMutex(mutex);
}
bool vuota() const
{
WaitForSingleObject(mutex,INFINITE);
bool RetCode = la_coda.empty();
ReleaseMutex(mutex);
return RetCode;
}
bool try_pop(T& popped)
{
WaitForSingleObject(mutex,INFINITE);
while (la_coda.empty()){
ReleaseMutex(mutex);
return false;
}
WaitForSingleObject(mutex,INFINITE);
popped = la_coda.front();
la_coda.pop();
ReleaseMutex(mutex);
return true;
}
};
struct Args
{
coda_concorr<cv::Mat> in;
coda_concorr<cv::Mat> *out; //pointer to the next queue
};
CONDITION_VARIABLE NonVuoto1;
CONDITION_VARIABLE NonVuoto2;
CONDITION_VARIABLE NonVuoto3;
CONDITION_VARIABLE NonVuoto4;
CRITICAL_SECTION Lock1;
CRITICAL_SECTION Lock2;
CRITICAL_SECTION Lock3;
CRITICAL_SECTION Lock4;
bool stop;
//initial populating queue
void puts (void* param){
Args* arg = (Args*)param;
int i=0;
Mat image;
while(!arg->in.vuota()){
arg->in.try_pop(image);
arg->out->push(image);
i++;
WakeConditionVariable(&NonVuoto1);
}
//end
cout<<endl<<"Thread (PUSH) terminato con "<<i<<" elaborazioni."<<endl;
WakeConditionVariable(&NonVuoto1);
_endthread();
}
//grey funct
void grey (void *param){
Mat temp1,temp2;
int add = 0;
Args* arg = (Args*)param;
while(true){
EnterCriticalSection(&Lock1);
//if empty
while(arg->in.vuota() && !stop){
SleepConditionVariableCS(&NonVuoto1,&Lock1,INFINITE);
}
if(stop==true){
LeaveCriticalSection(&Lock1);
break;
}
arg->in.try_pop(temp1);
cvtColor(temp1,temp2,CV_BGR2GRAY);
arg->out->push(temp2);
add++;
cout<<endl<<"grey ha fatto: "<<add<<endl;
LeaveCriticalSection(&Lock1);
WakeConditionVariable(&NonVuoto2);
}
//end
cout<<endl<<"Thread (GREY) terminato con "<<add<<" elaborazioni."<<endl;
_endthread();
}
//threshold funct
void soglia(void *param){
Mat temp1a,temp2a;
int add=0;
Args* arg = (Args*)param;
while(true){
EnterCriticalSection(&Lock2);
while(arg->in.vuota() && stop == false){
SleepConditionVariableCS(&NonVuoto2,&Lock2,INFINITE);
}
if(stop==true){
LeaveCriticalSection(&Lock2);
break;
}
arg->in.try_pop(temp1a);
threshold(temp1a,temp2a,128,255,THRESH_BINARY);
arg->out->push(temp2a);
add++;
LeaveCriticalSection(&Lock2);
WakeConditionVariable(&NonVuoto3);
cout<<endl<<"soglia ha fatto: "<<add<<endl;
}
//end
cout<<endl<<"Thread (SOGLIA) terminato con "<<add<<" elaborazioni."<<endl;
_endthread();
}
//erode/dilate funct
void finitura(void *param){
Mat temp1b,temp2b,temp2c;
int add = 0;
Args* arg = (Args*)param;
//as consumer
while(true){
EnterCriticalSection(&Lock3);
while(arg->in.vuota() && stop == false){
SleepConditionVariableCS(&NonVuoto3,&Lock3,INFINITE);
}
if(stop==TRUE){
LeaveCriticalSection(&Lock3);
break;
}
arg->in.try_pop(temp1b);
erode(temp1b,temp2b,cv::Mat());
dilate(temp2b,temp2c,Mat());
arg->out->push(temp2c);
add++;
LeaveCriticalSection(&Lock3);
WakeConditionVariable(&NonVuoto4);
cout<<endl<<"erode ha fatto: "<<add<<endl;
}
//end
cout<<endl<<"Thread (ERODE) terminato con "<<add<<" elaborazioni."<<endl;
_endthread();
}
//contour funct
void contorno (void *param){
Mat temp;
int add=0;
Args* arg = (Args*)param;
//as consumer
while(true){
EnterCriticalSection(&Lock4);
while(arg->in.vuota() && stop == false){
SleepConditionVariableCS(&NonVuoto4,&Lock4,INFINITE);
}
if(stop==TRUE){
LeaveCriticalSection(&Lock4);
break;
}
//do the pop
arg->in.try_pop(temp);
//find the contours
vector<vector<Point>> contorni;
findContours(temp,contorni,CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
//draw the contours into an image
Mat dst(temp.size(), CV_8UC3, Scalar(0,0,0));
Scalar colors[3];
colors[0] = Scalar(255,0,0);
colors[1] = Scalar(0,255,0);
colors[2] = Scalar(0,0,255);
for (size_t idx = 0; idx < contorni.size(); idx++){
drawContours(dst,contorni,idx,colors[idx %3]);
}
//as producer
arg->out->push(dst);
add++;
cout<<endl<<"cont ha fatto: "<<add<<endl;
LeaveCriticalSection(&Lock4);
}
cout<<endl<<"Thread (CONTOUR) terminato con "<<add<<" elaborazioni."<<endl;
_endthread();
}
//main
int main()
{
coda_concorr<cv::Mat> ingresso;
coda_concorr<cv::Mat> uscita;
InitializeConditionVariable(&NonVuoto1);
InitializeConditionVariable(&NonVuoto2);
InitializeConditionVariable(&NonVuoto3);
InitializeConditionVariable(&NonVuoto4);
InitializeCriticalSection(&Lock1);
InitializeCriticalSection(&Lock2);
InitializeCriticalSection(&Lock3);
InitializeCriticalSection(&Lock4);
LARGE_INTEGER count1, count2, freq;
double elapsed;
Mat temp[10];
Mat out;
//queue declarations
Args dati0,dati1,dati2,dati3,dati4;
//start counters
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter (&count1);
for(int i=0;i<10;i++){
temp[i] = imread("C:/OPENCV/Test/imgtest/bird1.jpg",1);
ingresso.push(temp[i]);
}
//next queue pointer
dati0.in=ingresso;
dati0.out=&dati1.in;
dati1.out=&dati2.in;
dati2.out=&dati3.in;
dati3.out=&dati4.in;
dati4.out=&uscita;
//handle
HANDLE handle0,handle1,handle2,handle3,handle4;
//start threads
handle0 = (HANDLE) _beginthread(puts,0,&dati0);
handle1 = (HANDLE) _beginthread(grey,0,&dati1);
handle2 = (HANDLE) _beginthread(soglia,0,&dati2);
handle3 = (HANDLE) _beginthread(finitura,0,&dati3);
handle4 = (HANDLE) _beginthread(contorno,0,&dati4);
cout<<endl<<"..Join dei threads..."<<endl;
//join
WaitForSingleObject(handle0,INFINITE);
WaitForSingleObject(handle1,INFINITE);
WaitForSingleObject(handle2,INFINITE);
WaitForSingleObject(handle3,INFINITE);
WaitForSingleObject(handle4,INFINITE);
//stop counters
QueryPerformanceCounter (&count2);
CloseHandle(handle0);
CloseHandle(handle1);
CloseHandle(handle2);
CloseHandle(handle3);
CloseHandle(handle4);
elapsed = (count2.QuadPart - count1.QuadPart) * 1000.0 / freq.QuadPart;
cout <<endl<<"Tempo di esecuzione approssimativo: " <<elapsed<<" ms."<<endl;
system("PAUSE");
return 0;
}
If the previous thread puts all its images on the queue, why doesn't the next thread do the same?
I'm on Windows 7 64-bit with Visual C++ 2010 and OpenCV 2.4.4.
Please help me find where the problem is...
It seems what you want to implement is similar to assembly-line work in a factory, where everyone does his job and tosses it to the next worker until all the work is done. Please correct me if I am wrong. The idiom in each of your thread functions is:
void dowork(){
while(noinput()){
sleep(0.01);
}
while(getSomeInput()){
processInput();
queueResult();
}
displayAmountOfWorkDone();
}
You are successful in providing mutual exclusion with your mutex. The issue with your design is that once a thread has observed its input queue to be non-empty, it consumes all the work and then exits. Due to thread scheduling and to the processing time of processInput(), a worker can consume its inputs at a higher rate than the worker before it can produce them. For instance, taking the queue between the initial producer (the puts thread) and its consumer (the grey thread), this interleaving is possible:
grey: sees 0, sleeps
puts: sees 10 in its input, adds 1 to grey's queue
grey: sees 1, processes it
grey: sees 0, exits having processed only 1
puts: adds 2
puts: adds 3
puts: adds 4
....
puts: adds 10
You need to change the design. There should be a different mechanism to signal that the work is over; right now you are using the amount of input that a thread happens to observe. A quick and dirty fix would be to give each thread the amount of input it is expected to process, and rewrite the algorithm as:
void dowork(){
while(workIsnotDone()){//local test, no lock
if(getSomeInput()){
processInput();
queueResult();
updateWorkProgress();//local operation
}
else{
sleep(0.01);
}
}
displayAmountOfWorkDone();
}
A better alternative would be to set up the class coda_concorr as a producer-consumer mechanism. For that you can add a condition variable: each thread is a consumer on one queue and a producer on another. You can also add a field to explicitly signal something like "no more inputs". Take a look at this other question on SO; a sketch follows below.
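Here is a minimal sketch (my own illustration, not the original class) of coda_concorr reworked with the same Win32 primitives you already initialize: a CRITICAL_SECTION, a CONDITION_VARIABLE, and an explicit "closed" flag so that consumers exit only when told there is no more input. The names chiudi/chiusa are invented for the sketch.
#include <windows.h>
#include <queue>
template<typename T>
class coda_concorr
{
private:
    std::queue<T> la_coda;
    CRITICAL_SECTION cs;
    CONDITION_VARIABLE non_vuota;
    bool chiusa; // set once the producer has pushed its last element
public:
    coda_concorr() : chiusa(false)
    {
        InitializeCriticalSection(&cs);
        InitializeConditionVariable(&non_vuota);
    }
    ~coda_concorr()
    {
        DeleteCriticalSection(&cs);
    }
    void push(const T& data)
    {
        EnterCriticalSection(&cs);
        la_coda.push(data);
        LeaveCriticalSection(&cs);
        WakeConditionVariable(&non_vuota);
    }
    // Producer calls this once after its last push ("no more inputs").
    void chiudi()
    {
        EnterCriticalSection(&cs);
        chiusa = true;
        LeaveCriticalSection(&cs);
        WakeAllConditionVariable(&non_vuota);
    }
    // Blocks until an element is available; returns false only when the
    // queue is closed and drained, which is the consumer's cue to exit.
    bool wait_and_pop(T& popped)
    {
        EnterCriticalSection(&cs);
        while (la_coda.empty() && !chiusa)
            SleepConditionVariableCS(&non_vuota, &cs, INFINITE);
        if (la_coda.empty()) // closed and nothing left to consume
        {
            LeaveCriticalSection(&cs);
            return false;
        }
        popped = la_coda.front();
        la_coda.pop();
        LeaveCriticalSection(&cs);
        return true;
    }
};
Each worker would then loop on wait_and_pop() on its input queue until it returns false, push its result to the output queue, and call chiudi() on the output queue before exiting, so the shutdown propagates along the pipeline. Note that with a CRITICAL_SECTION member the queues must be shared by pointer or reference, not copied into Args as the current code does with dati0.in = ingresso.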
I'm writing a UDP server that currently receives data over UDP, wraps it up in an object and places it into a concurrent queue. The concurrent queue is the implementation provided here: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
A pool of worker threads pull data out of the queue for processing.
The queue is defined globally as:
static concurrent_queue<boost::shared_ptr<Msg> > g_work_queue_;
Now the problem I'm having is that if I simply write a function to produce data and insert it into the queue, and create some consumer threads to pull it out, it works fine.
But the moment I add my UDP-based producer, the worker threads stop being notified of the arrival of data on the queue.
I've tracked the issue down to the end of the push function in concurrent_queue.
Specifically, the line the_condition_variable.notify_one(); does not return when using my network code.
So the problem is related to the way I've written the networking code.
Here is what it looks like.
enum
{
MAX_LENGTH = 1500
};
class Msg
{
public:
Msg()
{
static int i = 0;
i_ = i++;
printf("Construct ObbsMsg: %d\n", i_);
}
~Msg()
{
printf("Destruct ObbsMsg: %d\n", i_);
}
const char* toString() { return data_; }
private:
friend class server;
udp::endpoint sender_endpoint_;
char data_[MAX_LENGTH];
int i_;
};
class server
{
public:
server::server(boost::asio::io_service& io_service)
: io_service_(io_service),
socket_(io_service, udp::endpoint(udp::v4(), PORT))
{
waitForNextMessage();
}
void server::waitForNextMessage()
{
printf("Waiting for next msg\n");
next_msg_.reset(new Msg());
socket_.async_receive_from(
boost::asio::buffer(next_msg_->data_, MAX_LENGTH), sender_endpoint_,
boost::bind(&server::handleReceiveFrom, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void server::handleReceiveFrom(const boost::system::error_code& error, size_t bytes_recvd)
{
if (!error && bytes_recvd > 0) {
printf("got data: %s. Adding to work queue\n", next_msg_->toString());
g_work_queue.push(next_msg_); // Add received msg to work queue
waitForNextMessage();
} else {
waitForNextMessage();
}
}
private:
boost::asio::io_service& io_service_;
udp::socket socket_;
udp::endpoint sender_endpoint_;
boost::shared_ptr<Msg> next_msg_;
};
int main(int argc, char* argv[])
{
try{
boost::asio::io_service io_service;
server s(io_service);
io_service.run();
}
catch(std::exception& e){
std::cerr << "Exception: " << e.what() << std::endl;
}
return 0;
}
Now I've found that if handleReceiveFrom is able to return, then notify_one() in concurrent_queue returns. So I think it's because I have a recursive loop.
So what's the correct way to start listening for new data? And is the async UDP server example flawed, given that I based mine on what it was already doing?
EDIT: Ok the issue just got even weirder.
What I haven't mentioned here is that I have a class called processor.
Processor looks like this:
class processor
{
public:
processor::processor(int thread_pool_size) :
thread_pool_size_(thread_pool_size) { }
void start()
{
boost::thread_group threads;
for (std::size_t i = 0; i < thread_pool_size_; ++i){
threads.create_thread(boost::bind(&ObbsServer::worker, this));
}
}
void worker()
{
while (true){
boost::shared_ptr<ObbsMsg> msg;
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
private:
int thread_pool_size_;
};
Now it seems that if I extract the worker function out on its own and start the threads
from main, it works! Can someone explain why a thread behaves as I would expect outside of a class, but inside it has side effects?
EDIT2: Now it's getting even weirder still
I pulled out two functions (exactly the same).
One is called consumer, the other worker.
i.e.
void worker()
{
while (true){
boost::shared_ptr<ObbsMsg> msg;
printf("waiting for msg\n");
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
void consumer()
{
while (true){
boost::shared_ptr<ObbsMsg> msg;
printf("waiting for msg\n");
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
Now, consumer lives at the top of the server.cpp file, i.e. where our server code lives as well.
On the other hand, worker lives in the processor.cpp file.
Now I'm not using processor at all at the moment. The main function now looks like this:
void consumer();
void worker();
int main(int argc, char* argv[])
{
try {
boost::asio::io_service io_service;
server net(io_service);
//processor s(7);
boost::thread_group threads;
for (std::size_t i = 0; i < 7; ++i){
threads.create_thread(worker); // this doesn't work
// threads.create_thread(consumer); // THIS WORKS!?!?!?
}
// s.start();
printf("Server Started...\n");
boost::asio::io_service::work work(io_service);
io_service.run();
printf("exiting...\n");
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
Why is it that consumer is able to receive the queued items, but worker is not?
They are identical implementations with different names.
This isn't making any sense. Any ideas?
Here is the sample output when receiving the text "hello world":
Output 1: not working, when calling the worker function or using the processor class.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Output 2: works when calling the consumer function, which is identical to the worker function.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Got msg: hello world <----- this is what I've been wanting to see!
Destruct ObbsMsg: 0
waiting for msg
To answer my own question.
It seems the problem is to do with the declaration of g_work_queue;
Declared in a header file as: static concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
It seems that declaring it static is not what I want to be doing.
Apparently that creates a separate queue object for each compiled .o file, and obviously separate locks, etc.
This explains why it worked when the queue was being manipulated by a consumer and a producer in the same source file, but not when they were in different files: the threads were actually waiting on different objects.
So I've redeclared the work queue like so:
-- workqueue.h --
extern concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
-- workqueue.cpp --
#include "workqueue.h"
concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
Doing this fixes the problem.