async_write_some callback not called after delay - boost

My callback for async_write_some is not called after a one-second sleep. If I am starting an io_service worker thread for every write, why is the callback not being called?
header
boost::system::error_code error_1;
boost::shared_ptr <boost::asio::io_service> io_service_1;
boost::shared_ptr <boost::asio::ip::tcp::socket> socket_1;
connect
void eth_socket::open_eth_socket (void)
{
// 1. reset io services
io_service_1.reset();
io_service_1 = boost::make_shared <boost::asio::io_service> ();
// 2. create endpoint
boost::asio::ip::tcp::endpoint remote_endpoint(
boost::asio::ip::address::from_string("10.0.0.3"),
socket_1_port
);
// 3. reset socket
socket_1.reset(new boost::asio::ip::tcp::socket(*io_service_1));
// 4. connect socket
socket_1->async_connect(remote_endpoint,
boost::bind(
&eth_socket::socket_1_connect_callback,
this, boost::asio::placeholders::error
)
);
// 5. start io_service_1 run thread after giving it work
boost::thread t(boost::bind(&boost::asio::io_service::run, *&io_service_1));
return;
}
write
void eth_socket::write_data (std::string data)
{
// 1. check socket status
if (!socket_1->is_open())
{
WARNING << "socket_1 is not open";
throw -3;
}
// 2. start asynchronous write
socket_1->async_write_some(
boost::asio::buffer(data.c_str(), data.size()),
boost::bind(
&eth_socket::socket_1_write_data_callback,
this, boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred
)
);
// 3. start io_service_1 run thread after giving it work
boost::thread t(boost::bind(&boost::asio::io_service::run, *&io_service_1));
return;
}
callback
void eth_socket::socket_1_write_data_callback (const boost::system::error_code& error, size_t bytes_transferred)
{
// 1. check for errors
if (error)
{
ERROR << "error.message() >> " << error.message().c_str();
return;
}
if (socket_1.get() == NULL || !socket_1->is_open())
{
WARNING << "serial_port_1 is not open";
return;
}
INFO << "data written to 10.0.0.3:1337 succeeded; bytes_transferred = " << bytes_transferred;
return;
}
test
open_eth_socket();
write_data("Hello"); // callback called
write_data("Hello"); // callback called
write_data("Hello"); // callback called
sleep(1);
write_data("Hello"); // callback not called after sleep

boost::thread t(boost::bind(&boost::asio::io_service::run, *&io_service_1));
That's weird for a number of reasons.
You should not "run" io_services for each operation. Instead, run them steadily while operations may be posted. Optionally use io_service::work to prevent run from returning.
You should not (have to) create threads for each operation. If anything, it's recipe for synchronization issues (Why do I need strand per connection when using boost::asio?)
When running io_service again after it returned (without error) you should call reset() first, as per documentation (Why must io_service::reset() be called?)
You destruct a non-detached thread - likely before it had completed. If you had used std::thread this would even have caused immediate abnormal program termination. It's bad practice to not-join non-detached threads (and I'd add it's iffy to use detached threads without explicit synchronization on thread termination). See Why is destructor of boost::thread detaching joinable thread instead of calling terminate() as standard suggests?
I'd add to these top-level concerns:
the smell of using names like socket_1 (just call it socket_ and instantiate another object with a descriptive name to contain the other socket_). I'm not sure, but the question does raise the suspicion that these might even be global variables (I hope that's not the case)
throw-ing raw integers, really?
You are risking full-on data races by destroying the io_service without ever checking that the worker threads have completed.
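To make the reset() point concrete, here is a minimal, self-contained sketch (all names are illustrative, not taken from the question): once run() returns because the service ran out of work, any later run() returns immediately until reset() clears the stopped state.

#include <boost/asio.hpp>
#include <iostream>

int main() {
    boost::asio::io_service svc;

    svc.post([] { std::cout << "first batch of work\n"; });
    svc.run();   // runs the handler, then returns because there is no more work

    svc.post([] { std::cout << "second batch of work\n"; });
    svc.run();   // returns immediately: the service is still flagged as stopped

    svc.reset(); // clear the stopped flag before running again
    svc.run();   // now the second handler actually runs
}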
More Undefined Behaviour here:
_sock.async_write_some(
ba::buffer(data.c_str(), data.size()),
You pass a reference to the parameter data, which goes out of scope; when the async operation completes, the buffer will be a dangling reference.
There's some obvious copy/paste trouble going on here:
if (socket_1.get() == NULL || !socket_1->is_open())
{
WARNING << "serial_port_1 is not open";
return;
}
I'd actually say this stems from precisely the same source that led to the variable names being serial_port_1 and socket_1.
Some Cleanup
Simplify. The question didn't include self-contained code, so nothing here is complete either, but at least note the many points of simplification:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>
namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;
#define ERROR std::cerr
#define WARNING std::cerr
#define INFO std::cerr
struct eth_socket {
~eth_socket() {
_work.reset();
if (_worker.joinable())
_worker.join(); // wait
}
void open(std::string address);
void write_data(std::string data);
private:
void connected(error_code error) {
if (error)
ERROR << "Connect failed: " << error << "\n";
else
INFO << "Connected to " << _sock.remote_endpoint() << "\n";
}
void written(error_code error, size_t bytes_transferred);
private:
ba::io_service _svc;
boost::optional<ba::io_service::work> _work{ _svc };
boost::thread _worker{ [this] { _svc.run(); } };
std::string _data;
unsigned short _port = 6767;
tcp::socket _sock{ _svc };
};
void eth_socket::open(std::string address) {
tcp::endpoint remote_endpoint(ba::ip::address::from_string(address), _port);
_sock.async_connect(remote_endpoint, boost::bind(&eth_socket::connected, this, _1));
}
void eth_socket::write_data(std::string data) {
_data = data;
_sock.async_write_some(ba::buffer(_data), boost::bind(&eth_socket::written, this, _1, _2));
}
void eth_socket::written(error_code error, size_t bytes_transferred) {
INFO << "data written to " << _sock.remote_endpoint() << " " << error.message() << ";"
<< "bytes_transferred = " << bytes_transferred << "\n";
}
int main() {
{
eth_socket s;
s.open("127.0.0.1");
s.write_data("Hello"); // callback called
s.write_data("Hello"); // callback called
s.write_data("Hello"); // callback called
boost::this_thread::sleep_for(boost::chrono::seconds(1));
s.write_data("Hello"); // callback not called after sleep
} // orderly worker thread join here
}

My problems are now fixed thanks to sehe's help and prayer.
This line in open_eth_socket:
boost::thread t(boost::bind(&boost::asio::io_service::run, *&io_service_1));
is now this:
boost::shared_ptr <boost::thread> io_service_1_thread; // in header
if (io_service_1_thread.get()) io_service_1_thread->interrupt();
io_service_1_thread.reset(new boost::thread (boost::bind(&eth_socket::run_io_service_1, this)));
I added this function:
void eth_socket::run_io_service_1 (void)
{
while (true) // work forever
{
boost::asio::io_service::work work(*io_service_1);
io_service_1->run();
io_service_1->reset(); // not sure if this will cause problems yet
INFO << "io_service_1 run complete";
boost::this_thread::sleep (boost::posix_time::milliseconds (100));
}
return;
}
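A note on the loop above: because the work object keeps run() from returning, run() simply blocks until the service is stopped or a handler throws, so the loop body will normally not re-execute. A simpler pattern, and essentially what sehe's cleanup class above does, is to hold a single work guard for the object's lifetime, call run() once on one long-lived thread, and release the guard and join the thread on shutdown. A minimal sketch with illustrative names (not the poster's actual class):

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <memory>

class eth_socket_alt { // illustrative name only
public:
    eth_socket_alt()
        : work_(new boost::asio::io_service::work(io_service_))
        , thread_([this] { io_service_.run(); }) // run() blocks as long as work_ exists
    {}

    ~eth_socket_alt() {
        work_.reset();          // let run() return once pending handlers have finished
        if (thread_.joinable())
            thread_.join();     // join instead of interrupting the worker
    }

private:
    boost::asio::io_service io_service_;
    std::unique_ptr<boost::asio::io_service::work> work_;
    boost::thread thread_;
};

int main() { eth_socket_alt s; } // the worker thread is joined in the destructor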

Related

Get notification in Asio if `dispatch` or `post` have finished

I want to know when dispatch has finished with some specific work:
service.dispatch(&some_work);
I want to know this because I need to restart some_work if it has finished.
struct work
{
std::shared_ptr<asio::io_service> io_service;
bool ready;
std::mutex m;
template <class F>
void do_some_work(F&& f)
{
if (io_service && ready) {
m.lock();
ready = false;
m.unlock();
io_service->dispatch([&f, this]() {
f();
m.lock();
ready = true;
m.unlock();
});
}
}
work(std::shared_ptr<asio::io_service> io_service)
: io_service(io_service)
, ready(true)
{
}
};
int
main()
{
auto service = std::make_shared<asio::io_service>();
auto w = std::make_shared<asio::io_service::work>(*service);
std::thread t1([&] { service->run(); });
work some_work{ service };
for (;;) {
some_work.do_some_work([] {
std::cout << "Start long draw on thread: " << std::this_thread::get_id()
<< std::endl;
std::this_thread::sleep_for(std::chrono::seconds(5));
std::cout << "End long draw on thread: " << std::this_thread::get_id()
<< std::endl;
});
}
w.reset();
t1.join();
}
There are some problems with the code; for example, if some_work goes out of scope, then the running task would still write to ready.
Does something like this already exist in Asio?
For lifetime issues, the common idiom is indeed to use shared pointers, examples:
Ensure no new wait is accepted by boost::deadline_timer unless previous wait is expired
Boost::Asio Async write failed
Other than that, the completion handler is already that event. So you would do:
void my_async_loop() {
auto This = shared_from_this();
socket_.async_read(buffer(m_buffer, ...),
[=,This](error_code ec, size_t transferred) {
if (!ec) {
// do something
my_async_loop();
}
}
);
}
This will re-schedule another async operation once the previous one has completed.
On the subject of thread safety, see Why do I need strand per connection when using boost::asio?
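Since that strand question comes up more than once in this document, here is a minimal sketch of the idea (all names are illustrative, not from the question): a strand wraps one connection's completion handlers so they never run concurrently, even when several threads call run().

#include <boost/asio.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>

namespace ba = boost::asio;

struct connection : boost::enable_shared_from_this<connection> {
    explicit connection(ba::io_service& svc) : strand_(svc), socket_(svc) {}

    void start_read() {
        auto self = shared_from_this(); // keep this connection alive while the read is pending
        socket_.async_read_some(
            ba::buffer(buffer_),
            strand_.wrap([self](boost::system::error_code ec, std::size_t /*n*/) {
                if (!ec)
                    self->start_read(); // chain the next read; handlers never overlap
            }));
    }

    ba::io_service::strand strand_; // serializes all handlers of this connection
    ba::ip::tcp::socket socket_;
    char buffer_[1024];
};

// usage (assuming an io_service `svc` being run by one or more threads, and a connected socket):
// boost::make_shared<connection>(svc)->start_read();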

set_option: Invalid argument when setting option boost::asio::ip::multicast::join_group inside lambda

This code is intended to receive UDP multicast messages using Boost.Asio. A Boost system_error exception is thrown by the code below when the second set_option() call inside the receiver's constructor is made (to join the multicast group). The complaint is "Invalid argument". This seems to be related to the fact that the constructor runs inside a lambda defined inside IO::doIO(), because using a member function for the std::thread with identical functionality (IO::threadFunc()) instead results in the expected behavior (no exceptions thrown).
Why is this, and how can I fix it so that I may use a lambda?
//g++ -std=c++11 doesntWork.cc -lboost_system -lpthread
#include <iostream>
#include <thread>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
class IO
{
public:
class receiver
{
public:
receiver(
boost::asio::io_service &io_service,
const boost::asio::ip::address &multicast_address,
const unsigned short portNumber) : _socket(io_service)
{
const boost::asio::ip::udp::endpoint listen_endpoint(
boost::asio::ip::address::from_string("0.0.0.0"), portNumber);
_socket.open(listen_endpoint.protocol());
_socket.set_option(boost::asio::ip::udp::socket::reuse_address(true));
_socket.bind(listen_endpoint);
std::cerr << " About to set option join_group" << std::endl;
_socket.set_option(boost::asio::ip::multicast::join_group(
multicast_address));
_socket.async_receive_from(
boost::asio::buffer(_data),
_sender_endpoint,
boost::bind(&receiver::handle_receive_from, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
private:
void handle_receive_from(
const boost::system::error_code &error,
const size_t bytes_recvd)
{
if (!error)
{
for(const auto &c : _data)
std::cout << c;
std::cout << std::endl;
}
}
private:
boost::asio::ip::udp::socket _socket;
boost::asio::ip::udp::endpoint _sender_endpoint;
std::vector<unsigned char> _data;
}; // receiver class
void doIO()
{
const boost::asio::ip::address multicast_address =
boost::asio::ip::address::from_string("235.0.0.1");
const unsigned short portNumber = 9999;
// _io_service_thread = std::thread(
// &IO::threadFunc, this, multicast_address, portNumber);
_io_service_thread = std::thread([&, this]{
try {
// Construct an asynchronous receiver
receiver r(_io_service, multicast_address, portNumber);
// Now run the IO service
_io_service.run();
}
catch(const boost::system::system_error &e)
{
std::cerr << e.what() << std::endl;
throw e; //std::terminate()
}
});
}
void threadFunc(
const boost::asio::ip::address &multicast_address,
const unsigned short portNumber)
{
try {
// Construct an asynchronous receiver
receiver r(_io_service, multicast_address, portNumber);
// Now run the IO service
_io_service.run();
}
catch(const boost::system::system_error &e)
{
std::cerr << e.what() << std::endl;
throw e; //std::terminate()
}
}
private:
boost::asio::io_service _io_service;
std::thread _io_service_thread;
}; // IO class
int main()
{
IO io;
io.doIO();
std::cout << "IO Service is running" << std::endl;
sleep(9999);
}
There is a race condition that can result in dangling references being accessed, invoking undefined behavior. The lambda capture-list is capturing the automatic variables, multicast_address and portNumber, by reference. However, the lifetime of these objects may end before their usage within _io_service_thread:
void doIO()
{
const boost::asio::ip::address multicast_address = /* ... */;
const unsigned short portNumber = /* ... */;
_io_service_thread = std::thread([&, this] {
// multicast_address and portNumber's lifetime may have already ended.
receiver r(_io_service, multicast_address, portNumber);
// ...
});
} // multicast_address and portNumber are destroyed.
To resolve this, consider capturing by value so that the thread operates on copies whose lifetimes will remain valid until the end of the thread. Change:
std::thread([&, this] { /* ... */ }
to:
std::thread([=] { /* ... */ }
This issue does not present itself when std::thread is constructed with the function and all its arguments, as the std::thread constructor will copy/move all provided arguments into thread-accessible storage.
Also, be aware that the destruction of the _io_service_thread object will invoke std::terminate() if it is still joinable within IO's destructor. To avoid this behavior, consider explicitly joining the _io_service_thread from the main thread.
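A minimal sketch of that, assuming the join happens in IO's destructor (a stop() call is added here so that run() actually returns):

~IO()
{
    _io_service.stop();                // make run() return in the worker thread
    if (_io_service_thread.joinable())
        _io_service_thread.join();     // prevents std::terminate() at destruction
}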

Boost::condition_variable.notify_one() causes segmentation fault 11 exception

I'm trying to run an example from websocket++ that consists of receiving messages from websocket clients and broadcasting them to all connected clients, but I'm having problems with thread synchronization.
In the code example, the process_messages method waits for messages on a std::queue:
boost::unique_lock<boost::mutex> lock(m_action_lock);
while(m_actions.empty()) {
m_action_cond.wait(lock);
}
The on_message handler locks the queue before pushing a new message received from a client, but when it tries to notify_one(), the program fails with a segmentation fault 11.
void on_message(connection_hdl hdl, server::message_ptr msg) {
// queue message up for sending by processing thread
{
boost::unique_lock<boost::mutex> lock(m_action_lock);
m_actions.push(action(MESSAGE,msg));
lock.unlock();
}
m_action_cond.notify_one();
}
The only way the program works is if I comment out the wait(lock), but I'm not sure whether that is safe.
Could somebody help me find the cause of the segmentation fault?
The complete code is:
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>
#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
typedef websocketpp::server<websocketpp::config::asio> server;
using websocketpp::connection_hdl;
using websocketpp::lib::placeholders::_1;
using websocketpp::lib::placeholders::_2;
using websocketpp::lib::bind;
/* on_open insert connection_hdl into channel
* on_close remove connection_hdl from channel
* on_message queue send to all channels
*/
enum action_type {
SUBSCRIBE,
UNSUBSCRIBE,
MESSAGE
};
struct action {
action(action_type t, connection_hdl h) : type(t), hdl(h) {}
action(action_type t, server::message_ptr m) : type(t), msg(m) {}
action_type type;
websocketpp::connection_hdl hdl;
server::message_ptr msg;
};
class broadcast_server {
public:
broadcast_server() {
// Initialize Asio Transport
m_server.init_asio();
// Register handler callbacks
m_server.set_open_handler(bind(&broadcast_server::on_open,this,::_1));
m_server.set_close_handler(bind(&broadcast_server::on_close,this,::_1));
m_server.set_message_handler(bind(&broadcast_server::on_message,this,::_1,::_2));
}
void run(uint16_t port) {
// listen on specified port
m_server.listen(port);
// Start the server accept loop
m_server.start_accept();
// Start the ASIO io_service run loop
try {
m_server.run();
} catch (const std::exception & e) {
std::cout << e.what() << std::endl;
} catch (websocketpp::lib::error_code e) {
std::cout << e.message() << std::endl;
} catch (...) {
std::cout << "other exception" << std::endl;
}
}
void on_open(connection_hdl hdl) {
boost::unique_lock<boost::mutex> lock(m_action_lock);
//std::cout << "on_open" << std::endl;
m_actions.push(action(SUBSCRIBE,hdl));
lock.unlock();
m_action_cond.notify_one();
}
void on_close(connection_hdl hdl) {
boost::unique_lock<boost::mutex> lock(m_action_lock);
//std::cout << "on_close" << std::endl;
m_actions.push(action(UNSUBSCRIBE,hdl));
lock.unlock();
m_action_cond.notify_one();
}
void on_message(connection_hdl hdl, server::message_ptr msg) {
// queue message up for sending by processing thread
boost::unique_lock<boost::mutex> lock(m_action_lock);
//std::cout << "on_message" << std::endl;
m_actions.push(action(MESSAGE,msg));
lock.unlock();
m_action_cond.notify_one();
}
void process_messages() {
while(1) {
boost::unique_lock<boost::mutex> lock(m_action_lock);
while(m_actions.empty()) {
m_action_cond.wait(lock);
}
action a = m_actions.front();
m_actions.pop();
lock.unlock();
if (a.type == SUBSCRIBE) {
boost::unique_lock<boost::mutex> lock(m_connection_lock);
m_connections.insert(a.hdl);
} else if (a.type == UNSUBSCRIBE) {
boost::unique_lock<boost::mutex> lock(m_connection_lock);
m_connections.erase(a.hdl);
} else if (a.type == MESSAGE) {
boost::unique_lock<boost::mutex> lock(m_connection_lock);
con_list::iterator it;
for (it = m_connections.begin(); it != m_connections.end(); ++it) {
m_server.send(*it,a.msg);
}
} else {
// undefined.
}
}
}
private:
typedef std::set<connection_hdl,std::owner_less<connection_hdl>> con_list;
server m_server;
con_list m_connections;
std::queue<action> m_actions;
boost::mutex m_action_lock;
boost::mutex m_connection_lock;
boost::condition_variable m_action_cond;
};
int main() {
broadcast_server server;
// Start a thread to run the processing loop
boost::thread(bind(&broadcast_server::process_messages,&server));
// Run the asio loop with the main thread
server.run(9002);
}
I can reproduce this behavior when Boost is compiled using g++ and libstdc++ but the program linking to it is compiled using clang and libc++. The libstdc++ and libc++ standard libraries are not ABI compatible, so you will need to build everything with one or everything with the other.
Details on how to compile Boost in C++11 mode with clang/libc++:
How to compile/link Boost with clang++/libc++?

Interrupting a boost::thread while interruptions are disabled

While using boost::thread I have come across this interruption problem. When I call interrupt() on thread B from thread A while B has interruption disabled (boost::this_thread::disable_interruption di), the interrupt seems to be lost.
That is to say, if I put a boost::this_thread::interruption_point() after interruption has been re-enabled, it does not throw the boost::thread_interrupted exception.
Is this the expected behavior or am I doing something wrong?
Thanks
Nothing in the documentation says that interruptions are re-triggered when thread B re-enables interruptions. I've tried a simple test program and can confirm the behaviour you describe.
After re-enabling interruptions, you can check this_thread::interruption_requested() to see if there was an interruption requested while interruptions were disabled. If an interruption was indeed requested, you can throw a thread_interrupted exception yourself.
Here's a working program that demonstrates this:
#include <boost/thread.hpp>
#include <iostream>
using namespace std;
using namespace boost;
void threadB()
{
int ticks = 0;
this_thread::disable_interruption* disabler = 0;
try
{
while (ticks < 20)
{
if (ticks == 5)
{
cout << "Disabling interruptions\n";
disabler = new this_thread::disable_interruption;
}
if (ticks == 15)
{
cout << "Re-enabling interruptions\n";
delete disabler;
if (this_thread::interruption_requested())
{
cout << "Interrupt requested while disabled\n";
throw thread_interrupted();
}
}
cout << "Tick " << ticks << "\n";
thread::sleep(get_system_time() + posix_time::milliseconds(100));
++ticks;
}
}
catch (thread_interrupted)
{
cout << "Interrupted\n";
}
}
int main()
{
thread b(&threadB);
thread::sleep(get_system_time() + posix_time::milliseconds(1000));
b.interrupt();
cout << "main -> Interrupt!\n";
b.join();
}
Hope this helps.

Side effects of global static variables

I'm writing a UDP server that receives data over UDP, wraps it up in an object, and places it into a concurrent queue. The concurrent queue is the implementation provided here: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
A pool of worker threads pull data out of the queue for processing.
The queue is defined globally as:
static concurrent_queue<boost::shared_ptr<Msg> > g_work_queue_;
Now, the problem I'm having is that if I simply write a function to produce data and insert it into the queue, and create some consumer threads to pull the items out, it works fine.
But the moment I add my UDP-based producer, the worker threads stop being notified of the arrival of data on the queue.
I've tracked the issue down to the end of the push function in concurrent_queue. Specifically, this line:
the_condition_variable.notify_one();
does not return when using my network code.
So the problem is related to the way I've written the networking code.
Here is what it looks like.
enum
{
MAX_LENGTH = 1500
};
class Msg
{
public:
Msg()
{
static int i = 0;
i_ = i++;
printf("Construct ObbsMsg: %d\n", i_);
}
~Msg()
{
printf("Destruct ObbsMsg: %d\n", i_);
}
const char* toString() { return data_; }
private:
friend class server;
udp::endpoint sender_endpoint_;
char data_[MAX_LENGTH];
int i_;
};
class server
{
public:
server(boost::asio::io_service& io_service)
: io_service_(io_service),
socket_(io_service, udp::endpoint(udp::v4(), PORT))
{
waitForNextMessage();
}
void waitForNextMessage()
{
printf("Waiting for next msg\n");
next_msg_.reset(new Msg());
socket_.async_receive_from(
boost::asio::buffer(next_msg_->data_, MAX_LENGTH), sender_endpoint_,
boost::bind(&server::handleReceiveFrom, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handleReceiveFrom(const boost::system::error_code& error, size_t bytes_recvd)
{
if (!error && bytes_recvd > 0) {
printf("got data: %s. Adding to work queue\n", next_msg_->toString());
g_work_queue.push(next_msg_); // Add received msg to work queue
waitForNextMessage();
} else {
waitForNextMessage();
}
}
private:
boost::asio::io_service& io_service_;
udp::socket socket_;
udp::endpoint sender_endpoint_;
boost::shared_ptr<Msg> next_msg_;
};
int main(int argc, char* argv[])
{
try {
boost::asio::io_service io_service;
server s(io_service);
io_service.run();
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << std::endl;
}
return 0;
}
Now I've found that if handleReceiveFrom is able to return, then notify_one() in concurrent_queue returns. So I think it's because I have a recursive loop.
So what's the correct way to start listening for new data? And is the async UDP server example flawed? I based my code off what it was already doing.
EDIT: Ok the issue just got even weirder.
What I haven't mentioned here is that I have a class called processor.
Processor looks like this:
class processor
{
public:
processor(int thread_pool_size) :
thread_pool_size_(thread_pool_size) { }
void start()
{
boost::thread_group threads;
for (std::size_t i = 0; i < thread_pool_size_; ++i){
threads.create_thread(boost::bind(&processor::worker, this));
}
}
void worker()
{
while (true){
boost::shared_ptr<ObbsMsg> msg;
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
private:
int thread_pool_size_;
};
Now it seems that if I extract the worker function out on its own and start the threads from main, it works! Can someone explain why a thread function behaves as I would expect outside of a class, but inside one it has side effects?
EDIT2: Now it's getting even weirder still
I pulled out two functions (exactly the same).
One is called consumer, the other worker.
i.e.
void worker()
{
while (true){
boost::shared_ptr<ObbsMsg> msg;
printf("waiting for msg\n");
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
void consumer()
{
while (true){
boost::shared_ptr<ObbsMsg> msg;
printf("waiting for msg\n");
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
Now, consumer lives at the top of the server.cpp file, i.e. where our server code lives as well.
On the other hand, worker lives in the processor.cpp file.
Now I'm not using processor at all at the moment. The main function now looks like this:
void consumer();
void worker();
int main(int argc, char* argv[])
{
try {
boost::asio::io_service io_service;
server net(io_service);
//processor s(7);
boost::thread_group threads;
for (std::size_t i = 0; i < 7; ++i){
threads.create_thread(worker); // this doesn't work
// threads.create_thread(consumer); // THIS WORKS!?!?!?
}
// s.start();
printf("Server Started...\n");
boost::asio::io_service::work work(io_service);
io_service.run();
printf("exiting...\n");
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
Why is it that consumer is able to receive the queued items, but worker is not?
They are identical implementations with different names.
This isn't making any sense. Any ideas?
Here is the sample output when receiving the text "Hello World":
Output 1 (not working): when calling the worker function or using the processor class.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Output 2 (working): when calling the consumer function, which is identical to the worker function.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Got msg: hello world <----- this is what I've been wanting to see!
Destruct ObbsMsg: 0
waiting for msg
To answer my own question.
It seems the problem has to do with the declaration of g_work_queue.
It was declared in a header file as: static concurrent_queue<boost::shared_ptr<Msg> > g_work_queue;
Declaring it static is not what I want to be doing: it creates a separate queue object for each compiled .o file that includes the header, and obviously separate locks etc.
This explains why it worked when the queue was manipulated by a consumer and producer in the same source file, but not when they were in different files: the threads were actually waiting on different objects.
So I've redeclared the work queue like so:
-- workqueue.h --
extern concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
-- workqueue.cpp --
#include "workqueue.h"
concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
Doing this fixes the problem.
