Side effects of global static variables - boost

I'm writing a UDP server that receives data from the network, wraps it up in an object, and places it into a concurrent queue. The concurrent queue is the implementation provided here: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
A pool of worker threads pull data out of the queue for processing.
The queue is defined globally as:
static concurrent_queue<boost::shared_ptr<Msg> > g_work_queue;
Now, the problem I'm having: if I simply write a function that produces data and inserts it into the queue, and create some consumer threads to pull it out, it works fine.
But the moment I add my UDP based producer the worker threads stop being notified of the arrival of data on the queue.
I've tracked the issue down to the end of the push function in concurrent_queue.
Specifically, the line the_condition_variable.notify_one(); does not return when using my network code.
So the problem is related to the way I've written the networking code.
Here is what it looks like.
enum
{
    MAX_LENGTH = 1500
};

class Msg
{
public:
    Msg()
    {
        static int i = 0;
        i_ = i++;
        printf("Construct ObbsMsg: %d\n", i_);
    }
    ~Msg()
    {
        printf("Destruct ObbsMsg: %d\n", i_);
    }
    const char* toString() { return data_; }
private:
    friend class server;
    udp::endpoint sender_endpoint_;
    char data_[MAX_LENGTH];
    int i_;
};
class server
{
public:
    server(boost::asio::io_service& io_service)
        : io_service_(io_service),
          socket_(io_service, udp::endpoint(udp::v4(), PORT))
    {
        waitForNextMessage();
    }

    void waitForNextMessage()
    {
        printf("Waiting for next msg\n");
        next_msg_.reset(new Msg());
        socket_.async_receive_from(
            boost::asio::buffer(next_msg_->data_, MAX_LENGTH), sender_endpoint_,
            boost::bind(&server::handleReceiveFrom, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

    void handleReceiveFrom(const boost::system::error_code& error, size_t bytes_recvd)
    {
        if (!error && bytes_recvd > 0) {
            printf("got data: %s. Adding to work queue\n", next_msg_->toString());
            g_work_queue.push(next_msg_); // Add received msg to work queue
            waitForNextMessage();
        } else {
            waitForNextMessage();
        }
    }

private:
    boost::asio::io_service& io_service_;
    udp::socket socket_;
    udp::endpoint sender_endpoint_;
    boost::shared_ptr<Msg> next_msg_;
};
int main(int argc, char* argv[])
{
    try {
        boost::asio::io_service io_service;
        server s(io_service);
        io_service.run();
    } catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << std::endl;
    }
    return 0;
}
Now I've found that if handleReceiveFrom is able to return, then notify_one() in concurrent_queue returns. So I think it's because I have a recursive loop.
So what's the correct way to start listening for new data? And is the async UDP server example flawed, given that I based my code on what it was already doing?
EDIT: Ok the issue just got even weirder.
What I haven't mentioned here is that I have a class called processor.
Processor looks like this:
class processor
{
public:
    processor(int thread_pool_size) :
        thread_pool_size_(thread_pool_size) { }

    void start()
    {
        boost::thread_group threads;
        for (std::size_t i = 0; i < thread_pool_size_; ++i){
            threads.create_thread(boost::bind(&processor::worker, this));
        }
    }

    void worker()
    {
        while (true){
            boost::shared_ptr<Msg> msg;
            g_work_queue.wait_and_pop(msg);
            printf("Got msg: %s\n", msg->toString());
        }
    }

private:
    int thread_pool_size_;
};
Now it seems that if I extract the worker function out on its own and start the threads
from main, it works! Can someone explain why a thread function behaves as I would expect outside of a class, but has side effects inside one?
EDIT2: Now it's getting even weirder still
I pulled out two functions (exactly the same).
One is called consumer, the other worker.
i.e.
void worker()
{
    while (true){
        boost::shared_ptr<Msg> msg;
        printf("waiting for msg\n");
        g_work_queue.wait_and_pop(msg);
        printf("Got msg: %s\n", msg->toString());
    }
}

void consumer()
{
    while (true){
        boost::shared_ptr<Msg> msg;
        printf("waiting for msg\n");
        g_work_queue.wait_and_pop(msg);
        printf("Got msg: %s\n", msg->toString());
    }
}
Now, consumer lives at the top of the server.cpp file. I.e. where our server code lives as well.
On the other hand, worker lives in the processor.cpp file.
Now I'm not using processor at all at the moment. The main function now looks like this:
void consumer();
void worker();

int main(int argc, char* argv[])
{
    try {
        boost::asio::io_service io_service;
        server net(io_service);
        //processor s(7);
        boost::thread_group threads;
        for (std::size_t i = 0; i < 7; ++i){
            threads.create_thread(worker); // this doesn't work
            // threads.create_thread(consumer); // THIS WORKS!?!?!?
        }
        // s.start();
        printf("Server Started...\n");
        boost::asio::io_service::work work(io_service);
        io_service.run();
        printf("exiting...\n");
    } catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}
Why is it that consumer is able to receive the queued items, but worker is not?
They are identical implementations with different names.
This isn't making any sense. Any ideas?
Here is the sample output when receiving the text "Hello World":
Output 1 (not working): when calling the worker function or using the processor class.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Output 2 (working): when calling the consumer function, which is identical to the worker function.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Got msg: hello world <----- this is what I've been wanting to see!
Destruct ObbsMsg: 0
waiting for msg

To answer my own question.
It seems the problem is to do with the declaration of g_work_queue.
Declared in a header file as: static concurrent_queue<boost::shared_ptr<Msg> > g_work_queue;
It seems that declaring it static is not what I want to be doing.
The static keyword gives the variable internal linkage, so every translation unit (every compiled .o file) that includes the header gets its own separate queue object, and obviously its own lock and condition variable too.
This explains why it worked when the queue was being manipulated inside the same source file,
with the consumer and producer in the same file.
But when they were in different files it did not, because the threads were actually waiting on different objects.
So I've redeclared the work queue like so.
-- workqueue.h --
extern concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
-- workqueue.cpp --
#include "workqueue.h"
concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
Doing this fixes the problem.
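For reference, here is the same pitfall in miniature (hypothetical file and function names, for illustration only). Each .cpp that includes the header gets its own copy of the static variable, so a writer in one file never affects a reader in another:
-- pitfall.h (hypothetical) --
static int g_counter = 0; // internal linkage: one copy per translation unit
-- a.cpp --
#include "pitfall.h"
void bump() { ++g_counter; } // increments a.cpp's copy only
-- b.cpp --
#include "pitfall.h"
#include <cstdio>
void show() { printf("%d\n", g_counter); } // always prints 0: b.cpp's copy
This is exactly what happened with the queue: the producer pushed onto one queue object while the consumers waited on another.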

Related

What is the most efficient ZeroMQ polling on a single PUB/SUB socket?

The ZeroMQ documentation mentions zmq_poll as a method for multiplexing multiple sockets on a single thread. Is there any benefit to polling in a thread that simply consumes data from one socket? Or should I just use zmq_recv?
For example:
/* POLLING A SINGLE SOCKET */
while (true) {
    zmq::poll(&items[0], 1, -1);
    if (items[0].revents & ZMQ_POLLIN) {
        int size = zmq_recv(receiver, msg, 255, 0);
        if (size != -1) {
            // do something with msg
        }
    }
}
vs.
/* NO POLLING AND BLOCKING RECV */
while (true) {
    int size = zmq_recv(receiver, msg, 255, 0);
    if (size != -1) {
        // do something with msg
    }
}
Is there ever a situation to prefer the version with polling, or should I only use it for multiplexing? Does polling result in more efficient CPU usage? Does the answer depend on the rate of messages being received?
*** Editing this post to include a toy example ***
The reason for asking this question is that I have observed that I can achieve a much higher throughput (more than an order of magnitude) on my subscriber if I do not poll.
#include <thread>
#include <zmq.hpp>
#include <iostream>
#include <string>
#include <cstring>
#include <cstdint>
#include <unistd.h>
#include <chrono>

using msg_t = char[88];
using timepoint_t = std::chrono::steady_clock::time_point;
using milliseconds = std::chrono::milliseconds;
using microseconds = std::chrono::microseconds;

/* Log stats about how many packets were sent/received */
class SocketStats {
public:
    SocketStats(const std::string& name) : m_socketName(name), m_timePrev(now()) {}
    void update() {
        m_numPackets++;
        timepoint_t timeNow = now();
        if (duration(timeNow, m_timePrev) > m_logIntervalMs) {
            uint64_t packetsPerSec = m_numPackets - m_numPacketsPrev;
            std::cout << m_socketName << " : " << "processed " << packetsPerSec << " packets" << std::endl;
            m_numPacketsPrev = m_numPackets;
            m_timePrev = timeNow;
        }
    }
private:
    timepoint_t now() { return std::chrono::steady_clock::now(); }
    static milliseconds duration(timepoint_t timeNow, timepoint_t timePrev) {
        return std::chrono::duration_cast<milliseconds>(timeNow - timePrev);
    }
    timepoint_t m_timePrev;
    uint64_t m_numPackets = 0;
    uint64_t m_numPacketsPrev = 0;
    milliseconds m_logIntervalMs = milliseconds{1000};
    const std::string m_socketName;
};
/* non-polling subscriber uses blocking receive and no poll */
void startNonPollingSubscriber(){
    SocketStats subStats("NonPollingSubscriber");
    zmq::context_t ctx(1);
    zmq::socket_t sub(ctx, ZMQ_SUB);
    sub.connect("tcp://127.0.0.1:5602");
    sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);
    while (true) {
        zmq::message_t msg;
        bool success = sub.recv(&msg);
        if (success) { subStats.update(); }
    }
}

/* polling subscriber receives messages when available */
void startPollingSubscriber(){
    SocketStats subStats("PollingSubscriber");
    zmq::context_t ctx(1);
    zmq::socket_t sub(ctx, ZMQ_SUB);
    sub.connect("tcp://127.0.0.1:5602");
    sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);
    zmq::pollitem_t items [] = {{static_cast<void*>(sub), 0, ZMQ_POLLIN, 0 }};
    while (true) {
        zmq::message_t msg;
        int rc = zmq::poll(&items[0], 1, -1);
        if (rc < 1) { continue; }
        if (items[0].revents & ZMQ_POLLIN) {
            bool success = sub.recv(&msg, ZMQ_DONTWAIT);
            if (success) { subStats.update(); }
        }
    }
}

void startFastPublisher() {
    SocketStats pubStats("FastPublisher");
    zmq::context_t ctx(1);
    zmq::socket_t pub(ctx, ZMQ_PUB);
    pub.bind("tcp://127.0.0.1:5602");
    while (true) {
        msg_t mymessage;
        zmq::message_t msg(sizeof(msg_t));
        memcpy((char *)msg.data(), (void*)(&mymessage), sizeof(msg_t));
        bool success = pub.send(&msg, ZMQ_DONTWAIT);
        if (success) { pubStats.update(); }
    }
}

int main() {
    std::thread t_sub1(startPollingSubscriber);
    sleep(1);
    std::thread t_sub2(startNonPollingSubscriber);
    sleep(1);
    std::thread t_pub(startFastPublisher);
    while(true) {
        sleep(10);
    }
}
Q : "Is there any benefit to polling in a thread that simply consumes data from one socket?"
Oh sure there is.
As a principal promoter of non-blocking designs, I always advocate to design zero-waiting .poll()-s before deciding on .recv()-calls.
Q : "Does polling result in more efficient CPU usage?"
A harder one, yet I love it:
This question is decidable in two distinct manners:
a) read the source-code of both the .poll()-method and the .recv()-method, as ported onto your target platform and guesstimate the costs of calling each v/s the other
b) test either of the use-cases inside your run-time ecosystem and have the hard facts micro-benchmarked in-vivo.
Either way, you see the difference.
What you cannot see ATM are the add-on costs and other impacts that appear once you try (or once you are forced to) extend the use-case so as to accommodate other properties, not included in either the former or the latter.
Here, my principal preference to use .poll() before deciding further, enables other priority-based re-ordering of actual .recv()-calls and other, higher level, decisions, that neither the source-code, nor the test could ever decide.
Do not hesitate to test first and, if the tests seem inconclusive (on your scale of { low | ultra-low }-latency sensitivity), dig deep into the source-code to see why.
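To make the zero-waiting variant concrete, here is a sketch using the same cppzmq calls as the toy example above (sub is assumed to be a connected, subscribed socket):
zmq::pollitem_t items[] = {{ static_cast<void*>(sub), 0, ZMQ_POLLIN, 0 }};
zmq::poll(&items[0], 1, 0); // timeout 0: probe and return immediately
if (items[0].revents & ZMQ_POLLIN) {
    // a message is ready; a recv here will not block,
    // and the caller is still free to reorder or defer it
}
A zero timeout turns poll into a pure probe, which is what leaves room for the priority-based reordering mentioned above.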
Typically you will get the best performance by draining the socket before doing another poll. In other words, once poll returns, you read until read returns "no more data", at which point you call poll again.
For an example, see https://github.com/nyfix/OZ/blob/4627b0364be80de4451bf1a80a26c00d0ba9310f/src/transport.c#L524
There are trade-offs with this approach (mentioned in the code comments), but if you're only reading from a single socket you can ignore these.
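A minimal sketch of that drain-then-poll pattern, using the same cppzmq API as the toy example above (assumes a connected, subscribed socket named sub):
zmq::pollitem_t items[] = {{ static_cast<void*>(sub), 0, ZMQ_POLLIN, 0 }};
while (true) {
    zmq::poll(&items[0], 1, -1); // block until at least one message is ready
    if (items[0].revents & ZMQ_POLLIN) {
        while (true) { // drain the socket before polling again
            zmq::message_t msg;
            if (!sub.recv(&msg, ZMQ_DONTWAIT))
                break; // EAGAIN: nothing left queued, go back to poll
            // process msg here
        }
    }
}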

How to implement a connection pool with Boost asio?

I have thousands of concurrent client threads (a C10K-style load of synchronous requests); creating a new TCP connection each time a request comes in can be time-consuming, and it cannot scale as the number of concurrent threads goes up.
So I thought a shared TCP connection pool might be a good choice for me: all connections within the pool are already established, all requests share the pool, and a connection is pulled out when a message needs to be sent and pushed back in when the response has been received.
But how can I achieve that? By the way, my server side follows a multi-threaded async model.
The piece of code for the connection (client-side):
class SyncTCPClient {
public:
    SyncTCPClient(const std::string &raw_ip_address, unsigned short port_num) :
        m_ep(asio::ip::address::from_string(raw_ip_address), port_num), m_sock(m_ios) {
        m_sock.open(m_ep.protocol());
    }
    void connect() {
        m_sock.connect(m_ep);
    }
    //...
private:
    asio::io_service m_ios;
    asio::ip::tcp::endpoint m_ep;
    asio::ip::tcp::socket m_sock;
};
The piece of code for the connection pool (client-side):
class ConnectionPool {
private:
    std::queue<std::shared_ptr<SyncTCPClient>> pool;
    std::mutex mtx;
    std::condition_variable no_empty;
public:
    ConnectionPool(int size) {
        std::cout << "Constructor for connection pool." << std::endl;
        for (int i = 0; i < size; i++) {
            std::shared_ptr<SyncTCPClient> cliPtr = std::make_shared<SyncTCPClient>(raw_ip_address, port_num);
            cliPtr->connect();
            pool.push(cliPtr);
        }
    }
    //...
};
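The question elides the interesting part, so here is only a sketch of the acquire/release pair such a pool typically needs (the method names are hypothetical; it assumes the queue, mutex, and condition variable members shown above):
std::shared_ptr<SyncTCPClient> acquire() {
    std::unique_lock<std::mutex> lock(mtx);
    no_empty.wait(lock, [this] { return !pool.empty(); }); // block while the pool is exhausted
    std::shared_ptr<SyncTCPClient> cli = pool.front();
    pool.pop();
    return cli;
}

void release(std::shared_ptr<SyncTCPClient> cli) {
    std::lock_guard<std::mutex> lock(mtx);
    pool.push(std::move(cli));
    no_empty.notify_one(); // wake one waiter: a connection is available again
}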
The piece of code for connection acceptance (server-side):
std::shared_ptr<asio::ip::tcp::socket> sock = std::make_shared<asio::ip::tcp::socket>(m_ios);
m_acceptor.async_accept(*sock.get(),
    [this, sock](const boost::system::error_code& error) {
        onAccept(error, sock);
    }
);
Much appreciated :-)

async_write_some callback not called after delay

My callback for async_write_some is not called after a one second sleep. If I am starting an io_service worker thread for every write, why is the callback not being called?
header
boost::system::error_code error_1;
boost::shared_ptr <boost::asio::io_service> io_service_1;
boost::shared_ptr <boost::asio::ip::tcp::socket> socket_1;
connect
void eth_socket::open_eth_socket (void)
{
    // 1. reset io services
    io_service_1.reset();
    io_service_1 = boost::make_shared <boost::asio::io_service> ();

    // 2. create endpoint
    boost::asio::ip::tcp::endpoint remote_endpoint(
        boost::asio::ip::address::from_string("10.0.0.3"),
        socket_1_port
    );

    // 3. reset socket
    socket_1.reset(new boost::asio::ip::tcp::socket(*io_service_1));

    // 4. connect socket
    socket_1->async_connect(remote_endpoint,
        boost::bind(
            &eth_socket::socket_1_connect_callback,
            this, boost::asio::placeholders::error
        )
    );

    // 5. start io_service_1 run thread after giving it work
    boost::thread t(boost::bind(&boost::asio::io_service::run, *&io_service_1));
    return;
}
write
void eth_socket::write_data (std::string data)
{
    // 1. check socket status
    if (!socket_1->is_open())
    {
        WARNING << "socket_1 is not open";
        throw -3;
    }

    // 2. start asynchronous write
    socket_1->async_write_some(
        boost::asio::buffer(data.c_str(), data.size()),
        boost::bind(
            &eth_socket::socket_1_write_data_callback,
            this, boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred
        )
    );

    // 3. start io_service_1 run thread after giving it work
    boost::thread t(boost::bind(&boost::asio::io_service::run, *&io_service_1));
    return;
}
callback
void eth_socket::socket_1_write_data_callback (const boost::system::error_code& error, size_t bytes_transferred)
{
    // 1. check for errors
    if (error)
    {
        ERROR << "error.message() >> " << error.message().c_str();
        return;
    }
    if (socket_1.get() == NULL || !socket_1->is_open())
    {
        WARNING << "serial_port_1 is not open";
        return;
    }

    INFO << "data written to 10.0.0.3:1337 succeeded; bytes_transferred = " << bytes_transferred;
    return;
}
test
open_eth_socket();
write_data("Hello"); // callback called
write_data("Hello"); // callback called
write_data("Hello"); // callback called
sleep(1);
write_data("Hello"); // callback not called after sleep
boost::thread t(boost::bind(&boost::asio::io_service::run, *&io_service_1));
That's weird for a number of reasons.
You should not "run" io_services for each operation. Instead, run them steadily while operations may be posted. Optionally use io_service::work to prevent run from returning.
You should not (have to) create threads for each operation. If anything, it's a recipe for synchronization issues (Why do I need strand per connection when using boost::asio?)
When running io_service again after it returned (without error) you should call reset() first, as per documentation (Why must io_service::reset() be called?)
You destruct a non-detached thread - likely before it had completed. If you had used std::thread this would even have caused immediate abnormal program termination. It's bad practice to not-join non-detached threads (and I'd add it's iffy to use detached threads without explicit synchronization on thread termination). See Why is destructor of boost::thread detaching joinable thread instead of calling terminate() as standard suggests?
I'd add to these top-level concerns
the smell from using names like socket_1 (just call it socket_ and instantiate another object with a descriptive name to contain the other socket_). I'm not sure, but the question does raise suspicion these might even be global variables. (I hope that's not the case)
throw-ing raw integers, really?
You are risking full on data-races by destructing io_service while never checking that worker threads had completed.
More Undefined Behaviour here:
_sock.async_write_some(
ba::buffer(data.c_str(), data.size()),
You pass a reference to the parameter data which goes out of scope. When the async operation completes, it will be a dangling reference
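One common fix, sketched here rather than taken from the question's code: copy the data into a buffer whose lifetime is tied to the completion handler, e.g. by binding a shared_ptr to it. Note that the extra bound argument means socket_1_write_data_callback would need a matching third parameter:
void eth_socket::write_data (std::string data)
{
    // the shared_ptr copy bound into the handler keeps the buffer
    // alive until the asynchronous write completes
    boost::shared_ptr<std::string> buf = boost::make_shared<std::string>(data);
    socket_1->async_write_some(
        boost::asio::buffer(*buf),
        boost::bind(
            &eth_socket::socket_1_write_data_callback,
            this, boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred,
            buf
        )
    );
}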
There's some obvious copy/paste trouble going on here:
if (socket_1.get() == NULL || !socket_1->is_open())
{
    WARNING << "serial_port_1 is not open";
    return;
}
I'd actually say this stems from precisely the same source that led to the variable names being serial_port_1 and socket_1.
Some Cleanup
Simplify. There wasn't self-contained code, so nothing complete here, but at least see the many points of simplification:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>
#include <iostream>

namespace ba = boost::asio;
using ba::ip::tcp;
using boost::system::error_code;

#define ERROR std::cerr
#define WARNING std::cerr
#define INFO std::cerr

struct eth_socket {
    ~eth_socket() {
        _work.reset();
        if (_worker.joinable())
            _worker.join(); // wait
    }

    void open(std::string address);
    void write_data(std::string data);

private:
    void connected(error_code error) {
        if (error)
            ERROR << "Connect failed: " << error << "\n";
        else
            INFO << "Connected to " << _sock.remote_endpoint() << "\n";
    }
    void written(error_code error, size_t bytes_transferred);

private:
    ba::io_service _svc;
    boost::optional<ba::io_service::work> _work{ _svc };
    boost::thread _worker{ [this] { _svc.run(); } };
    std::string _data;
    unsigned short _port = 6767;
    tcp::socket _sock{ _svc };
};

void eth_socket::open(std::string address) {
    tcp::endpoint remote_endpoint(ba::ip::address::from_string(address), _port);
    _sock.async_connect(remote_endpoint, boost::bind(&eth_socket::connected, this, _1));
}

void eth_socket::write_data(std::string data) {
    _data = data;
    _sock.async_write_some(ba::buffer(_data), boost::bind(&eth_socket::written, this, _1, _2));
}

void eth_socket::written(error_code error, size_t bytes_transferred) {
    INFO << "data written to " << _sock.remote_endpoint() << " " << error.message() << ";"
         << "bytes_transferred = " << bytes_transferred << "\n";
}

int main() {
    {
        eth_socket s;
        s.open("127.0.0.1");

        s.write_data("Hello"); // callback called
        s.write_data("Hello"); // callback called
        s.write_data("Hello"); // callback called

        boost::this_thread::sleep_for(boost::chrono::seconds(1));

        s.write_data("Hello"); // callback not called after sleep
    } // orderly worker thread join here
}
My problems are now fixed thanks to sehe's help and prayer.
This line in open_eth_socket:
boost::thread t(boost::bind(&boost::asio::io_service::run, *&io_service_1));
is now this:
boost::shared_ptr <boost::thread> io_service_1_thread; // in header
if (io_service_1_thread.get()) io_service_1_thread->interrupt();
io_service_1_thread.reset(new boost::thread (boost::bind(&eth_socket::run_io_service_1, this)));
I added this function:
void eth_socket::run_io_service_1 (void)
{
    while (true) // work forever
    {
        boost::asio::io_service::work work(*io_service_1);
        io_service_1->run();
        io_service_1->reset(); // not sure if this will cause problems yet
        INFO << "io_service_1 run complete";
        boost::this_thread::sleep (boost::posix_time::milliseconds (100));
    }
    return;
}

Get notification in Asio if `dispatched` or `post` have finished

I want to know when dispatch has finished with some specific work:
service.dispatch(&some_work);
I want to know this because I need to restart some_work if it has finished.
struct work
{
    std::shared_ptr<asio::io_service> io_service;
    bool ready;
    std::mutex m;

    template <class F>
    void do_some_work(F&& f)
    {
        if (io_service && ready) {
            m.lock();
            ready = false;
            m.unlock();
            io_service->dispatch([&f, this]() {
                f();
                m.lock();
                ready = true;
                m.unlock();
            });
        }
    }

    work(std::shared_ptr<asio::io_service> io_service)
        : io_service(io_service)
        , ready(true)
    {
    }
};
int main()
{
    auto service = std::make_shared<asio::io_service>();
    auto w = std::make_shared<asio::io_service::work>(*service);
    std::thread t1([&] { service->run(); });

    work some_work{ service };

    for (;;) {
        some_work.do_some_work([] {
            std::cout << "Start long draw on thread: " << std::this_thread::get_id()
                      << std::endl;
            std::this_thread::sleep_for(std::chrono::seconds(5));
            std::cout << "End long draw on thread: " << std::this_thread::get_id()
                      << std::endl;
        });
    }

    w.reset();
    t1.join();
}
There are some problems with the code; for example, if some_work goes out of scope, then the running task would still write to ready.
I am wondering if something like this already exists in Asio?
For lifetime issues, the common idiom is indeed to use shared pointers, examples:
Ensure no new wait is accepted by boost::deadline_timer unless previous wait is expired
Boost::Asio Async write failed
Other than that, the completion handler is already that event. So you would do:
void my_async_loop() {
    auto This = shared_from_this();
    socket_.async_read(buffer(m_buffer, ...,
        [=,This](error_code ec, size_t transferred) {
            if (!ec) {
                // do something
                my_async_loop();
            }
        }
    );
}
This will re-schedule an (other?) async operation once the previous has completed.
On the subject of threadsafety, see Why do I need strand per connection when using boost::asio?
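If all that's needed is a one-shot "tell me when this posted work has finished", one option that needs nothing beyond the standard library is to signal a std::promise at the end of the task. A minimal sketch, assuming the io_service/work/thread setup from the question's main (requires <future>):
std::promise<void> done;
service->dispatch([&done] {
    // ... the actual long-running work ...
    done.set_value(); // signal completion
});
done.get_future().wait(); // block until the dispatched task has run
// safe to restart the work here
Note that dispatch may run the handler inline if called from a thread that is already running the io_service; post never does.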

boost::asio tcp async_read never returns

I am trying to convert some existing code to use boost's asio tcp sockets instead of our current implementation. I am able to get a very similar example (of a chat client/server) from the boost site working, but when I attempt to put the code into my own program it stops working.
What I am doing:
Start a server process
The server process makes an empty socket and uses it to listen (with a tcp::acceptor) for TCP connections on a port (10010 for example)
Start a client process
Have the client process create a socket and connect to the server's port
When the server sees a client is connecting, it starts listening for data (with async_read) on the socket and creates another empty socket to listen for another TCP connection on the port
When the client sees that the server has connected, it sends 100 bytes of data (with async_write) and waits for the socket to tell it the send is finished...when that happens it prints a message and shuts down
When the server gets notified that its has data that has been read, it prints a message and shuts down
Obviously, I have greatly trimmed this code down from what I'm trying to implement; this is as small as I could make something that reproduces the problem. I'm running on windows and have a visual studio solution file you can get. There are some memory leaks, thread-safety problems, and such, but that's because I'm taking stuff out of existing code, so don't worry about them.
Anyway, here are the files: one header with some common stuff, a server, and a client.
Connection.hpp:
#ifndef CONNECTION_HPP
#define CONNECTION_HPP

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/system/error_code.hpp>

class ConnectionTransfer
{
public:
    ConnectionTransfer(char* buffer, unsigned int size) :
        buffer_(buffer), size_(size) {
    }
    virtual ~ConnectionTransfer(void){}

    char* GetBuffer(){return buffer_;}
    unsigned int GetSize(){return size_;}
    virtual void CallbackForFinished() = 0;
protected:
    char* buffer_;
    unsigned int size_;
};

class ConnectionTransferInProgress
{
public:
    ConnectionTransferInProgress(ConnectionTransfer* ct):
        ct_(ct)
    {}
    ~ConnectionTransferInProgress(void){}

    void operator()(const boost::system::error_code& error){Other(error);}
    void Other(const boost::system::error_code& error){
        if(!error)
            ct_->CallbackForFinished();
    }
private:
    ConnectionTransfer* ct_;
};

class Connection
{
public:
    Connection(boost::asio::io_service& io_service):
        sock_(io_service)
    {}
    ~Connection(void){}

    void AsyncSend(ConnectionTransfer* ct){
        ConnectionTransferInProgress tip(ct);
        //sock_->async_send(boost::asio::buffer(ct->GetBuffer(),
        //    static_cast<size_t>(ct->GetSize())), tip);
        boost::asio::async_write(sock_, boost::asio::buffer(ct->GetBuffer(),
            static_cast<size_t>(ct->GetSize())), boost::bind(
            &ConnectionTransferInProgress::Other, tip, boost::asio::placeholders::error));
    }

    void AsyncReceive(ConnectionTransfer* ct){
        ConnectionTransferInProgress tip(ct);
        //sock_->async_receive(boost::asio::buffer(ct->GetBuffer(),
        //    static_cast<size_t>(ct->GetSize())), tip);
        boost::asio::async_read(sock_, boost::asio::buffer(ct->GetBuffer(),
            static_cast<size_t>(ct->GetSize())), boost::bind(
            &ConnectionTransferInProgress::Other, tip, boost::asio::placeholders::error));
    }

    boost::asio::ip::tcp::socket& GetSocket(){return sock_;}

private:
    boost::asio::ip::tcp::socket sock_;
};

#endif //CONNECTION_HPP
BoostConnectionClient.cpp:
#include "Connection.hpp"
#include
#include
#include
#include
using namespace boost::asio::ip;
bool connected;
bool gotTransfer;
class FakeTransfer : public ConnectionTransfer
{
public:
FakeTransfer(char* buffer, unsigned int size) : ConnectionTransfer(buffer, size)
{
}
void CallbackForFinished()
{
gotTransfer = true;
}
};
void ConnectHandler(const boost::system::error_code& error)
{
if(!error)
connected = true;
}
int main(int argc, char* argv[])
{
connected = false;
gotTransfer = false;
boost::asio::io_service io_service;
Connection* conn = new Connection(io_service);
tcp::endpoint ep(address::from_string("127.0.0.1"), 10011);
conn->GetSocket().async_connect(ep, ConnectHandler);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while(!connected)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout (angle brackets here) "Connected\n";
char data[100];
FakeTransfer* ft = new FakeTransfer(data, 100);
conn->AsyncReceive(ft);
while(!gotTransfer)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout (angle brackets here) "Done\n";
return 0;
}
BoostConnectionServer.cpp:
#include "Connection.hpp"
#include
#include
#include
#include
using namespace boost::asio::ip;
Connection* conn1;
bool conn1Done;
bool gotTransfer;
Connection* conn2;
class FakeAcceptor
{
public:
FakeAcceptor(boost::asio::io_service& io_service, const tcp::endpoint& endpoint)
:
io_service_(io_service),
acceptor_(io_service, endpoint)
{
conn1 = new Connection(io_service_);
acceptor_.async_accept(conn1->GetSocket(),
boost::bind(&FakeAcceptor::HandleAccept, this, conn1,
boost::asio::placeholders::error));
}
void HandleAccept(Connection* conn, const boost::system::error_code& error)
{
if(conn == conn1)
conn1Done = true;
conn2 = new Connection(io_service_);
acceptor_.async_accept(conn2->GetSocket(),
boost::bind(&FakeAcceptor::HandleAccept, this, conn2,
boost::asio::placeholders::error));
}
boost::asio::io_service& io_service_;
tcp::acceptor acceptor_;
};
class FakeTransfer : public ConnectionTransfer
{
public:
FakeTransfer(char* buffer, unsigned int size) : ConnectionTransfer(buffer, size)
{
}
void CallbackForFinished()
{
gotTransfer = true;
}
};
int main(int argc, char* argv[])
{
boost::asio::io_service io_service;
conn1Done = false;
gotTransfer = false;
tcp::endpoint endpoint(tcp::v4(), 10011);
FakeAcceptor fa(io_service, endpoint);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while(!conn1Done)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout (angle brackets here) "Accepted incoming connection\n";
char data[100];
FakeTransfer* ft = new FakeTransfer(data, 100);
conn1->AsyncReceive(ft);
while(!gotTransfer)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout (angle brackets here) "Success!\n";
return 0;
}
I've searched around a bit, but haven't had much luck. As far as I can tell, I'm almost exactly matching the sample, so it must be something small that I'm overlooking.
Thanks!
In your client code, your ConnectHandler() callback function just sets a value and then returns, without posting any more work to the io_service. At that point, that async_connect() operation is the only work associated with the io_service; so when ConnectHandler() returns, there is no more work associated with the io_service. Thus the background thread's call to io_service.run() returns, and the thread exits.
One potential option would be to call conn->AsyncReceive() from within ConnectHandler(), so that the async_read() gets called prior to the ConnectHandler() returning and thus the background thread's call to io_service.run() won't return.
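In sketch form, that first option would look like this (hypothetical wiring: conn and ft are locals in the client's main, so they would have to be made accessible to the handler, e.g. as globals like connected and gotTransfer already are):
void ConnectHandler(const boost::system::error_code& error)
{
    if(!error)
    {
        connected = true;
        // queue the next async operation before returning, so the
        // io_service always has pending work and run() does not exit
        conn->AsyncReceive(ft);
    }
}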
Another option, the more trivial one, would be to instantiate an io_service::work instance prior to creating your thread to call io_service::run (technically, you could do this at any point prior to the io_service.run() call's returning):
...
// some point in the main() method, prior to creating the background thread
boost::asio::io_service::work work(io_service);
...
This is documented in the io_service documentation:
Stopping the io_service from running out of work
Some applications may need to prevent an io_service object's run() call from returning when there is no more work to do. For example, the io_service may be being run in a background thread that is launched prior to the application's asynchronous operations. The run() call may be kept running by creating an object of type io_service::work:
http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/reference/io_service.html
