I have thousands of concurrent client threads issuing synchronous requests (a C10K-style load). Creating a new TCP connection for every request can be time-consuming, and it does not scale as the number of concurrent threads grows.
So I thought a shared TCP connection pool might be a good choice for me: all connections in the pool are already established, all requests share the pool, and a connection is pulled out when a message needs to be sent and pushed back in once the response has been received.
But how can I achieve that? By the way, my server side follows a multi-threaded async model.
The connection code (client side):
class SyncTCPClient {
public:
SyncTCPClient(const std::string &raw_ip_address, unsigned short port_num) :
m_ep(asio::ip::address::from_string(raw_ip_address), port_num), m_sock(m_ios) {
m_sock.open(m_ep.protocol());
}
void connect() {
m_sock.connect(m_ep);
}
//...
private:
asio::io_service m_ios;
asio::ip::tcp::endpoint m_ep;
asio::ip::tcp::socket m_sock;
};
The connection-pool code (client side):
class ConnectionPool {
private:
std::queue<std::shared_ptr<SyncTCPClient>> pool;
std::mutex mtx;
std::condition_variable no_empty;
public:
ConnectionPool(int size, const std::string &raw_ip_address, unsigned short port_num) {
std::cout << "Constructor for connection pool." << std::endl;
for (int i = 0; i < size; i++) {
std::shared_ptr<SyncTCPClient> cliPtr = std::make_shared<SyncTCPClient>(raw_ip_address, port_num);
cliPtr->connect();
pool.push(cliPtr);
}
}
//...
};
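For what it's worth, a minimal sketch of what the elided part (//...) of ConnectionPool could look like for the pull-out/push-in scheme described above; the method names and the blocking-acquire semantics are my assumptions, not part of the original code:

public:
    // Block until a connection is available, then hand it out.
    std::shared_ptr<SyncTCPClient> acquire() {
        std::unique_lock<std::mutex> lock(mtx);
        // The predicate guards against spurious wakeups.
        no_empty.wait(lock, [this] { return !pool.empty(); });
        std::shared_ptr<SyncTCPClient> cli = pool.front();
        pool.pop();
        return cli;
    }

    // Push a connection back and wake one waiting thread.
    void release(std::shared_ptr<SyncTCPClient> cli) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            pool.push(std::move(cli));
        }
        no_empty.notify_one();
    }

A request thread would then call pool.acquire(), send and receive on the connection, and finally pool.release() it; an RAII guard that calls release() in its destructor would make this exception-safe.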
The connection-acceptance code (server side):
std::shared_ptr<asio::ip::tcp::socket> sock = std::make_shared<asio::ip::tcp::socket>(m_ios);
m_acceptor.async_accept(*sock.get(),
[this, sock](const boost::system::error_code& error) {
onAccept(error, sock);
}
);
Much appreciated :-)
I want to initiate the TLS connection only if the server supports a secure connection. To achieve this I have introduced two message types between the server and client.
The client sends a SECURECONNREQ msg to the server after TCP connection establishment; if the server is configured to support TLS, it sends back SECURECONNRESP.
On receiving SECURECONNRESP the client initiates the TLS handshake by calling Boost.Asio's async_handshake API, but this API is not sending the correct packet (ClientHello). In Wireshark I can see it sending protocol version TLS 1.1 even though it is configured for TLS 1.2.
Note: the SSL context object is prepared with TLS 1.2, ciphers and the related certs.
It looks like async_handshake does not work properly if some data exchange has already happened on the TCP link.
Is there an extra step we need to take with Boost.Asio to make this work?
The secure connection establishment works perfectly fine with the implementation below:
Initiating the TLS connection without sending any data on the TCP link.
The client initiates TLS right after the connection is established (no data transferred on the link) by calling async_handshake, and the server calls async_handshake immediately after accepting the connection.
It's hard, or even impossible, to tell without seeing your actual code. But I thought it a nice challenge to write the quintessential PLAIN/TLS server/client in Asio, just to see whether there were any problems I might not have anticipated.
Sadly, it works. You may compare these and decide what you're doing differently.
If anything, this should help you create a MCVE/SSCCE to repro your issue.
Short Intro
The server accepts connections.
The client initiates a ConnectionRequest. Depending on a runtime flag _support_tls, the server responds with the SECURE capability response (or not).
The client and server upgrade to TLS accordingly.
The client and server execute a very simple protocol session that's
templated on the Stream type, because the implementations are independent of
the transport layer security.
Note, for the example I've used the server.pem from the Asio SSL examples.
The demo then executes servers both with and without SECURE capability, and
verifies that the same simple_client implementation works correctly
against both.
Listing
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/endian/arithmetic.hpp>
#include <boost/beast.hpp>
#include <boost/beast/http.hpp>
#include <iostream>
namespace http = boost::beast::http;
namespace net = boost::asio;
namespace ssl = net::ssl;
using boost::system::error_code;
using net::ip::tcp;
struct ConnectionRequest {
enum { MAGIC = 0x64fc };
boost::endian::big_uint16_t magic = MAGIC;
};
struct ConnectionResponse {
enum Capabilities {
NONE,
SECURE,
};
boost::endian::big_uint16_t capabilities = NONE;
};
using TLSStream = ssl::stream<tcp::socket>;
template <typename> static inline char const* stream_type = "(plain)";
template <> inline char const* stream_type<TLSStream> = "(TLS)";
static void do_shutdown(tcp::socket& s) {
error_code ignored;
s.shutdown(tcp::socket::shutdown_both, ignored);
}
static void do_shutdown(TLSStream& s) {
error_code ignored;
s.shutdown(ignored);
}
template <typename Stream>
struct Session : std::enable_shared_from_this<Session<Stream> > {
Session(Stream&& s) : _stream(std::move(s)) {}
void start() {
std::cout << "start" << type() << ": " << _stream.lowest_layer().remote_endpoint() << std::endl;
http::async_read(_stream, _buf, _req, self_bind(&Session::on_request));
}
private:
Stream _stream;
http::request<http::string_body> _req;
http::response<http::empty_body> _res;
net::streambuf _buf;
void on_request(error_code ec, size_t) {
std::cout << "on_request" << type() << ": " << ec.message() << std::endl;
http::async_write(_stream, _res, self_bind(&Session::on_response));
}
void on_response(error_code ec, size_t) {
std::cout << "on_reponse" << type() << ": " << ec.message() << std::endl;
do_shutdown(_stream);
}
auto type() const { return stream_type<Stream>; }
auto self_bind(auto member) {
return [self=this->shared_from_this(), member](auto&&... args) {
return (*self.*member)(std::forward<decltype(args)>(args)...);
};
}
};
struct Server {
using Ctx = ssl::context;
Server(auto executor, uint16_t port, bool support_tls)
: _acceptor(executor, {{}, port}),
_support_tls(support_tls)
{
if (_support_tls) {
_ctx.set_password_callback(
[](size_t, Ctx::password_purpose) { return "test"; });
_ctx.use_certificate_file("server.pem", Ctx::file_format::pem);
_ctx.use_private_key_file("server.pem", Ctx::file_format::pem);
}
_acceptor.listen();
accept_loop();
}
~Server() {
_acceptor.cancel();
}
private:
void accept_loop() {
_acceptor.async_accept(
make_strand(_acceptor.get_executor()),
[=, this](error_code ec, tcp::socket&& sock) {
if (!ec)
accept_loop();
else
return;
// TODO not async for brevity of sample here
ConnectionRequest req[1];
read(sock, net::buffer(req));
if (req->magic != ConnectionRequest::MAGIC) {
std::cerr << "Invalid ConnectionRequest" << std::endl;
return;
}
ConnectionResponse res[1]{{ConnectionResponse::NONE}};
if (not _support_tls) {
write(sock, net::buffer(res));
std::make_shared<Session<tcp::socket>>(std::move(sock))
->start();
} else {
res->capabilities = ConnectionResponse::SECURE;
write(sock, net::buffer(res));
// and then do the handshake
TLSStream stream(std::move(sock), _ctx);
stream.handshake(TLSStream::handshake_type::server);
std::make_shared<Session<TLSStream>>(std::move(stream))
->start();
}
});
}
tcp::acceptor _acceptor;
Ctx _ctx{Ctx::method::sslv23};
bool _support_tls;
};
// simplistic one-time request/response convo
template <typename Stream> void simple_conversation(Stream& stream) {
http::write(stream, http::request<http::string_body>(
http::verb::get, "/demo/api/v1/ping", 11,
"hello world"));
{
http::response<http::string_body> res;
net::streambuf buf;
http::read(stream, buf, res);
std::cout << "Received: " << res << std::endl;
}
}
void simple_client(auto executor, uint16_t port) {
// client also not async for brevity
tcp::socket sock(executor);
sock.connect({{}, port});
ConnectionRequest req[1];
write(sock, net::buffer(req));
ConnectionResponse res[1];
read(sock, net::buffer(res));
std::cout << "ConnectionResponse: " << res->capabilities << std::endl;
if (res->capabilities != ConnectionResponse::SECURE) {
simple_conversation(sock);
} else {
std::cout << "Server supports TLS, upgrading" << std::endl;
ssl::context ctx{ssl::context::method::sslv23};
TLSStream stream(std::move(sock), ctx);
stream.handshake(TLSStream::handshake_type::client);
simple_conversation(stream);
}
}
int main() {
net::thread_pool ctx;
{
Server plain(ctx.get_executor(), 7878, false);
simple_client(ctx.get_executor(), 7878);
}
{
Server tls(ctx.get_executor(), 7879, true);
simple_client(ctx.get_executor(), 7879);
}
ctx.join();
}
Prints (client and server run concurrently, so their output interleaves):
ConnectionResponse: start(plain): 0
127.0.0.1:37924
on_request(plain): Success
on_response(plain): Success
Received: HTTP/1.1 200 OK
ConnectionResponse: 1
Server supports TLS, upgrading
start(TLS): 127.0.0.1:36562
on_request(TLS): Success
on_response(TLS): Success
Received: HTTP/1.1 200 OK
The ZeroMQ documentation mentions zmq_poll as a method for multiplexing multiple sockets on a single thread. Is there any benefit to polling in a thread that simply consumes data from one socket? Or should I just use zmq_recv?
For example:
/* POLLING A SINGLE SOCKET */
while (true) {
zmq::poll(&items[0], 1, -1);
if (items[0].revents & ZMQ_POLLIN) {
int size = zmq_recv(receiver, msg, 255, 0);
if (size != -1) {
// do something with msg
}
}
}
vs.
/* NO POLLING AND BLOCKING RECV */
while (true) {
int size = zmq_recv(receiver, msg, 255, 0);
if (size != -1) {
// do something with msg
}
}
Is there ever a situation to prefer the version with polling, or should I only use it for multiplexing? Does polling result in more efficient CPU usage? Does the answer depend on the rate of messages being received?
*** Editing this post to include a toy example ***
The reason for asking this question is that I have observed that I can achieve a much higher throughput on my subscriber if I do not poll (more than an order of magnitude higher).
#include <thread>
#include <zmq.hpp>
#include <iostream>
#include <unistd.h>
#include <chrono>
using msg_t = char[88];
using timepoint_t = std::chrono::steady_clock::time_point; // matches now() below
using milliseconds = std::chrono::milliseconds;
using microseconds = std::chrono::microseconds;
/* Log stats about how many packets were sent/received */
class SocketStats {
public:
SocketStats(const std::string& name) : m_socketName(name), m_timePrev(now()) {}
void update() {
m_numPackets++;
timepoint_t timeNow = now();
if (duration(timeNow, m_timePrev) > m_logIntervalMs) {
uint64_t packetsPerSec = m_numPackets - m_numPacketsPrev;
std::cout << m_socketName << " : " << "processed " << (packetsPerSec) << " packets" << std::endl;
m_numPacketsPrev = m_numPackets;
m_timePrev = timeNow;
}
}
private:
timepoint_t now() { return std::chrono::steady_clock::now(); }
static milliseconds duration(timepoint_t timeNow, timepoint_t timePrev) {
return std::chrono::duration_cast<milliseconds>(timeNow - timePrev);
}
timepoint_t m_timePrev;
uint64_t m_numPackets = 0;
uint64_t m_numPacketsPrev = 0;
milliseconds m_logIntervalMs = milliseconds{1000};
const std::string m_socketName;
};
/* non-polling subscriber uses blocking receive and no poll */
void startNonPollingSubscriber(){
SocketStats subStats("NonPollingSubscriber");
zmq::context_t ctx(1);
zmq::socket_t sub(ctx, ZMQ_SUB);
sub.connect("tcp://127.0.0.1:5602");
sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);
while (true) {
zmq::message_t msg;
bool success = sub.recv(&msg);
if (success) { subStats.update(); }
}
}
/* polling subscriber receives messages when available */
void startPollingSubscriber(){
SocketStats subStats("PollingSubscriber");
zmq::context_t ctx(1);
zmq::socket_t sub(ctx, ZMQ_SUB);
sub.connect("tcp://127.0.0.1:5602");
sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);
zmq::pollitem_t items [] = {{static_cast<void*>(sub), 0, ZMQ_POLLIN, 0 }};
while (true) {
zmq::message_t msg;
int rc = zmq::poll (&items[0], 1, -1);
if (rc < 1) { continue; }
if (items[0].revents & ZMQ_POLLIN) {
bool success = sub.recv(&msg, ZMQ_DONTWAIT);
if (success) { subStats.update(); }
}
}
}
void startFastPublisher() {
SocketStats pubStats("FastPublisher");
zmq::context_t ctx(1);
zmq::socket_t pub(ctx, ZMQ_PUB);
pub.bind("tcp://127.0.0.1:5602");
while (true) {
msg_t mymessage{}; // zero-initialize so we don't copy indeterminate bytes
zmq::message_t msg(sizeof(msg_t));
memcpy((char *)msg.data(), (void*)(&mymessage), sizeof(msg_t));
bool success = pub.send(msg, ZMQ_DONTWAIT); // pass by reference to use the message_t overload
if (success) { pubStats.update(); }
}
}
int main() {
std::thread t_sub1(startPollingSubscriber);
sleep(1);
std::thread t_sub2(startNonPollingSubscriber);
sleep(1);
std::thread t_pub(startFastPublisher);
while(true) {
sleep(10);
}
}
Q : "Is there any benefit to polling in a thread that simply consumes data from one socket?"
Oh sure there is.
As a principal promoter of non-blocking designs, I always advocate to design zero-waiting .poll()-s before deciding on .recv()-calls.
Q : "Does polling result in more efficient CPU usage?"
A harder one, yet I love it:
This question is decidable in two distinct manners:
a) read the source-code of both the .poll()-method and the .recv()-method, as ported onto your target platform and guesstimate the costs of calling each v/s the other
b) test either of the use-cases inside your run-time ecosystem and have the hard facts micro-benchmarked in-vivo.
Either way, you see the difference.
What you cannot see ATM are the add-on costs and other impacts that appear once you try (or are forced) to extend the use-case to accommodate other properties not included in either the former or the latter.
Here, my principal preference to use .poll() before deciding further, enables other priority-based re-ordering of actual .recv()-calls and other, higher level, decisions, that neither the source-code, nor the test could ever decide.
Do not hesitate to test first, and if the tests seem inconclusive (on your scale of { low | ultra-low }-latency sensitivity), dig deep into the source code to see why.
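To illustrate that zero-waiting style with the same cppzmq API as the toy example above (do_other_work() is a hypothetical placeholder, not a real API):

zmq::pollitem_t items[] = {{static_cast<void*>(sub), 0, ZMQ_POLLIN, 0}};
while (true) {
    zmq::poll(&items[0], 1, 0);  // timeout 0: returns immediately, never blocks
    if (items[0].revents & ZMQ_POLLIN) {
        zmq::message_t msg;
        if (sub.recv(&msg, ZMQ_DONTWAIT)) {
            // handle msg
        }
    } else {
        do_other_work();  // free to do higher-priority work instead of blocking
    }
}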
Typically you will get the best performance by draining the socket before doing another poll. In other words, once poll returns, you read until read reports "no more data", at which point you call poll again.
For an example, see https://github.com/nyfix/OZ/blob/4627b0364be80de4451bf1a80a26c00d0ba9310f/src/transport.c#L524
There are trade-offs with this approach (mentioned in the code comments), but if you're only reading from a single socket you can ignore these.
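The drain pattern looks roughly like this, reusing the pollitem_t setup from the toy example above (a sketch; error handling omitted):

while (true) {
    zmq::poll(&items[0], 1, -1);  // block until the socket is readable
    if (items[0].revents & ZMQ_POLLIN) {
        zmq::message_t msg;
        // Drain: keep reading until recv reports "no more data" (EAGAIN),
        // and only then go back to poll().
        while (sub.recv(&msg, ZMQ_DONTWAIT)) {
            // do something with msg
        }
    }
}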
Our company is rewriting most of its legacy C code in C++11. (Which also means I am a C programmer learning C++.) I need advice on message handlers.
We have distributed system - Server process sends a packed message over TCP to client process.
In C code this was being done:
- parse message based on type and subtype, which are always the first 2 fields
- call a handler as handler[type](Message *msg)
- handler creates a temporary struct, say tmp_struct, to hold the parsed values and ..
- calls subhandler[type][subtype](tmp_struct)
There is only one handler per type/subtype.
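For reference, the legacy C dispatch described above amounts to something like the following sketch (all names, bounds and field types here are illustrative, not from the original code):

typedef void (*Handler)(Message *msg);
typedef void (*SubHandler)(TmpStruct *parsed);

Handler    handler[MAX_TYPE];                  /* one handler per type */
SubHandler subhandler[MAX_TYPE][MAX_SUBTYPE];  /* one per type/subtype */

void dispatch(Message *msg)
{
    /* type and subtype are always the first two fields of the message */
    handler[msg->type](msg);  /* parses into a TmpStruct, then calls
                                 subhandler[msg->type][msg->subtype] */
}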
Moving to C++11 and a multi-threaded environment, the basic idea I had was:
1) Register a processor object for each type/subtype combination. This is
actually a vector of vectors -
vector<vector<MsgProcessor*>>
class MsgProcessor {
// Factory function
virtual Message *create();
virtual void handler(Message *msg);
};
This will be inherited by different message processors
class AMsgProcessor : public MsgProcessor {
Message *create() override;
void handler(Message *msg) override;
};
2) Get the processor using a lookup into the vector of vectors.
Get the message using the overloaded create() factory function,
so that we can keep the actual message and the parsed values inside the message object.
3) Now a bit of a hack: this message should be sent to other threads for the heavy processing. To avoid having to look up the vector again, I added a pointer to the processor inside the message.
class Message {
const MsgProcessor *proc; // set to processor,
// which we got from the first lookup
// to get factory function.
};
So other threads will just do:
msg->proc->handler(msg);
This looks bad, but the hope is that it helps separate the message handler from the factory, for the case where multiple type/subtype combinations create the same Message but handle it differently.
I was searching about this and came across :
http://www.drdobbs.com/cpp/message-handling-without-dependencies/184429055?pgno=1
It provides a way to completely separate the message from the handler. But I was wondering whether my simple scheme above would be considered an acceptable design, or whether this is the wrong way to achieve what I want.
Efficiency, as in speed, is the most important requirement for this application. We are already doing a couple of memory jumps: two vector lookups plus a virtual function call to create the message. There are two dereferences to get to the handler, which I guess is not good from a caching point of view.
Though your requirement is unclear, I think I have a design that might be what you are looking for.
Check out http://coliru.stacked-crooked.com/a/f7f9d5e7d57e6261 for the fully fledged example.
It has following components:
An interface class for Message processors IMessageProcessor.
A base class representing a Message. Message
A registration class, Registrator, which is essentially a singleton storing the message processors corresponding to each (Type, Subtype) pair. It stores the mapping in an unordered_map; you can also tweak it a bit for better performance. All the exposed APIs of Registrator are protected by a std::mutex.
Concrete implementations of IMessageProcessor: AMsgProcessor and BMsgProcessor in this case.
simulate function to show how it all fits together.
Pasting the code here as well:
/*
* http://stackoverflow.com/questions/40230555/efficient-message-factory-and-handler-in-c
*/
#include <iostream>
#include <vector>
#include <tuple>
#include <mutex>
#include <memory>
#include <cassert>
#include <unordered_map>
class Message;
class IMessageProcessor
{
public:
virtual Message* create() = 0;
virtual void handle_message(Message*) = 0;
virtual ~IMessageProcessor() {};
};
/*
* Base message class
*/
class Message
{
public:
virtual void populate() = 0;
virtual ~Message() {};
};
using Type = int;
using SubType = int;
using TypeCombo = std::pair<Type, SubType>;
using IMsgProcUptr = std::unique_ptr<IMessageProcessor>;
/*
* Registrator class maintains all the registrations in an
* unordered_map.
* This class owns the MessageProcessor instance inside the
* unordered_map.
*/
class Registrator
{
public:
static Registrator* instance();
// Disable other types of construction
Registrator(const Registrator&) = delete;
void operator=(const Registrator&) = delete;
public:
// TypeCombo assumed to be cheap to copy
template <typename ProcT, typename... Args>
std::pair<bool, IMsgProcUptr> register_proc(TypeCombo typ, Args&&... args)
{
auto proc = std::make_unique<ProcT>(std::forward<Args>(args)...);
{
std::lock_guard<std::mutex> _(lock_);
if (registrations_.count(typ) == 0) {
registrations_.emplace(typ, std::move(proc));
return std::make_pair(true, nullptr);
}
}
// Return the heap allocated instance back
// to the caller if the insert failed.
// The caller now owns the Processor.
return std::make_pair(false, std::move(proc));
}
// Get the processor corresponding to TypeCombo
// IMessageProcessor passed is non-owning pointer
// i.e the caller SHOULD not delete it or own it
std::pair<bool, IMessageProcessor*> processor(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
auto fitr = registrations_.find(typ);
if (fitr == registrations_.end()) {
return std::make_pair(false, nullptr);
}
return std::make_pair(true, fitr->second.get());
}
// TypeCombo assumed to be cheap to copy
bool is_type_used(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.find(typ) != registrations_.end();
}
bool deregister_proc(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.erase(typ) == 1;
}
private:
Registrator() = default;
private:
std::mutex lock_;
/*
* Should be replaced with a concurrent map if at all this
* data structure is the main contention point (which I find
* very unlikely).
*/
struct HashTypeCombo
{
public:
std::size_t operator()(const TypeCombo& typ) const noexcept
{
return std::hash<decltype(typ.first)>()(typ.first) ^
std::hash<decltype(typ.second)>()(typ.second);
}
};
std::unordered_map<TypeCombo, IMsgProcUptr, HashTypeCombo> registrations_;
};
Registrator* Registrator::instance()
{
static Registrator inst;
return &inst;
/*
* OR some other DCLP based instance creation
* if lifetime or creation of static is an issue
*/
}
// Define some message processors
class AMsgProcessor final : public IMessageProcessor
{
public:
class AMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on AMsg\n";
}
AMsg() = default;
~AMsg() = default;
};
Message* create() override
{
std::unique_ptr<AMsg> ptr(new AMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<AMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
// Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~AMsgProcessor();
};
AMsgProcessor::~AMsgProcessor()
{
}
class BMsgProcessor final : public IMessageProcessor
{
public:
class BMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on BMsg\n";
}
BMsg() = default;
~BMsg() = default;
};
Message* create() override
{
std::unique_ptr<BMsg> ptr(new BMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<BMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
//Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~BMsgProcessor();
};
BMsgProcessor::~BMsgProcessor()
{
}
TypeCombo read_from_network()
{
return {1, 2};
}
struct ParsedData {
};
Message* populate_message(Message* msg, ParsedData& pdata)
{
// Do something with the message
// Calling a dummy populate method now
msg->populate();
(void)pdata;
return msg;
}
void simulate()
{
TypeCombo typ = read_from_network();
bool ok;
IMessageProcessor* proc = nullptr;
std::tie(ok, proc) = Registrator::instance()->processor(typ);
if (!ok) {
std::cerr << "FATAL!!!" << std::endl;
return;
}
ParsedData parsed_data;
//..... populate parsed_data here ....
proc->handle_message(populate_message(proc->create(), parsed_data));
return;
}
int main() {
/*
* TODO: Not using or checking the return values after calling register_proc;
* it's a must in production code!
*/
// Register AMsgProcessor
Registrator::instance()->register_proc<AMsgProcessor>(std::make_pair(1, 1));
Registrator::instance()->register_proc<BMsgProcessor>(std::make_pair(1, 2));
simulate();
return 0;
}
UPDATE 1
The major source of confusion here seems to be that the architecture of the event system is unknown.
Any self-respecting event system architecture would look something like this:
A pool of threads polling on the socket descriptors.
A pool of threads for handling timer related events.
Comparatively small number (depends on application) of threads to do long blocking jobs.
So, in your case:
You will get a network event on the thread doing epoll_wait or select or poll.
Read the packet completely and get the processor using the Registrator::processor() call.
NOTE: the processor() call can be made without any locking if one can guarantee that the underlying unordered_map does not get modified, i.e. no new inserts are made once we start receiving events.
Using the obtained processor we can get the Message and populate it.
Now, this is the part that I am not so sure about how you want it to be. At this point we have the processor, on which you can call handle_message either from the current thread, i.e. the thread which is doing epoll_wait, or dispatch it to another thread by posting the job (Processor and Message) to that thread's receiving queue, as sketched below.
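A sketch of that hand-off; the ThreadSafeQueue type and its push/pop API are assumptions for illustration, not part of the code above:

// A job pairs the processor with the message it should handle.
struct Job {
    IMessageProcessor* proc;  // non-owning, obtained from Registrator
    Message*           msg;   // ownership passes to the worker
};

ThreadSafeQueue<Job> work_queue;  // hypothetical thread-safe blocking queue

// On the epoll/select thread, after create() and populate:
void dispatch_to_worker(IMessageProcessor* proc, Message* msg) {
    work_queue.push(Job{proc, msg});
}

// In each worker thread:
void worker_loop() {
    for (;;) {
        Job job = work_queue.pop();         // blocks until a job arrives
        job.proc->handle_message(job.msg);  // handle_message deletes msg, as above
    }
}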
I'm writing a UDP server that receives data over UDP, wraps it up in an object, and places it into a concurrent queue. The concurrent queue is the implementation provided here: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
A pool of worker threads pull data out of the queue for processing.
The queue is defined globally as:
static concurrent_queue<boost::shared_ptr<Msg> > g_work_queue;
Now the problem I'm having is that if I simply write a function to produce data and insert it into the queue, and create some consumer threads to pull items out, it works fine.
But the moment I add my UDP-based producer, the worker threads stop being notified of the arrival of data on the queue.
I've tracked the issue down to the end of the push function in concurrent_queue.
Specifically the line: the_condition_variable.notify_one();
does not return when using my network code.
So the problem is related to the way I've written the networking code.
Here is what it looks like.
enum
{
MAX_LENGTH = 1500
};
class Msg
{
public:
Msg()
{
static int i = 0;
i_ = i++;
printf("Construct ObbsMsg: %d\n", i_);
}
~Msg()
{
printf("Destruct ObbsMsg: %d\n", i_);
}
const char* toString() { return data_; }
private:
friend class server;
udp::endpoint sender_endpoint_;
char data_[MAX_LENGTH];
int i_;
};
class server
{
public:
server(boost::asio::io_service& io_service)
: io_service_(io_service),
socket_(io_service, udp::endpoint(udp::v4(), PORT))
{
waitForNextMessage();
}
void waitForNextMessage()
{
printf("Waiting for next msg\n");
next_msg_.reset(new Msg());
socket_.async_receive_from(
boost::asio::buffer(next_msg_->data_, MAX_LENGTH), sender_endpoint_,
boost::bind(&server::handleReceiveFrom, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handleReceiveFrom(const boost::system::error_code& error, size_t bytes_recvd)
{
if (!error && bytes_recvd > 0) {
printf("got data: %s. Adding to work queue\n", next_msg_->toString());
g_work_queue.push(next_msg_); // Add received msg to work queue
waitForNextMessage();
} else {
waitForNextMessage();
}
}
private:
boost::asio::io_service& io_service_;
udp::socket socket_;
udp::endpoint sender_endpoint_;
boost::shared_ptr<Msg> next_msg_;
};
int main(int argc, char* argv[])
{
try {
boost::asio::io_service io_service;
server s(io_service);
io_service.run();
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << std::endl;
}
return 0;
}
Now I've found that if handleReceiveFrom is able to return, then notify_one() in concurrent_queue returns. So I think it's because I have a recursive loop.
So what's the correct way to start listening for new data? And is the async UDP server example flawed, given that I based my code off what it was already doing?
EDIT: Ok the issue just got even weirder.
What I haven't mentioned here is that I have a class called processor.
Processor looks like this:
class processor
{
public:
processor(int thread_pool_size) :
thread_pool_size_(thread_pool_size) { }
void start()
{
boost::thread_group threads;
for (std::size_t i = 0; i < thread_pool_size_; ++i){
threads.create_thread(boost::bind(&processor::worker, this));
}
}
void worker()
{
while (true){
boost::shared_ptr<Msg> msg;
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
private:
int thread_pool_size_;
};
Now it seems that if I extract the worker function out on its own and start the threads
from main, it works! Can someone explain why a thread functions as I would expect outside of a class, but inside one it has side effects?
EDIT 2: Now it's getting even weirder still.
I pulled out two functions (exactly the same).
One is called consumer, the other worker.
i.e.
void worker()
{
while (true){
boost::shared_ptr<Msg> msg;
printf("waiting for msg\n");
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
void consumer()
{
while (true){
boost::shared_ptr<Msg> msg;
printf("waiting for msg\n");
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
Now, consumer lives at the top of the server.cpp file, i.e. where our server code lives as well.
On the other hand, worker lives in the processor.cpp file.
I'm not using processor at all at the moment. The main function now looks like this:
void consumer();
void worker();
int main(int argc, char* argv[])
{
try {
boost::asio::io_service io_service;
server net(io_service);
//processor s(7);
boost::thread_group threads;
for (std::size_t i = 0; i < 7; ++i){
threads.create_thread(worker); // this doesn't work
// threads.create_thread(consumer); // THIS WORKS!?!?!?
}
// s.start();
printf("Server Started...\n");
boost::asio::io_service::work work(io_service);
io_service.run();
printf("exiting...\n");
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
Why is it that consumer is able to receive the queued items, but worker is not?
They are identical implementations with different names.
This isn't making any sense. Any ideas?
Here is the sample output when receiving the text "hello world":
Output 1: not working, when calling the worker function or using the processor class.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Output 2: works when calling the consumer function which is identical to the worker function.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Got msg: hello world <----- this is what I've been wanting to see!
Destruct ObbsMsg: 0
waiting for msg
To answer my own question:
It seems the problem has to do with the declaration of g_work_queue.
Declared in a header file as: static concurrent_queue<boost::shared_ptr<Msg> > g_work_queue;
It seems that declaring it static is not what I want to be doing.
Apparently that creates a separate queue object for each compiled .o file (each translation unit that includes the header), and obviously separate locks etc.
This explains why it worked when the queue was being manipulated by a consumer and producer in the same source file,
but when they were in different files it did not, because the threads were actually waiting on different objects.
So I've redeclared the work queue like so:
-- workqueue.h --
extern concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
-- workqueue.cpp --
#include "workqueue.h"
concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
Doing this fixes the problem.
I am trying to convert some existing code to use boost's asio tcp sockets instead of our current implementation. I am able to get a very similar example (of a chat client/server) from the boost site working, but when I attempt to put the code into my own program it stops working.
What I am doing:
Start a server process
The server process makes an empty socket and uses it to listen (with a tcp::acceptor) for TCP connections on a port (10010 for example)
Start a client process
Have the client process create a socket and connect to the server's port
When the server sees a client is connecting, it starts listening for data (with async_read) on the socket and creates another empty socket to listen for another TCP connection on the port
When the client sees that the server has connected, it sends 100 bytes of data (with async_write) and waits for the socket to tell it the send is finished... when that happens it prints a message and shuts down
When the server gets notified that it has data that has been read, it prints a message and shuts down
Obviously, I have greatly trimmed this code down from what I'm trying to implement; this is as small as I could make something that reproduces the problem. I'm running on Windows and have a Visual Studio solution file you can get. There are some memory leaks, thread-safety problems, and such, but that's because I'm taking stuff out of existing code, so don't worry about them.
Anyway, here's the files one header with some common stuff, a server, and a client.
Connection.hpp:
#ifndef CONNECTION_HPP
#define CONNECTION_HPP
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/system/error_code.hpp>
class ConnectionTransfer
{
public:
ConnectionTransfer(char* buffer, unsigned int size) :
buffer_(buffer), size_(size) {
}
virtual ~ConnectionTransfer(void){}
char* GetBuffer(){return buffer_;}
unsigned int GetSize(){return size_;}
virtual void CallbackForFinished() = 0;
protected:
char* buffer_;
unsigned int size_;
};
class ConnectionTransferInProgress
{
public:
ConnectionTransferInProgress(ConnectionTransfer* ct):
ct_(ct)
{}
~ConnectionTransferInProgress(void){}
void operator()(const boost::system::error_code& error){Other(error);}
void Other(const boost::system::error_code& error){
if(!error)
ct_->CallbackForFinished();
}
private:
ConnectionTransfer* ct_;
};
class Connection
{
public:
Connection(boost::asio::io_service& io_service):
sock_(io_service)
{}
~Connection(void){}
void AsyncSend(ConnectionTransfer* ct){
ConnectionTransferInProgress tip(ct);
//sock_.async_send(boost::asio::buffer(ct->GetBuffer(),
// static_cast<std::size_t>(ct->GetSize())), tip);
boost::asio::async_write(sock_, boost::asio::buffer(ct->GetBuffer(),
static_cast<std::size_t>(ct->GetSize())), boost::bind(
&ConnectionTransferInProgress::Other, tip, boost::asio::placeholders::error));
}
void AsyncReceive(ConnectionTransfer* ct){
ConnectionTransferInProgress tip(ct);
//sock_.async_receive(boost::asio::buffer(ct->GetBuffer(),
// static_cast<std::size_t>(ct->GetSize())), tip);
boost::asio::async_read(sock_, boost::asio::buffer(ct->GetBuffer(),
static_cast<std::size_t>(ct->GetSize())), boost::bind(
&ConnectionTransferInProgress::Other, tip, boost::asio::placeholders::error));
}
boost::asio::ip::tcp::socket& GetSocket(){return sock_;}
private:
boost::asio::ip::tcp::socket sock_;
};
#endif //CONNECTION_HPP
BoostConnectionClient.cpp:
#include "Connection.hpp"
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>
using namespace boost::asio::ip;
bool connected;
bool gotTransfer;
class FakeTransfer : public ConnectionTransfer
{
public:
FakeTransfer(char* buffer, unsigned int size) : ConnectionTransfer(buffer, size)
{
}
void CallbackForFinished()
{
gotTransfer = true;
}
};
void ConnectHandler(const boost::system::error_code& error)
{
if(!error)
connected = true;
}
int main(int argc, char* argv[])
{
connected = false;
gotTransfer = false;
boost::asio::io_service io_service;
Connection* conn = new Connection(io_service);
tcp::endpoint ep(address::from_string("127.0.0.1"), 10011);
conn->GetSocket().async_connect(ep, ConnectHandler);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while(!connected)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout << "Connected\n";
char data[100];
FakeTransfer* ft = new FakeTransfer(data, 100);
conn->AsyncReceive(ft);
while(!gotTransfer)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout << "Done\n";
return 0;
}
BoostConnectionServer.cpp:
#include "Connection.hpp"
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>
using namespace boost::asio::ip;
Connection* conn1;
bool conn1Done;
bool gotTransfer;
Connection* conn2;
class FakeAcceptor
{
public:
FakeAcceptor(boost::asio::io_service& io_service, const tcp::endpoint& endpoint)
:
io_service_(io_service),
acceptor_(io_service, endpoint)
{
conn1 = new Connection(io_service_);
acceptor_.async_accept(conn1->GetSocket(),
boost::bind(&FakeAcceptor::HandleAccept, this, conn1,
boost::asio::placeholders::error));
}
void HandleAccept(Connection* conn, const boost::system::error_code& error)
{
if(conn == conn1)
conn1Done = true;
conn2 = new Connection(io_service_);
acceptor_.async_accept(conn2->GetSocket(),
boost::bind(&FakeAcceptor::HandleAccept, this, conn2,
boost::asio::placeholders::error));
}
boost::asio::io_service& io_service_;
tcp::acceptor acceptor_;
};
class FakeTransfer : public ConnectionTransfer
{
public:
FakeTransfer(char* buffer, unsigned int size) : ConnectionTransfer(buffer, size)
{
}
void CallbackForFinished()
{
gotTransfer = true;
}
};
int main(int argc, char* argv[])
{
boost::asio::io_service io_service;
conn1Done = false;
gotTransfer = false;
tcp::endpoint endpoint(tcp::v4(), 10011);
FakeAcceptor fa(io_service, endpoint);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while(!conn1Done)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout << "Accepted incoming connection\n";
char data[100];
FakeTransfer* ft = new FakeTransfer(data, 100);
conn1->AsyncReceive(ft);
while(!gotTransfer)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout << "Success!\n";
return 0;
}
I've searched around a bit, but haven't had much luck. As far as I can tell, I'm almost exactly matching the sample, so it must be something small that I'm overlooking.
Thanks!
In your client code, your ConnectHandler() callback function just sets a value and then returns, without posting any more work to the io_service. At that point, that async_connect() operation is the only work associated with the io_service; so when ConnectHandler() returns, there is no more work associated with the io_service. Thus the background thread's call to io_service.run() returns, and the thread exits.
One potential option would be to call conn->AsyncReceive() from within ConnectHandler(), so that the async_read() gets called prior to the ConnectHandler() returning and thus the background thread's call to io_service.run() won't return.
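A sketch of that first option; it assumes conn and ft are hoisted out of main() so the handler can reach them (they are locals in the code above):

void ConnectHandler(const boost::system::error_code& error)
{
    if (!error) {
        connected = true;
        // Queue more work before returning, so the io_service still has
        // pending work when this handler completes and run() keeps running.
        conn->AsyncReceive(ft);
    }
}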
Another option, the more trivial one, would be to instantiate an io_service::work instance prior to creating your thread to call io_service::run (technically, you could do this at any point prior to the io_service.run() call's returning):
...
// some point in the main() method, prior to creating the background thread
boost::asio::io_service::work work(io_service);
...
This is documented in the io_service documentation:
Stopping the io_service from running out of work
Some applications may need to prevent an io_service object's run() call from returning when there is no more work to do. For example, the io_service may be being run in a background thread that is launched prior to the application's asynchronous operations. The run() call may be kept running by creating an object of type io_service::work:
http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/reference/io_service.html
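Putting the second option together, a minimal sketch assuming the Boost 1.43-era io_service API used in the question:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

int main()
{
    boost::asio::io_service io_service;

    // Keeps run() from returning even while no async operations are pending.
    boost::asio::io_service::work work(io_service);

    boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));

    // ... start async_connect / async_receive here; run() now stays alive
    // between operations because of the work object ...

    io_service.stop();  // when done: force run() to return
    t.join();
    return 0;
}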