boost::asio tcp async_read never returns

I am trying to convert some existing code to use boost's asio tcp sockets instead of our current implementation. I am able to get a very similar example (of a chat client/server) from the boost site working, but when I attempt to put the code into my own program it stops working.
What I am doing:
Start a server process
The server process makes an empty socket and uses it to listen (with a tcp::acceptor) for TCP connections on a port (10011 in the code below)
Start a client process
Have the client process create a socket and connect to the server's port
When the server sees a client connecting, it starts listening for data (with async_read) on the socket and creates another empty socket to listen for another TCP connection on the port
When the client sees that the server has connected, it sends 100 bytes of data (with async_write) and waits for the socket to tell it the send is finished; when that happens it prints a message and shuts down
When the server gets notified that it has data that has been read, it prints a message and shuts down
Obviously, I have greatly trimmed this code down from what I'm trying to implement; this is as small as I could make something that reproduces the problem. I'm running on Windows and have a Visual Studio solution file you can get. There are some memory leaks, thread safety problems, and such, but that's because I'm taking stuff out of existing code, so don't worry about them.
Anyway, here are the files: one header with some common stuff, a server, and a client.
Connection.hpp:
#ifndef CONNECTION_HPP
#define CONNECTION_HPP
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/system/error_code.hpp>
class ConnectionTransfer
{
public:
ConnectionTransfer(char* buffer, unsigned int size) :
buffer_(buffer), size_(size) {
}
virtual ~ConnectionTransfer(void){}
char* GetBuffer(){return buffer_;}
unsigned int GetSize(){return size_;}
virtual void CallbackForFinished() = 0;
protected:
char* buffer_;
unsigned int size_;
};
class ConnectionTransferInProgress
{
public:
ConnectionTransferInProgress(ConnectionTransfer* ct):
ct_(ct)
{}
~ConnectionTransferInProgress(void){}
void operator()(const boost::system::error_code& error){Other(error);}
void Other(const boost::system::error_code& error){
if(!error)
ct_->CallbackForFinished();
}
private:
ConnectionTransfer* ct_;
};
class Connection
{
public:
Connection(boost::asio::io_service& io_service):
sock_(io_service)
{}
~Connection(void){}
void AsyncSend(ConnectionTransfer* ct){
ConnectionTransferInProgress tip(ct);
//sock_->async_send(boost::asio::buffer(ct->GetBuffer(),
//  static_cast<std::size_t>(ct->GetSize())), tip);
boost::asio::async_write(sock_, boost::asio::buffer(ct->GetBuffer(),
static_cast<std::size_t>(ct->GetSize())), boost::bind(
&ConnectionTransferInProgress::Other, tip, boost::asio::placeholders::error));
}
void AsyncReceive(ConnectionTransfer* ct){
ConnectionTransferInProgress tip(ct);
//sock_->async_receive(boost::asio::buffer(ct->GetBuffer(),
//  static_cast<std::size_t>(ct->GetSize())), tip);
boost::asio::async_read(sock_, boost::asio::buffer(ct->GetBuffer(),
static_cast<std::size_t>(ct->GetSize())), boost::bind(
&ConnectionTransferInProgress::Other, tip, boost::asio::placeholders::error));
}
boost::asio::ip::tcp::socket& GetSocket(){return sock_;}
private:
boost::asio::ip::tcp::socket sock_;
};
#endif //CONNECTION_HPP
BoostConnectionClient.cpp:
#include "Connection.hpp"
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>
using namespace boost::asio::ip;
bool connected;
bool gotTransfer;
class FakeTransfer : public ConnectionTransfer
{
public:
FakeTransfer(char* buffer, unsigned int size) : ConnectionTransfer(buffer, size)
{
}
void CallbackForFinished()
{
gotTransfer = true;
}
};
void ConnectHandler(const boost::system::error_code& error)
{
if(!error)
connected = true;
}
int main(int argc, char* argv[])
{
connected = false;
gotTransfer = false;
boost::asio::io_service io_service;
Connection* conn = new Connection(io_service);
tcp::endpoint ep(address::from_string("127.0.0.1"), 10011);
conn->GetSocket().async_connect(ep, ConnectHandler);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while(!connected)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout (angle brackets here) "Connected\n";
char data[100];
FakeTransfer* ft = new FakeTransfer(data, 100);
conn->AsyncReceive(ft);
while(!gotTransfer)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout (angle brackets here) "Done\n";
return 0;
}
BoostConnectionServer.cpp:
#include "Connection.hpp"
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>
using namespace boost::asio::ip;
Connection* conn1;
bool conn1Done;
bool gotTransfer;
Connection* conn2;
class FakeAcceptor
{
public:
FakeAcceptor(boost::asio::io_service& io_service, const tcp::endpoint& endpoint)
:
io_service_(io_service),
acceptor_(io_service, endpoint)
{
conn1 = new Connection(io_service_);
acceptor_.async_accept(conn1->GetSocket(),
boost::bind(&FakeAcceptor::HandleAccept, this, conn1,
boost::asio::placeholders::error));
}
void HandleAccept(Connection* conn, const boost::system::error_code& error)
{
if(conn == conn1)
conn1Done = true;
conn2 = new Connection(io_service_);
acceptor_.async_accept(conn2->GetSocket(),
boost::bind(&FakeAcceptor::HandleAccept, this, conn2,
boost::asio::placeholders::error));
}
boost::asio::io_service& io_service_;
tcp::acceptor acceptor_;
};
class FakeTransfer : public ConnectionTransfer
{
public:
FakeTransfer(char* buffer, unsigned int size) : ConnectionTransfer(buffer, size)
{
}
void CallbackForFinished()
{
gotTransfer = true;
}
};
int main(int argc, char* argv[])
{
boost::asio::io_service io_service;
conn1Done = false;
gotTransfer = false;
tcp::endpoint endpoint(tcp::v4(), 10011);
FakeAcceptor fa(io_service, endpoint);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while(!conn1Done)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout (angle brackets here) "Accepted incoming connection\n";
char data[100];
FakeTransfer* ft = new FakeTransfer(data, 100);
conn1->AsyncReceive(ft);
while(!gotTransfer)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout (angle brackets here) "Success!\n";
return 0;
}
I've searched around a bit, but haven't had much luck. As far as I can tell, I'm almost exactly matching the sample, so it must be something small that I'm overlooking.
Thanks!

In your client code, your ConnectHandler() callback function just sets a value and then returns, without posting any more work to the io_service. At that point, that async_connect() operation is the only work associated with the io_service; so when ConnectHandler() returns, there is no more work associated with the io_service. Thus the background thread's call to io_service.run() returns, and the thread exits.
One potential option would be to call conn->AsyncReceive() from within ConnectHandler(), so that the async_read() gets called prior to the ConnectHandler() returning and thus the background thread's call to io_service.run() won't return.
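For example (a sketch only, not from the original post: conn and the FakeTransfer are locals in main() above, so they would have to be hoisted to globals or bound into the handler for this to compile):
void ConnectHandler(const boost::system::error_code& error)
{
    if(!error){
        connected = true;
        // Queue the read now, so the io_service still has work
        // when this handler returns:
        conn->AsyncReceive(ft);
    }
}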
Another option, the more trivial one, would be to instantiate an io_service::work instance prior to creating your thread to call io_service::run (technically, you could do this at any point prior to the io_service.run() call's returning):
...
// some point in the main() method, prior to creating the background thread
boost::asio::io_service::work work(io_service);
...
This is documented in the io_service documentation:
Stopping the io_service from running out of work
Some applications may need to prevent an io_service object's run() call from returning when there is no more work to do. For example, the io_service may be being run in a background thread that is launched prior to the application's asynchronous operations. The run() call may be kept running by creating an object of type io_service::work:
http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/reference/io_service.html
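Putting that together, a minimal sketch of the client's main() with the work object added (only the work line differs from the code above):
boost::asio::io_service io_service;
// Keeps io_service.run() from returning when no async operation is pending:
boost::asio::io_service::work work(io_service);
Connection* conn = new Connection(io_service);
tcp::endpoint ep(address::from_string("127.0.0.1"), 10011);
conn->GetSocket().async_connect(ep, ConnectHandler);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
// ... the rest of main() is unchanged; run() now keeps going until the
// work object is destroyed or io_service.stop() is called.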

Related

Boost asio: including <arpa/inet.h> causes socket to always output 0 bytes

I'm trying to include <arpa/inet.h> in a low-level library so that I have access to the hton* and ntoh* functions in the library. The low-level library gets called into by higher-level code running a Boost asio socket. I'm aware Boost asio contains the hton* and ntoh* functions, but I'd like to avoid linking all of Boost asio to the library since hton*/ntoh* are all I need.
However, if I simply include <arpa/inet.h> in the low-level library, 0 bytes will always be sent from the Boost asio socket. Confirmed by Wireshark.
Here's the class where I'd like to include <arpa/inet.h> but not Boost. If <arpa/inet.h> is included, 0 bytes will be sent.
#pragma pack(push, 1)
#include "PduHeader.h"
#include <arpa/inet.h>
class ClientInfoPdu
{
public:
ClientInfoPdu(const uint16_t _client_receiver_port)
{
set_client_receiver_port(_client_receiver_port);
}
PduHeader pdu_header{CLIENT_INFO_PDU, sizeof(client_receiver_port)};
inline void set_client_receiver_port(const uint16_t _client_receiver_port)
{
//client_receiver_port = htons(_client_receiver_port);
client_receiver_port = _client_receiver_port;
}
inline uint16_t get_client_receiver_port()
{
return client_receiver_port;
}
inline size_t get_total_size()
{
return sizeof(PduHeader) + pdu_header.get_pdu_payload_size();
}
private:
uint16_t client_receiver_port;
};
#pragma pack(pop)
Here's the higher-level code that includes Boost and attempts to send the data via a socket. The printout indicates 5 bytes were sent; however, 0 bytes were actually sent.
#include "ServerConnectionThread.h"
#include "config/ClientConfig.h"
#include "protocol_common/ClientInfoPdu.h"
#include <boost/asio.hpp>
#include <unistd.h>
using boost::asio::ip::udp;
void ServerConnectionThread::execute()
{
boost::asio::io_service io_service;
udp::endpoint remote_endpoint =
udp::endpoint(boost::asio::ip::address::from_string(SERVER_IP), SERVER_PORT);
udp::socket socket(io_service);
socket.open(udp::v4());
ClientInfoPdu client_info_pdu = ClientInfoPdu(RECEIVE_PORT);
while (true)
{
uint16_t total_size = client_info_pdu.get_total_size();
socket.send_to(boost::asio::buffer(&client_info_pdu, total_size), remote_endpoint);
printf("sent %u bytes\n", total_size);
usleep(1000000);
}
}
Again, simply removing "#include <arpa/inet.h>" will cause this code to function as expected and send 5 bytes per packet.
How is ClientInfoPdu defined? This looks like it is likely UB:
boost::asio::buffer(&client_info_pdu, total_size)
The thing is, total_size is sizeof(PduHeader) + pdu_header.get_pdu_payload_size() (so sizeof(PduHeader) + 2).
The first problem is that you're mixing access modifiers, killing the POD/standard-layout properties of your types.
#include <type_traits>
static_assert(std::is_standard_layout_v<PduHeader> && std::is_trivial_v<PduHeader>);
static_assert(std::is_standard_layout_v<ClientInfoPdu> && std::is_trivial_v<ClientInfoPdu>);
This will fail to compile. Treating the types as POD (as you do) invokes
Undefined Behaviour.
This is likely the explanation for the fact that "it stops working" with some changes. It never worked: it might just accidentally have appeared to work, but it was undefined behaviour.
It's not easy to achieve POD-ness while still getting the convenience of the
constructors. In fact, I don't think that's possible. In short, if you want to
treat your structs as C-style POD types, make them... C-style POD types.
Another thing: a possible implementation of `PduHeader` I can see working for you looks a bit like this:
enum MsgId{CLIENT_INFO_PDU=0x123};
struct PduHeader {
MsgId id;
size_t payload_size;
size_t get_pdu_payload_size() const { return payload_size; }
};
Here, again you might have/need endianness conversions.
Suggestion
In short, if you want this to work, I'd say keep it simple.
Instead of creating non-POD types all over the place that are responsible for endianness conversion by adding getters/setters for each value, why not create a simple user-defined type that always does this, and use it instead?
struct PduHeader {
Short id; // or e.g. uint8_t
Long payload_size;
};
struct ClientInfoPdu {
PduHeader pdu_header; // or inheritance, same effect
Short client_receiver_port;
};
Then just use it as a POD struct:
while (true) {
ClientInfoPdu client_info_pdu;
init_pdu(client_info_pdu);
auto n = socket.send_to(boost::asio::buffer(&client_info_pdu, sizeof(client_info_pdu)), remote_endpoint);
printf("sent %lu bytes\n", n);
std::this_thread::sleep_for(1s);
}
The function init_pdu can be implemented with overloads per submessage:
void init_pdu(ClientInfoPdu& msg) {
msg.pdu_header.id = CLIENT_INFO_PDU;
msg.pdu_header.payload_size = sizeof(msg);
}
There are variations on this where it can become a template or take a PduHeader& (if your message inherits instead of aggregates), but the basic principle is the same.
Endianness Conversion
Now, you'll notice I avoided using uint32_t/uint16_t directly (though uint8_t is fine because it doesn't need byte ordering). Instead, you could define Long and Short as simple POD wrappers around them:
struct Short {
operator uint16_t() const { return ntohs(value); }
Short& operator=(uint16_t v) { value = htons(v); return *this; }
private:
uint16_t value;
};
struct Long {
operator uint32_t() const { return ntohl(value); }
Long& operator=(uint32_t v) { value = htonl(v); return *this; }
private:
uint32_t value;
};
The assignment and conversion operators mean that you can use it as just another uint32_t/uint16_t, except that the necessary conversions are always done.
If you want to stand on the shoulders of giants instead, you can use the better types from Boost Endian, which also has lots more advanced facilities.
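As a sketch of that route (assuming the buffer types from <boost/endian/buffers.hpp>, available since Boost 1.58), the hand-written wrappers above collapse to:
#include <boost/endian/buffers.hpp>

#pragma pack(push, 1)
struct PduHeader {
    boost::endian::big_uint16_buf_t id;           // write with =, read with .value()
    boost::endian::big_uint32_buf_t payload_size; // stored big-endian in memory
};
#pragma pack(pop)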
DEMO
Live On Coliru
#include <type_traits>
#include <cstdint>
#include <thread>
#include <arpa/inet.h>
using namespace std::chrono_literals;
#pragma pack(push, 1)
enum MsgId{CLIENT_INFO_PDU=0x123};
struct Short {
operator uint16_t() const { return ntohs(value); }
Short& operator=(uint16_t v) { value = htons(v); return *this; }
private:
uint16_t value;
};
struct Long {
operator uint32_t() const { return ntohl(value); }
Long& operator=(uint32_t v) { value = htonl(v); return *this; }
private:
uint32_t value;
};
static_assert(std::is_standard_layout_v<Short>);
static_assert(std::is_trivial_v<Short>);
static_assert(std::is_standard_layout_v<Long>);
static_assert(std::is_trivial_v<Long>);
struct PduHeader {
Short id; // or e.g. uint8_t
Long payload_size;
};
struct ClientInfoPdu {
PduHeader pdu_header; // or inheritance, same effect
Short client_receiver_port;
};
void init_pdu(ClientInfoPdu& msg) {
msg.pdu_header.id = CLIENT_INFO_PDU;
msg.pdu_header.payload_size = sizeof(msg);
}
static_assert(std::is_standard_layout_v<PduHeader> && std::is_trivial_v<PduHeader>);
static_assert(std::is_standard_layout_v<ClientInfoPdu> && std::is_trivial_v<ClientInfoPdu>);
#pragma pack(pop)
#include <boost/asio.hpp>
//#include <unistd.h>
using boost::asio::ip::udp;
#define SERVER_IP "127.0.0.1"
#define SERVER_PORT 6767
#define RECEIVE_PORT 6868
struct ServerConnectionThread {
void execute() {
boost::asio::io_service io_service;
udp::endpoint const remote_endpoint =
udp::endpoint(boost::asio::ip::address::from_string(SERVER_IP), SERVER_PORT);
udp::socket socket(io_service);
socket.open(udp::v4());
while (true) {
ClientInfoPdu client_info_pdu;
init_pdu(client_info_pdu);
auto n = socket.send_to(boost::asio::buffer(&client_info_pdu, sizeof(client_info_pdu)), remote_endpoint);
printf("sent %lu bytes\n", n);
std::this_thread::sleep_for(1s);
}
}
};
int main(){ }

How to implement a connection pool with Boost asio?

I have thousands of concurrent client threads (a C10K-style synchronous workload); creating a new TCP connection for each request can be time-consuming, and it does not scale as the number of concurrent threads grows.
So I thought a shared TCP connection pool might be a good choice: all connections in the pool are already established, all requests share the pool, a connection is pulled out when a message needs to be sent and pushed back in once the reply has been received.
But how can I achieve that? By the way, my server side follows a multi-threaded async model.
The piece code for connection(client-side):
class SyncTCPClient {
public:
SyncTCPClient(const std::string &raw_ip_address, unsigned short port_num) :
m_ep(asio::ip::address::from_string(raw_ip_address), port_num), m_sock(m_ios) {
m_sock.open(m_ep.protocol());
}
void connect() {
m_sock.connect(m_ep);
}
//...
private:
asio::io_service m_ios;
asio::ip::tcp::endpoint m_ep;
asio::ip::tcp::socket m_sock;
};
The piece code for connection pool(client-side):
class ConnectionPool {
private:
std::queue<std::shared_ptr<SyncTCPClient>> pool;
std::mutex mtx;
std::condition_variable no_empty;
public:
ConnectionPool(int size) {
std::cout << "Constructor for connection pool." << std::endl;
for (int i = 0; i < size; i++) {
std::shared_ptr<SyncTCPClient> cliPtr = std::make_shared<SyncTCPClient>(raw_ip_address, port_num);
cliPtr->connect();
pool.push(cliPtr);
}
}
//...
};
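For reference, the pull-out/push-in pair described above could be sketched like this (hypothetical acquire/release methods, to go where the //... placeholder is, using the mtx and no_empty members already declared):
std::shared_ptr<SyncTCPClient> acquire() {
    std::unique_lock<std::mutex> lock(mtx);
    no_empty.wait(lock, [this] { return !pool.empty(); }); // block while pool is empty
    std::shared_ptr<SyncTCPClient> cli = pool.front();
    pool.pop();
    return cli;
}
void release(std::shared_ptr<SyncTCPClient> cli) {
    {
        std::lock_guard<std::mutex> lock(mtx);
        pool.push(std::move(cli));
    }
    no_empty.notify_one(); // wake one waiting requester
}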
The piece code for connection acceptance(server-side):
std::shared_ptr<asio::ip::tcp::socket> sock = std::make_shared<asio::ip::tcp::socket>(m_ios);
m_acceptor.async_accept(*sock.get(),
[this, sock](const boost::system::error_code& error) {
onAccept(error, sock);
}
);
Much appreciated :-)

Efficient message factory and handler in C++

Our company is rewriting most of the legacy C code in C++11. (Which also means I am a C programmer learning C++). I need advice on message handlers.
We have a distributed system: a server process sends a packed message over TCP to a client process.
In C code this was being done:
- parse message based on type and subtype, which are always the first 2 fields
- call a handler as handler[type](Message *msg)
- handler creates temporary struct say, tmp_struct to hold the parsed values and ..
- calls subhandler[type][subtype](tmp_struct)
There is only one handler per type/subtype.
Moving to C++11 and a multi-threaded environment, the basic idea I had was to:
1) Register a processor object for each type/subtype combination. This is
actually a vector of vectors -
vector< vector<MsgProcessor*> >
class MsgProcessor {
public:
// Factory function
virtual Message *create();
virtual void Handler(Message *msg);
};
This will be inherited by different message processors:
class AMsgProcessor : public MsgProcessor {
public:
Message *create() override;
void Handler(Message *msg) override;
};
2) Get the processor using a lookup into the vector of vectors, and get the message using the overloaded create() factory function, so that we can keep the actual message and the parsed values inside the message.
3) Now a bit of a hack: this message should be sent to other threads for the heavy processing. To avoid having to look up in the vector again, I added a pointer to the processor inside the message.
class Message {
const MsgProcessor *proc; // set to processor,
// which we got from the first lookup
// to get factory function.
};
So other threads will just do:
msg->proc->Handler(msg);
This looks bad, but the hope is that it will help separate the message handler from the factory. This is for the case when multiple type/subtype combinations want to create the same Message but handle it differently.
I was searching around and came across:
http://www.drdobbs.com/cpp/message-handling-without-dependencies/184429055?pgno=1
It provides a way to completely separate the message from the handler. But I was wondering if my simple scheme above will be considered an acceptable design or not. Also is this a wrong way of achieving what I want?
Efficiency, as in speed, is the most important requirement for this application. We are already doing a couple of memory jumps: two vector lookups plus a virtual function call to create the message. There are two dereferences to get to the handler, which is not good from a caching point of view, I guess.
Though your requirement is unclear, I think I have a design that might be what you are looking for.
Check out http://coliru.stacked-crooked.com/a/f7f9d5e7d57e6261 for the fully fledged example.
It has following components:
An interface class for Message processors IMessageProcessor.
A base class representing a Message. Message
A registration class, Registrator, which is essentially a singleton for storing the message processors corresponding to a (Type, Subtype) pair. It stores the mapping in an unordered_map. You can also tweak it a bit for better performance. All the exposed APIs of Registrator are protected by a std::mutex.
Concrete implementations of IMessageProcessor: AMsgProcessor and BMsgProcessor in this case.
A simulate function to show how it all fits together.
Pasting the code here as well:
/*
* http://stackoverflow.com/questions/40230555/efficient-message-factory-and-handler-in-c
*/
#include <iostream>
#include <vector>
#include <tuple>
#include <mutex>
#include <memory>
#include <cassert>
#include <unordered_map>
class Message;
class IMessageProcessor
{
public:
virtual Message* create() = 0;
virtual void handle_message(Message*) = 0;
virtual ~IMessageProcessor() {};
};
/*
* Base message class
*/
class Message
{
public:
virtual void populate() = 0;
virtual ~Message() {};
};
using Type = int;
using SubType = int;
using TypeCombo = std::pair<Type, SubType>;
using IMsgProcUptr = std::unique_ptr<IMessageProcessor>;
/*
* Registrator class maintains all the registrations in an
* unordered_map.
* This class owns the MessageProcessor instance inside the
* unordered_map.
*/
class Registrator
{
public:
static Registrator* instance();
// Disable other types of construction
Registrator(const Registrator&) = delete;
void operator=(const Registrator&) = delete;
public:
// TypeCombo assumed to be cheap to copy
template <typename ProcT, typename... Args>
std::pair<bool, IMsgProcUptr> register_proc(TypeCombo typ, Args&&... args)
{
auto proc = std::make_unique<ProcT>(std::forward<Args>(args)...);
std::lock_guard<std::mutex> _(lock_);
if (registrations_.count(typ)) {
// The type is already registered: hand the heap allocated instance
// back to the caller, who now owns the Processor. (Checking first
// avoids moving proc into a pair that insert() then discards.)
return std::make_pair(false, std::move(proc));
}
registrations_.emplace(typ, std::move(proc));
return std::make_pair(true, nullptr);
}
// Get the processor corresponding to TypeCombo
// IMessageProcessor passed is non-owning pointer
// i.e the caller SHOULD not delete it or own it
std::pair<bool, IMessageProcessor*> processor(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
auto fitr = registrations_.find(typ);
if (fitr == registrations_.end()) {
return std::make_pair(false, nullptr);
}
return std::make_pair(true, fitr->second.get());
}
// TypeCombo assumed to be cheap to copy
bool is_type_used(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.find(typ) != registrations_.end();
}
bool deregister_proc(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.erase(typ) == 1;
}
private:
Registrator() = default;
private:
std::mutex lock_;
/*
* Should be replaced with a concurrent map if at all this
* data structure is the main contention point (which I find
* very unlikely).
*/
struct HashTypeCombo
{
public:
std::size_t operator()(const TypeCombo& typ) const noexcept
{
return std::hash<decltype(typ.first)>()(typ.first) ^
std::hash<decltype(typ.second)>()(typ.second);
}
};
std::unordered_map<TypeCombo, IMsgProcUptr, HashTypeCombo> registrations_;
};
Registrator* Registrator::instance()
{
static Registrator inst;
return &inst;
/*
* OR some other DCLP based instance creation
* if lifetime or creation of static is an issue
*/
}
// Define some message processors
class AMsgProcessor final : public IMessageProcessor
{
public:
class AMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on AMsg\n";
}
AMsg() = default;
~AMsg() = default;
};
Message* create() override
{
std::unique_ptr<AMsg> ptr(new AMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<AMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
// Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~AMsgProcessor();
};
AMsgProcessor::~AMsgProcessor()
{
}
class BMsgProcessor final : public IMessageProcessor
{
public:
class BMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on BMsg\n";
}
BMsg() = default;
~BMsg() = default;
};
Message* create() override
{
std::unique_ptr<BMsg> ptr(new BMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<BMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
//Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~BMsgProcessor();
};
BMsgProcessor::~BMsgProcessor()
{
}
TypeCombo read_from_network()
{
return {1, 2};
}
struct ParsedData {
};
Message* populate_message(Message* msg, ParsedData& pdata)
{
// Do something with the message
// Calling a dummy populate method now
msg->populate();
(void)pdata;
return msg;
}
void simulate()
{
TypeCombo typ = read_from_network();
bool ok;
IMessageProcessor* proc = nullptr;
std::tie(ok, proc) = Registrator::instance()->processor(typ);
if (!ok) {
std::cerr << "FATAL!!!" << std::endl;
return;
}
ParsedData parsed_data;
//..... populate parsed_data here ....
proc->handle_message(populate_message(proc->create(), parsed_data));
return;
}
int main() {
/*
* TODO: Not making use or checking the return types after calling register
* its a must in production code!!
*/
// Register AMsgProcessor
Registrator::instance()->register_proc<AMsgProcessor>(std::make_pair(1, 1));
Registrator::instance()->register_proc<BMsgProcessor>(std::make_pair(1, 2));
simulate();
return 0;
}
UPDATE 1
The major source of confusion here seems to be that the architecture of the event system is unknown.
Any self-respecting event system architecture would look something like this:
A pool of threads polling on the socket descriptors.
A pool of threads for handling timer related events.
Comparatively small number (depends on application) of threads to do long blocking jobs.
So, in your case:
You will get a network event on the thread doing epoll_wait, select, or poll.
Read the packet completely and get the processor using the Registrator::instance()->processor() call.
NOTE: The processor() call can be made without any locking if one can guarantee that the underlying unordered_map does not get modified, i.e. no new inserts are made once we start receiving events.
Using the obtained processor we can get the Message and populate it.
Now, this is the part I am not sure how you want to handle. At this point we have the processor, on which you can call handle_message either from the current thread, i.e. the thread doing epoll_wait, or dispatch it to another thread by posting the job (Processor and Message) to that thread's receiving queue.
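A minimal sketch of that hand-off (hypothetical dispatch/worker_loop helpers, not part of the demo above; a simple locked queue stands in for whatever job queue your threads already use):
#include <queue>
#include <mutex>
#include <condition_variable>

struct Job { IMessageProcessor* proc; Message* msg; };

std::queue<Job> job_queue;
std::mutex job_mtx;
std::condition_variable job_cv;

// Called from the epoll/select thread after create() + populate_message():
void dispatch(IMessageProcessor* proc, Message* msg) {
    {
        std::lock_guard<std::mutex> _(job_mtx);
        job_queue.push(Job{proc, msg});
    }
    job_cv.notify_one();
}

// Body of each worker thread:
void worker_loop() {
    for (;;) {
        std::unique_lock<std::mutex> lk(job_mtx);
        job_cv.wait(lk, [] { return !job_queue.empty(); });
        Job job = job_queue.front();
        job_queue.pop();
        lk.unlock();
        job.proc->handle_message(job.msg); // handle_message deletes msg when done
    }
}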

Getting wrong output from boost lock free spsc queue

I am trying to implement a lock-free queue of a user-defined data type using the Boost library, but I am getting the wrong result.
Please help me figure out where I am going wrong.
#include <boost/lockfree/spsc_queue.hpp>
#include <thread>
#include <iostream>
#include <string.h>
#include <time.h>
class Queue
{
private:
unsigned char *m_data;
int m_len;
public:
Queue(unsigned char *data,int len);
Queue(const Queue &obj);
~Queue();
Queue & operator =(const Queue &obj);
unsigned char *getdata()
{
return m_data;
}
int getint()
{
return m_len;
}
};
Queue::Queue(unsigned char* data, int len)
{
m_len=len;
m_data=new unsigned char[m_len];
memcpy(m_data,data,m_len);
}
Queue::Queue(const Queue& obj)
{
m_len= obj.m_len;
m_data=new unsigned char[m_len];
memcpy(m_data,(unsigned char *)obj.m_data,m_len);
}
Queue::~Queue()
{
delete[] m_data;
m_len=0;
}
Queue & Queue::operator =(const Queue &obj)
{
if(this != &obj)
{
m_len=obj.m_len;
m_data=new unsigned char[m_len];
memcpy(m_data,(unsigned char *)obj.m_data,m_len);
}
return *this;
}
boost::lockfree::spsc_queue<Queue*> q(10);
void produce()
{
int i=0;
unsigned char* data=(unsigned char *)malloc(10);
memset(data,1,9);
Queue obj(data,10);
Queue *pqueue=&obj;
printf("%d\n",pqueue->getint());
q.push(pqueue);
}
void consume()
{
Queue *obj;
q.pop(&obj);
printf("%d\n",obj->getint());
}
int main(int argc, char** argv) {
// std::thread t1{produce};
// std::thread t2{consume};
//
// t1.join();
// t2.join();
produce();
consume();
return 0;
}
As per the boost::lockfree::queue requirements, I implemented the following in the class:
Copy Constructor
Assignment Operator
Destructor
Please let me know if anything else is required.
Thanks.
You're using malloc in C++.
You die.
You have 2 lives left.
Seriously, don't do that. Especially since using it with delete[] is clear cut Undefined Behaviour.
Sadly you lose another life here:
Queue obj(data,10);
Queue *pqueue=&obj;
q.push(pqueue);
You store a pointer to a local. More Undefined Behaviour
You have 1 life left.
Last life at
q.pop(&obj);
You pop using an iterator. It will be treated as an output iterator.
You get a return that indicates the number of elements popped, and items
will be written to &obj[0], &obj[1], &obj[2], etc.
Guess what? Undefined Behaviour.
See also: Boost spsc queue segfault
You died.
You're already dead. But you forsake your afterlife with
printf("%d\n",obj->getint());
Since pop might not have popped anything (the queue may have been empty), this in itself is Undefined Behaviour.
The funny part is, you talk about all these constructor requirements but you store pointers in the lockfree queue...?! Just write it:
typedef std::vector<unsigned char> Data;
class Queue {
private:
Data m_data;
public:
Queue(Data data) : m_data(std::move(data)) {}
Queue() : m_data() {}
unsigned char const *getdata() const { return m_data.data(); }
size_t getint() const { return m_data.size(); }
};
boost::lockfree::spsc_queue<Queue> q(10);
Live On Coliru
Notes:
you need to make the consumer check the return code of pop. The push might not have happened, and lock free queues don't block.
you don't need that contraption. Just pass vectors all the way:
C++ Code
Live On Coliru
#include <boost/lockfree/spsc_queue.hpp>
#include <thread>
#include <iostream>
#include <vector>
typedef std::vector<unsigned char> Queue;
boost::lockfree::spsc_queue<Queue> q(10);
void produce() {
Queue obj(10, 1);
std::cout << __FUNCTION__ << " - " << obj.size() << "\n";
q.push(std::move(obj));
}
void consume() {
Queue obj;
while (!q.pop(obj)) { }
std::cout << __FUNCTION__ << " - " << obj.size() << "\n";
}
int main() {
std::thread t1 {produce};
std::thread t2 {consume};
t1.join();
t2.join();
}

Side effects of global static variables

I'm writing a UDP server that currently receives data over UDP, wraps it up in an object, and places it into a concurrent queue. The concurrent queue is the implementation provided here: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
A pool of worker threads pull data out of the queue for processing.
The queue is defined globally as:
static concurrent_queue<boost::shared_ptr<Msg> > g_work_queue;
Now the problem I'm having is that if I simply write a function to produce data and insert it into the queue, and create some consumer threads to pull them out, it works fine.
But the moment I add my UDP-based producer, the worker threads stop being notified of the arrival of data on the queue.
I've tracked the issue down to the end of the push function in concurrent_queue.
Specifically, the line the_condition_variable.notify_one(); does not return when using my network code.
So the problem is related to the way I've written the networking code.
Here is what it looks like.
enum
{
MAX_LENGTH = 1500
};
class Msg
{
public:
Msg()
{
static int i = 0;
i_ = i++;
printf("Construct ObbsMsg: %d\n", i_);
}
~Msg()
{
printf("Destruct ObbsMsg: %d\n", i_);
}
const char* toString() { return data_; }
private:
friend class server;
udp::endpoint sender_endpoint_;
char data_[MAX_LENGTH];
int i_;
};
class server
{
public:
server(boost::asio::io_service& io_service)
: io_service_(io_service),
socket_(io_service, udp::endpoint(udp::v4(), PORT))
{
waitForNextMessage();
}
void waitForNextMessage()
{
printf("Waiting for next msg\n");
next_msg_.reset(new Msg());
socket_.async_receive_from(
boost::asio::buffer(next_msg_->data_, MAX_LENGTH), sender_endpoint_,
boost::bind(&server::handleReceiveFrom, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handleReceiveFrom(const boost::system::error_code& error, size_t bytes_recvd)
{
if (!error && bytes_recvd > 0) {
printf("got data: %s. Adding to work queue\n", next_msg_->toString());
g_work_queue.push(next_msg_); // Add received msg to work queue
waitForNextMessage();
} else {
waitForNextMessage();
}
}
private:
boost::asio::io_service& io_service_;
udp::socket socket_;
udp::endpoint sender_endpoint_;
boost::shared_ptr<Msg> next_msg_;
};
int main(int argc, char* argv[])
{
try {
boost::asio::io_service io_service;
server s(io_service);
io_service.run();
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << std::endl;
}
return 0;
}
Now I've found that if handleReceiveFrom is able to return, then notify_one() in concurrent_queue returns. So I think it's because I have a recursive loop.
So what's the correct way to start listening for new data? And is the async UDP server example flawed, since I based this off what it was already doing?
EDIT: Ok the issue just got even weirder.
What I haven't mentioned here is that I have a class called processor.
Processor looks like this:
class processor
{
public:
processor(int thread_pool_size) :
thread_pool_size_(thread_pool_size) { }
void start()
{
boost::thread_group threads;
for (std::size_t i = 0; i < thread_pool_size_; ++i){
threads.create_thread(boost::bind(&processor::worker, this));
}
}
void worker()
{
while (true){
boost::shared_ptr<Msg> msg;
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
private:
int thread_pool_size_;
};
Now it seems that if I extract the worker function out on its own and start the threads from main, it works! Can someone explain why a thread functions as I would expect outside of a class, but inside one it has side effects?
EDIT2: Now it's getting even weirder still
I pulled out two functions (exactly the same).
One is called consumer, the other worker.
i.e.
void worker()
{
while (true){
boost::shared_ptr<Msg> msg;
printf("waiting for msg\n");
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
void consumer()
{
while (true){
boost::shared_ptr<Msg> msg;
printf("waiting for msg\n");
g_work_queue.wait_and_pop(msg);
printf("Got msg: %s\n", msg->toString());
}
}
Now, consumer lives at the top of the server.cpp file, i.e. where our server code lives as well.
On the other hand, worker lives in the processor.cpp file.
Now I'm not using processor at all at the moment. The main function now looks like this:
void consumer();
void worker();
int main(int argc, char* argv[])
{
try {
boost::asio::io_service io_service;
server net(io_service);
//processor s(7);
boost::thread_group threads;
for (std::size_t i = 0; i < 7; ++i){
threads.create_thread(worker); // this doesn't work
// threads.create_thread(consumer); // THIS WORKS!?!?!?
}
// s.start();
printf("Server Started...\n");
boost::asio::io_service::work work(io_service);
io_service.run();
printf("exiting...\n");
} catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
Why is it that consumer is able to receive the queued items, but worker is not?
They are identical implementations with different names.
This isn't making any sense. Any ideas?
Here is the sample output when receiving the txt "Hello World":
Output 1: not working, when calling the worker function or using the processor class.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Output 2: works when calling the consumer function which is identical to the worker function.
Construct ObbsMsg: 0
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
waiting for msg
Server Started...
waiting for msg
got data: hello world. Adding to work queue
Construct ObbsMsg: 1
Got msg: hello world <----- this is what I've been wanting to see!
Destruct ObbsMsg: 0
waiting for msg
To answer my own question.
It seems the problem has to do with the declaration of g_work_queue,
declared in a header file as: static concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
Declaring it static is not what I want to be doing: apparently that creates a separate queue object for each compiled .o file, and obviously separate locks, etc.
This explains why it worked when the queue was being manipulated by a consumer and producer in the same source file, but not when they were in different files: the threads were actually waiting on different objects.
So I've redeclared the work queue like so.
-- workqueue.h --
extern concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
-- workqueue.cpp --
#include "workqueue.h"
concurrent_queue< boost::shared_ptr<Msg> > g_work_queue;
Doing this fixes the problem.
