What I'm trying to achieve is to get an update whenever the number of connections to a boost::signals2::signal object changes.
To give you the whole picture: I'm writing a GUI application which displays data from a remote server. Each "window" in the application should get its data for a specific dataset. If a dataset is to be displayed, it needs to be remotely subscribed from the server. Multiple windows can display the same dataset (with different ordering or filtering). My goal is to subscribe to a specific dataset only ONCE and unsubscribe once it's no longer needed.
Background: HFT software, displaying market data (order books, trades, ...).
My code so far: I got stuck once I tried to implement operator().
enum UpdateCountMethod {
UP = 1,
DOWN = -1
};
/**
* \brief Connection class which holds a Slot as long as an instance of this class "survives".
*/
class Connection {
public:
Connection(const boost::function<void (int)> updateFunc, const boost::signals2::connection conn) : update(updateFunc), connection(conn) {
update(UP); //Increase counter only. Connection was already made.
}
~Connection() {
update(DOWN); //Decrease counter before disconnecting the slot.
connection.disconnect();
}
private:
const boost::function<void(int)> update; // Functor for updating the connection count.
const boost::signals2::connection connection; // Actual boost connection this object belongs to.
};
/**
* \brief This is a Signal/Slot "container" whose number of connections can be tracked.
*/
template<typename Signature>
class ObservableSignal{
typedef typename boost::signals2::slot<Signature> slot_type;
public:
ObservableSignal() : count(0) {}
boost::shared_ptr<Connection> connect(const slot_type &t) {
// Create the boost signal connection and return our shared Connection object.
boost::signals2::connection conn = signal.connect(t);
return boost::shared_ptr<Connection>(new Connection(boost::bind(&ObservableSignal::updateCount, this, _1), conn));
}
// This is where I don't know anymore.
void operator() (/* Parameter depend on "Signature" */) {
signal(/* Parameter depend on "Signature" */); //Call the actual boost signal
}
private:
void updateCount(int updown) {
// TODO: Handle subscription if count is leaving or approaching 0.
count += updown;
std::cout << "Count: " << count << std::endl;
}
int count; // current count of connections to this signal
boost::signals2::signal<Signature> signal; // Actual boost signal
};
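For illustration, one way the missing operator() could be filled in is with C++11 variadic templates and perfect forwarding (a sketch of the idea only, not part of my code above):
template<typename... Args>
void operator() (Args&&... args) {
    // Forward whatever arguments the Signature expects to the actual boost signal.
    // (needs <utility> for std::forward)
    signal(std::forward<Args>(args)...);
}
Pre-C++11, the usual alternative is to generate a set of overloads for 0..N arguments (for example with Boost.Preprocessor).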
Whenever I run my simulation, the msg that is initially created at rdrchk1 gets stuck bouncing between rdrchk1 and rdrsucess1.
Here is my C++ code:
#include <string.h>
#include <omnetpp.h>
using namespace omnetpp;
class rdr : public cSimpleModule
{
protected:
// The following redefined virtual function holds the algorithm.
virtual void initialize() override;
virtual void handleMessage(cMessage *msg) override;
};
// The module class needs to be registered with OMNeT++
Define_Module(rdr);
void rdr::initialize()
{
int v1 = rand() % 100;
int v2 = rand() % 100;
int v3 = rand() % 100;
if (strcmp("rdrchk1", getName()) == 0) {
cMessage *msg = new cMessage("objectcheck");
if (v1<78|| v2 < 82 || v3 <69){
int n = 0;
send(msg, "out", n);
}
else{
int d=1;
send(msg, "out", d);
}
}
}
void rdr::handleMessage(cMessage *msg)
{
int t = 0;
send(msg, "out",t);
}
Here is my NED code:
simple rdr
{
parameters:
#display("i=block/routing");
gates:
input in[4];
output out[4];
}
//
network radr
{
#display("bgb=356,232");
submodules:
rdrchk1: rdr {
#display("p=85,67");
}
rdrfail1: rdr {
#display("p=275,133;i=block/wheelbarrow");
}
rdrsucess1: rdr {
#display("p=291,61");
}
connections allowunconnected:
rdrchk1.out[1] --> rdrfail1.in++;
rdrchk1.out[0] --> rdrsucess1.in++;
rdrchk1.in[2] <-- rdrfail1.out++;
rdrchk1.in[3] <-- rdrsucess1.out++;
}
I know it's stuck because even when I edit my code so that the message is guaranteed to go to rdrfail1 first, at the next step it gets stuck going between rdrchk1 and rdrsucess1. Can anyone tell me why it is doing that and what I could do to fix it? Thank you for your time.
Here is what is happening in your model.
1. In initialize() of rdrchk1 a new message is created. Then that message is sent:
   a. to rdrsucess1 when the condition (v1 < 78 || v2 < 82 || v3 < 69) is true, or
   b. to rdrfail1 otherwise.
2. If rdrsucess1 receives the message, it immediately sends that message back to rdrchk1 (because in your network, port 0 of the out gate of rdrsucess1 is connected to rdrchk1).
3. If rdrfail1 receives the message, it immediately sends that message back to rdrchk1 (because in your network, port 0 of the out gate of rdrfail1 is connected to rdrchk1).
4. Then rdrchk1 receives the message and immediately sends it to rdrsucess1.
5. Points 2 and 4 are then repeated endlessly.
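One possible way out (a sketch only, assuming the intent is that rdrsucess1 and rdrfail1 are terminal nodes rather than forwarders): let only rdrchk1 forward in handleMessage() and have the other modules consume the message instead of bouncing it back.
void rdr::handleMessage(cMessage *msg)
{
    if (strcmp("rdrchk1", getName()) == 0) {
        // rdrchk1 keeps its forwarding behaviour
        send(msg, "out", 0);
    }
    else {
        // rdrsucess1 / rdrfail1 end the message's life here,
        // which breaks the endless loop between points 2 and 4
        EV << getName() << " received " << msg->getName() << ", deleting it\n";
        delete msg;
    }
}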
The ZeroMQ documentation mentions zmq_poll as a method for multiplexing multiple sockets on a single thread. Is there any benefit to polling in a thread that simply consumes data from one socket? Or should I just use zmq_recv?
For example:
/* POLLING A SINGLE SOCKET */
while (true) {
zmq::poll(&items[0], 1, -1);
if (items[0].revents & ZMQ_POLLIN) {
int size = zmq_recv(receiver, msg, 255, 0);
if (size != -1) {
// do something with msg
}
}
}
vs.
/* NO POLLING AND BLOCKING RECV */
while (true) {
int size = zmq_recv(receiver, msg, 255, 0);
if (size != -1) {
// do something with msg
}
}
Is there ever a situation to prefer the version with polling, or should I only use it for multiplexing? Does polling result in more efficient CPU usage? Does the answer depend on the rate of messages being received?
*** Editing this post to include a toy example ***
The reason for asking this question is that I have observed that I can achieve a much higher throughput on my subscriber if I do not poll (more than an order of magnitude).
#include <thread>
#include <zmq.hpp>
#include <iostream>
#include <unistd.h>
#include <chrono>
using msg_t = char[88];
using timepoint_t = std::chrono::steady_clock::time_point; // matches the steady_clock used in now()
using milliseconds = std::chrono::milliseconds;
using microseconds = std::chrono::microseconds;
/* Log stats about how many packets were sent/received */
class SocketStats {
public:
SocketStats(const std::string& name) : m_socketName(name), m_timePrev(now()) {}
void update() {
m_numPackets++;
timepoint_t timeNow = now();
if (duration(timeNow, m_timePrev) > m_logIntervalMs) {
uint64_t packetsPerSec = m_numPackets - m_numPacketsPrev;
std::cout << m_socketName << " : " << "processed " << (packetsPerSec) << " packets" << std::endl;
m_numPacketsPrev = m_numPackets;
m_timePrev = timeNow;
}
}
private:
timepoint_t now() { return std::chrono::steady_clock::now(); }
static milliseconds duration(timepoint_t timeNow, timepoint_t timePrev) {
return std::chrono::duration_cast<milliseconds>(timeNow - timePrev);
}
timepoint_t m_timePrev;
uint64_t m_numPackets = 0;
uint64_t m_numPacketsPrev = 0;
milliseconds m_logIntervalMs = milliseconds{1000};
const std::string m_socketName;
};
/* non-polling subscriber uses blocking receive and no poll */
void startNonPollingSubscriber(){
SocketStats subStats("NonPollingSubscriber");
zmq::context_t ctx(1);
zmq::socket_t sub(ctx, ZMQ_SUB);
sub.connect("tcp://127.0.0.1:5602");
sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);
while (true) {
zmq::message_t msg;
bool success = sub.recv(&msg);
if (success) { subStats.update(); }
}
}
/* polling subscriber receives messages when available */
void startPollingSubscriber(){
SocketStats subStats("PollingSubscriber");
zmq::context_t ctx(1);
zmq::socket_t sub(ctx, ZMQ_SUB);
sub.connect("tcp://127.0.0.1:5602");
sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);
zmq::pollitem_t items [] = {{static_cast<void*>(sub), 0, ZMQ_POLLIN, 0 }};
while (true) {
zmq::message_t msg;
int rc = zmq::poll (&items[0], 1, -1);
if (rc < 1) { continue; }
if (items[0].revents & ZMQ_POLLIN) {
bool success = sub.recv(&msg, ZMQ_DONTWAIT);
if (success) { subStats.update(); }
}
}
}
void startFastPublisher() {
SocketStats pubStats("FastPublisher");
zmq::context_t ctx(1);
zmq::socket_t pub(ctx, ZMQ_PUB);
pub.bind("tcp://127.0.0.1:5602");
while (true) {
msg_t mymessage;
zmq::message_t msg(sizeof(msg_t));
memcpy((char *)msg.data(), (void*)(&mymessage), sizeof(msg_t));
bool success = pub.send(msg, ZMQ_DONTWAIT); // pass the message_t itself, not a pointer to it
if (success) { pubStats.update(); }
}
}
int main() {
std::thread t_sub1(startPollingSubscriber);
sleep(1);
std::thread t_sub2(startNonPollingSubscriber);
sleep(1);
std::thread t_pub(startFastPublisher);
while(true) {
sleep(10);
}
}
Q : "Is there any benefit to polling in a thread that simply consumes data from one socket?"
Oh sure there is.
As a principal promoter of non-blocking designs, I always advocate to design zero-waiting .poll()-s before deciding on .recv()-calls.
Q : "Does polling result in more efficient CPU usage?"
A harder one, yet I love it:
This question is decidable in two distinct manners:
a) read the source-code of both the .poll()-method and the .recv()-method, as ported onto your target platform and guesstimate the costs of calling each v/s the other
b) test either of the use-cases inside your run-time ecosystem and have the hard facts micro-benchmarked in-vivo.
Either way, you see the difference.
What you cannot see ATM are the add-on costs and other impacts that appear once you try (or once you are forced to) extend the use-case so as to accommodate other properties, not included in either the former or the latter.
Here, my principal preference to use .poll() before deciding further, enables other priority-based re-ordering of actual .recv()-calls and other, higher level, decisions, that neither the source-code, nor the test could ever decide.
Do not hesitate to test first, and if the tests seem inconclusive (on your scale of { low | ultra-low }-latency sensitivity), dig deeper into the source-code to see why.
Typically you will get the best performance by draining the socket before doing another poll. In other words, once poll returns, you read until read returns "no more data", at which point you call poll.
For an example, see https://github.com/nyfix/OZ/blob/4627b0364be80de4451bf1a80a26c00d0ba9310f/src/transport.c#L524
There are trade-offs with this approach (mentioned in the code comments), but if you're only reading from a single socket you can ignore these.
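For illustration, a minimal sketch of that drain-then-poll loop, reusing the items / receiver / msg names from the question's first snippet (error handling kept to a minimum, so this is an outline of the pattern rather than production code):
while (true) {
    zmq_poll(&items[0], 1, -1);                  // block until the socket is readable
    if (items[0].revents & ZMQ_POLLIN) {
        // drain: keep reading until the socket reports "no more data"
        while (true) {
            int size = zmq_recv(receiver, msg, 255, ZMQ_DONTWAIT);
            if (size == -1) {
                if (zmq_errno() == EAGAIN)
                    break;                       // socket drained, go back to poll
                break;                           // real error: handle/log as needed
            }
            // do something with msg
        }
    }
}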
I have thousands of concurrent client threads making synchronous requests (C10K). Creating a new TCP connection for every request might be time-consuming, and it cannot scale as the number of concurrent threads goes up.
So I thought a shared connection (TCP) pool might be a good choice for me: all connections (TCP) within the pool are established, all requests share the pool, pulling a connection out when they need to send a message and pushing it back in once receiving the reply is done.
But how can I achieve that? By the way, my server side follows a multi-threaded async model.
The piece code for connection(client-side):
class SyncTCPClient {
public:
SyncTCPClient(const std::string &raw_ip_address, unsigned short port_num) :
m_ep(asio::ip::address::from_string(raw_ip_address), port_num), m_sock(m_ios) {
m_sock.open(m_ep.protocol());
}
void connect() {
m_sock.connect(m_ep);
}
//...
private:
asio::io_service m_ios;
asio::ip::tcp::endpoint m_ep;
asio::ip::tcp::socket m_sock;
};
The piece code for connection pool(client-side):
class ConnectionPool {
private:
std::queue<std::shared_ptr<SyncTCPClient>> pool;
std::mutex mtx;
std::condition_variable no_empty;
public:
ConnectionPool(int size) {
std::cout << "Constructor for connection pool." << std::endl;
for (int i = 0; i < size; i++) {
std::shared_ptr<SyncTCPClient> cliPtr = std::make_shared<SyncTCPClient>(raw_ip_address, port_num);
cliPtr->connect();
pool.push(cliPtr);
}
}
//...
};
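For what it's worth, here is roughly what I imagine the //... part could contain: blocking acquire/release member functions built on the existing pool, mtx and no_empty members (just a sketch, untested):
std::shared_ptr<SyncTCPClient> acquire() {
    std::unique_lock<std::mutex> lock(mtx);
    // block until a connection is available
    no_empty.wait(lock, [this] { return !pool.empty(); });
    std::shared_ptr<SyncTCPClient> cli = pool.front();
    pool.pop();
    return cli;
}
void release(std::shared_ptr<SyncTCPClient> cli) {
    {
        std::lock_guard<std::mutex> lock(mtx);
        pool.push(std::move(cli));
    }
    no_empty.notify_one(); // wake one thread waiting in acquire()
}
A caller would then do auto conn = pool.acquire(); ... pool.release(conn);, ideally wrapped in a small RAII guard so the connection is always returned to the pool.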
The piece code for connection acceptance(server-side):
std::shared_ptr<asio::ip::tcp::socket> sock = std::make_shared<asio::ip::tcp::socket>(m_ios);
m_acceptor.async_accept(*sock.get(),
[this, sock](const boost::system::error_code& error) {
onAccept(error, sock);
}
);
Much appreciated-)
Our company is rewriting most of the legacy C code in C++11. (Which also means I am a C programmer learning C++). I need advice on message handlers.
We have a distributed system - a server process sends a packed message over TCP to a client process.
In C code this was being done:
- parse the message based on type and subtype, which are always the first 2 fields
- call a handler as handler[type](Message *msg)
- the handler creates a temporary struct, say tmp_struct, to hold the parsed values and ..
- calls subhandler[type][subtype](tmp_struct)
There is only one handler per type/subtype.
Moving to C++11 and a multi-threaded environment. The basic idea I had was to -
1) Register a processor object for each type/subtype combination. This is
actually a vector of vectors -
vector< vector >
class MsgProcessor {
    // Factory function
    virtual Message *create();
    virtual void handler(Message *msg);
};
This will be inherited by different message processors
class AMsgProcessor : public MsgProcessor {
    Message *create() override;
    void handler(Message *msg) override;
};
2) Get the processor using a lookup into the vector of vectors.
Get the message using the overridden create() factory function,
so that we can keep the actual message and the parsed values inside the message.
3) Now a bit of a hack: this message should be sent to other threads for the heavy processing. To avoid having to do the lookup in the vector again, I added a pointer to the processor inside the message.
class Message {
const MsgProcessor *proc; // set to processor,
// which we got from the first lookup
// to get factory function.
};
So other threads will just do
msg->proc->handler(msg);
This looks bad, but the hope is that it will help separate the message handler from the factory, for the case where multiple type/subtype combinations want to create the same Message but handle it differently.
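To make the scheme concrete, here is a minimal sketch of how I picture it (all names are placeholders, not the real code):
#include <vector>
class Message;
class MsgProcessor {
public:
    virtual Message *create() = 0;                  // factory
    virtual void handler(Message *msg) const = 0;   // per type/subtype handler
    virtual ~MsgProcessor() = default;
};
class Message {
public:
    const MsgProcessor *proc = nullptr;  // set after the first lookup
    virtual ~Message() = default;
};
// registration table: processors[type][subtype]
std::vector<std::vector<MsgProcessor*>> processors;
// receiving thread:
//   MsgProcessor *p = processors[type][subtype];
//   Message *m = p->create();   // parse into the concrete message
//   m->proc = p;                // remember the processor
// worker thread:
//   m->proc->handler(m);        // no second lookup needed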
I was searching about this and came across :
http://www.drdobbs.com/cpp/message-handling-without-dependencies/184429055?pgno=1
It provides a way to completely separate the message from the handler. But I was wondering if my simple scheme above would be considered an acceptable design or not. Also, is this a wrong way of achieving what I want?
Efficiency, as in speed, is the most important requirement for this application. We are already doing a couple of memory jumps: 2 vectors plus a virtual function call to create the message. Then there are 2 dereferences to get to the handler, which is not good from a caching point of view, I guess.
Though your requirement is unclear, I think I have a design that might be what you are looking for.
Check out http://coliru.stacked-crooked.com/a/f7f9d5e7d57e6261 for the fully fledged example.
It has the following components:
An interface class for message processors: IMessageProcessor.
A base class representing a message: Message.
A registration class, Registrator, which is essentially a singleton storing the message processor corresponding to each (Type, Subtype) pair. It keeps the mapping in an unordered_map. You can also tweak it a bit for better performance. All the exposed APIs of Registrator are protected by a std::mutex.
Concrete implementations of IMessageProcessor: AMsgProcessor and BMsgProcessor in this case.
A simulate function to show how it all fits together.
Pasting the code here as well:
/*
* http://stackoverflow.com/questions/40230555/efficient-message-factory-and-handler-in-c
*/
#include <iostream>
#include <vector>
#include <tuple>
#include <mutex>
#include <memory>
#include <cassert>
#include <unordered_map>
class Message;
class IMessageProcessor
{
public:
virtual Message* create() = 0;
virtual void handle_message(Message*) = 0;
virtual ~IMessageProcessor() {};
};
/*
* Base message class
*/
class Message
{
public:
virtual void populate() = 0;
virtual ~Message() {};
};
using Type = int;
using SubType = int;
using TypeCombo = std::pair<Type, SubType>;
using IMsgProcUptr = std::unique_ptr<IMessageProcessor>;
/*
* Registrator class maintains all the registrations in an
* unordered_map.
* This class owns the MessageProcessor instance inside the
* unordered_map.
*/
class Registrator
{
public:
static Registrator* instance();
// Disable other types of construction
Registrator(const Registrator&) = delete;
void operator=(const Registrator&) = delete;
public:
// TypeCombo assumed to be cheap to copy
template <typename ProcT, typename... Args>
std::pair<bool, IMsgProcUptr> register_proc(TypeCombo typ, Args&&... args)
{
    auto proc = std::make_unique<ProcT>(std::forward<Args>(args)...);
    std::lock_guard<std::mutex> _(lock_);
    if (registrations_.count(typ) != 0) {
        // Return the heap allocated instance back
        // to the caller if the type is already registered.
        // The caller now owns the Processor.
        return std::make_pair(false, std::move(proc));
    }
    registrations_.emplace(typ, std::move(proc));
    return std::make_pair(true, IMsgProcUptr());
}
// Get the processor corresponding to TypeCombo
// IMessageProcessor passed is non-owning pointer
// i.e the caller SHOULD not delete it or own it
std::pair<bool, IMessageProcessor*> processor(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
auto fitr = registrations_.find(typ);
if (fitr == registrations_.end()) {
return std::make_pair(false, nullptr);
}
return std::make_pair(true, fitr->second.get());
}
// TypeCombo assumed to be cheap to copy
bool is_type_used(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.find(typ) != registrations_.end();
}
bool deregister_proc(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.erase(typ) == 1;
}
private:
Registrator() = default;
private:
std::mutex lock_;
/*
* Should be replaced with a concurrent map if at all this
* data structure is the main contention point (which I find
* very unlikely).
*/
struct HashTypeCombo
{
public:
std::size_t operator()(const TypeCombo& typ) const noexcept
{
return std::hash<decltype(typ.first)>()(typ.first) ^
std::hash<decltype(typ.second)>()(typ.second);
}
};
std::unordered_map<TypeCombo, IMsgProcUptr, HashTypeCombo> registrations_;
};
Registrator* Registrator::instance()
{
static Registrator inst;
return &inst;
/*
* OR some other DCLP based instance creation
* if lifetime or creation of static is an issue
*/
}
// Define some message processors
class AMsgProcessor final : public IMessageProcessor
{
public:
class AMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on AMsg\n";
}
AMsg() = default;
~AMsg() = default;
};
Message* create() override
{
std::unique_ptr<AMsg> ptr(new AMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<AMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
// Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~AMsgProcessor();
};
AMsgProcessor::~AMsgProcessor()
{
}
class BMsgProcessor final : public IMessageProcessor
{
public:
class BMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on BMsg\n";
}
BMsg() = default;
~BMsg() = default;
};
Message* create() override
{
std::unique_ptr<BMsg> ptr(new BMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<BMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
//Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~BMsgProcessor();
};
BMsgProcessor::~BMsgProcessor()
{
}
TypeCombo read_from_network()
{
return {1, 2};
}
struct ParsedData {
};
Message* populate_message(Message* msg, ParsedData& pdata)
{
// Do something with the message
// Calling a dummy populate method now
msg->populate();
(void)pdata;
return msg;
}
void simulate()
{
TypeCombo typ = read_from_network();
bool ok;
IMessageProcessor* proc = nullptr;
std::tie(ok, proc) = Registrator::instance()->processor(typ);
if (!ok) {
std::cerr << "FATAL!!!" << std::endl;
return;
}
ParsedData parsed_data;
//..... populate parsed_data here ....
proc->handle_message(populate_message(proc->create(), parsed_data));
return;
}
int main() {
/*
 * TODO: Not using or checking the return values after calling register_proc;
 * that is a must in production code!!
*/
// Register AMsgProcessor
Registrator::instance()->register_proc<AMsgProcessor>(std::make_pair(1, 1));
Registrator::instance()->register_proc<BMsgProcessor>(std::make_pair(1, 2));
simulate();
return 0;
}
UPDATE 1
The major source of confusion here seems to be that the architecture of the event system is unknown.
Any self-respecting event system architecture would look something like below:
A pool of threads polling on the socket descriptors.
A pool of threads for handling timer related events.
Comparatively small number (depends on application) of threads to do long blocking jobs.
So, in your case:
You will get a network event on the thread doing epoll_wait or select or poll.
Read the packet completely and get the processor using the Registrator::instance()->processor() call.
NOTE: the processor() call can be made without any locking if one can guarantee that the underlying unordered_map does not get modified, i.e. no new inserts are made once we start receiving events.
Using the obtained processor we can get the Message and populate it.
Now, this is the part that I am not sure how you want it to be. At this point we have the processor, on which you can call handle_message either from the current thread, i.e. the thread which is doing epoll_wait, or dispatch it to another thread by posting the job (Processor and Message) to that thread's receiving queue (see the sketch below).
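For illustration, a bare-bones sketch of such a receiving queue (not part of the code above; the Job layout and names are just one possibility):
#include <condition_variable>
#include <deque>
#include <mutex>
struct Job {
    IMessageProcessor* proc;   // non-owning, owned by Registrator
    Message*           msg;    // ownership passes to the worker
};
class JobQueue {
public:
    void post(Job job) {
        {
            std::lock_guard<std::mutex> _(lock_);
            jobs_.push_back(job);
        }
        cv_.notify_one();
    }
    Job take() {
        std::unique_lock<std::mutex> lk(lock_);
        cv_.wait(lk, [this] { return !jobs_.empty(); });
        Job job = jobs_.front();
        jobs_.pop_front();
        return job;
    }
private:
    std::mutex lock_;
    std::condition_variable cv_;
    std::deque<Job> jobs_;
};
// On the epoll/select thread:
//   queue.post({proc, populate_message(proc->create(), parsed_data)});
// On the worker thread:
//   Job job = queue.take();
//   job.proc->handle_message(job.msg);   // handle_message deletes msg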
I am toying around with a libwebsockets tutorial trying to make it such that, after it receives a message from a connection over a given protocol, it sends a response to all active connections implementing that protocol. I have used the function libwebsocket_callback_all_protocol but it is not doing what I think it should do from its name (I'm not quite sure what it does from the documentation).
The goal is to have two webpages open and, when info is sent from one, the result will be relayed to both. Below is my code - you'll see that libwebsocket_callback_all_protocol is called in main (which currently does nothing, I think....) :
#include <stdio.h>
#include <stdlib.h>
#include <libwebsockets.h>
#include <string.h>
static int callback_http(struct libwebsocket_context * this,
struct libwebsocket *wsi,
enum libwebsocket_callback_reasons reason, void *user,
void *in, size_t len)
{
return 0;
}
static int callback_dumb_increment(struct libwebsocket_context * this,
struct libwebsocket *wsi,
enum libwebsocket_callback_reasons reason,
void *user, void *in, size_t len)
{
switch (reason) {
case LWS_CALLBACK_ESTABLISHED: // just log message that someone is connecting
printf("connection established\n");
break;
case LWS_CALLBACK_RECEIVE: { // the funny part
// create a buffer to hold our response
// it has to have some pre and post padding. You don't need to care
// what comes there, libwebsockets will do everything for you. For more info see
// http://git.warmcat.com/cgi-bin/cgit/libwebsockets/tree/lib/libwebsockets.h#n597
unsigned char *buf = (unsigned char*) malloc(LWS_SEND_BUFFER_PRE_PADDING + len +
LWS_SEND_BUFFER_POST_PADDING);
int i;
// the `void *in` parameter holds the incoming request
// we're just going to put it in reverse order into `buf` at the
// correct offset. `len` holds the length of the request.
for (i=0; i < len; i++) {
buf[LWS_SEND_BUFFER_PRE_PADDING + (len - 1) - i ] = ((char *) in)[i];
}
// log what we received and what we're going to send as a response.
// that disco syntax `%.*s` is used to print just a part of our buffer
// http://stackoverflow.com/questions/5189071/print-part-of-char-array
printf("received data: %s, replying: %.*s\n", (char *) in, (int) len,
buf + LWS_SEND_BUFFER_PRE_PADDING);
// send response
// just notice that we have to tell where exactly our response starts. That's
// why there's `buf[LWS_SEND_BUFFER_PRE_PADDING]` and how long it is.
// we know that our response has the same length as request because
// it's the same message in reverse order.
libwebsocket_write(wsi, &buf[LWS_SEND_BUFFER_PRE_PADDING], len, LWS_WRITE_TEXT);
// release memory back into the wild
free(buf);
break;
}
default:
break;
}
return 0;
}
static struct libwebsocket_protocols protocols[] = {
/* first protocol must always be HTTP handler */
{
"http-only", // name
callback_http, // callback
0, // per_session_data_size
0
},
{
"dumb-increment-protocol", // protocol name - very important!
callback_dumb_increment, // callback
0, // we don't use any per session data
0
},
{
NULL, NULL, 0, 0 /* End of list */
}
};
int main(void) {
// server url will be http://localhost:9000
int port = 9000;
const char *interface = NULL;
struct libwebsocket_context *context;
// we're not using ssl
const char *cert_path = NULL;
const char *key_path = NULL;
// no special options
int opts = 0;
// create libwebsocket context representing this server
struct lws_context_creation_info info;
memset(&info, 0, sizeof info);
info.port = port;
info.iface = interface;
info.protocols = protocols;
info.extensions = libwebsocket_get_internal_extensions();
info.ssl_cert_filepath = cert_path;
info.ssl_private_key_filepath = key_path;
info.gid = -1;
info.uid = -1;
info.options = opts;
info.user = NULL;
info.ka_time = 0;
info.ka_probes = 0;
info.ka_interval = 0;
/*context = libwebsocket_create_context(port, interface, protocols,
libwebsocket_get_internal_extensions,
cert_path, key_path, -1, -1, opts);
*/
context = libwebsocket_create_context(&info);
if (context == NULL) {
fprintf(stderr, "libwebsocket init failed\n");
return -1;
}
libwebsocket_callback_all_protocol(&protocols[1], LWS_CALLBACK_RECEIVE);
printf("starting server...\n");
// infinite loop, to end this server send SIGTERM. (CTRL+C)
while (1) {
libwebsocket_service(context, 50);
// libwebsocket_service will process all waiting events with their
// callback functions and then wait 50 ms.
// (this is a single threaded webserver and this will keep our server
// from generating load while there are not requests to process)
}
libwebsocket_context_destroy(context);
return 0;
}
I had the same problem: libwebsocket_write on LWS_CALLBACK_ESTABLISHED generated some random segfaults. Asking on the mailing list, the libwebsockets developer Andy Green instructed me that the correct way is to use libwebsocket_callback_on_writable_all_protocol; the file test-server/test-server.c in the library source code shows a sample of its use.
libwebsocket_callback_on_writable_all_protocol(libwebsockets_get_protocol(wsi))
It worked very well to notify all instances, but it only invokes the write callback on all connected instances; it does not define the data to send. You need to manage the data yourself. The sample source file test-server.c shows a sample ring buffer for doing that.
http://ml.libwebsockets.org/pipermail/libwebsockets/2015-January/001580.html
Hope it helps.
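For illustration, a rough sketch of how the two halves could look inside callback_dumb_increment when using this approach (the buffer size and names are placeholders, and a real server would use a per-connection ring buffer as test-server.c does):
// one pending message shared by everyone, just to show the flow
static unsigned char pending[LWS_SEND_BUFFER_PRE_PADDING + 512 + LWS_SEND_BUFFER_POST_PADDING];
static size_t pending_len = 0;
case LWS_CALLBACK_RECEIVE:
    // stash the payload instead of writing immediately ...
    pending_len = len;
    memcpy(&pending[LWS_SEND_BUFFER_PRE_PADDING], in, len);
    // ... and ask libwebsockets to call back every connection on this
    // protocol once it is safe to write to it
    libwebsocket_callback_on_writable_all_protocol(libwebsockets_get_protocol(wsi));
    break;
case LWS_CALLBACK_SERVER_WRITEABLE:
    // invoked once per connection; now it is safe to write
    if (pending_len > 0)
        libwebsocket_write(wsi, &pending[LWS_SEND_BUFFER_PRE_PADDING],
                           pending_len, LWS_WRITE_TEXT);
    break;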
From what I can quickly gather from the documentation, in order to send a message to all clients, what you should do is store somewhere (in a vector, a hashmap, an array, whatever) the struct libwebsocket *wsi pointers that you have access to when your clients connect.
Then when you receive a message and want to broadcast it, simply call libwebsocket_write on all wsi * instances.
That's what I'd do, anyway.