I'd like to create a websocket app in Cowboy that gets its data from another Cowboy handler. Let's say that I want to combine the echo_get example from Cowboy: https://github.com/ninenines/cowboy/tree/master/examples/echo_get/src
with the websocket example:
https://github.com/ninenines/cowboy/tree/master/examples/websocket/src
in such a way that a GET request to the echo handler sends a "push" via the websocket handler to the HTML page in that example.
What is the least complicated way to do it? Can I use the "pipe" operator in some easy way? Do I need to create an intermediate gen_something to pass messages between them? I would appreciate an answer with sample code that shows the glue code for the handlers - I do not really know where to start plumbing the two handlers together.
A websocket handler in Cowboy is a long-lived request process to which you can send websocket or Erlang messages.
In your case, there are two types of processes:
websocket processes: websocket handlers with a websocket connection opened to a client.
echo processes: processes spawned when a client accesses the echo handler with a parameter.
The idea is that an echo process sends an Erlang message to the websocket processes, which in turn send a frame to the client.
For your echo process to be able to send messages to your websocket processes, you need to keep a list of the websocket processes you want to reach. gproc is a quite useful library for that purpose.
You can register processes to the gproc pubsub with gproc_ps:subscribe/2, and send messages to the registered processes with gproc_ps:publish/3.
Cowboy websocket processes receive Erlang messages through the websocket_info/3 callback.
So for example, the websocket handler could be like this:
websocket_init(_, Req, _Opts) ->
    % l means the local (per-node) gproc scope
    % echo is the name of the event the process subscribes to
    gproc_ps:subscribe(l, echo),
    {ok, Req, undefined}.

websocket_info({gproc_ps_event, echo, Echo}, Req, State) ->
    % send the Echo text to the client
    {reply, {text, Echo}, Req, State};
websocket_info(_Info, Req, State) ->
    {ok, Req, State}.
And the echo handler like this (for the plumbing to work, both handlers must be listed in the same cowboy router dispatch, and the gproc application must be started):
echo(<<"GET">>, Echo, Req) ->
    % publish the echo message to the subscribed websocket handlers
    gproc_ps:publish(l, echo, Echo),
    cowboy_req:reply(200, [
        {<<"content-type">>, <<"text/plain; charset=utf-8">>}
    ], Echo, Req);
Related
I'm implementing a distributed system for a project and am a bit confused as to how I should properly implement the req/rep pattern. Basically I have a few endpoints that send requests out for processing and receive the responses.
So basically:
An incoming request is received
The endpoint opens a REQ socket to the broker (a REQ/REP pair)
The broker receives the request and proxies it to an available worker
The worker responds, and the endpoint receives the processed value and reports it back to the caller
I've found a decent load-balancing broker script here: http://zguide.zeromq.org/js:lbbroker. There's also an async client/server pattern I'm interested in implementing: http://zguide.zeromq.org/js:asyncsrv, which I might adapt into a load-balanced implementation.
My question is perhaps a bit simplistic but: would each endpoint open a new socket on EVERY request, or maintain one open socket across requests? The former would mean n connections for the n requests made to the endpoint.
You'd keep the sockets open; there's no need to close them after each request. And there'd be a single socket on every endpoint (client and server). At the server end you read a request from the socket and write your response back to the socket; zmq takes care of ensuring that the response goes back to the right client.
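For illustration, here is a minimal pyzmq sketch of that long-lived-socket arrangement; the address and payloads are invented for the example:

import zmq

ctx = zmq.Context.instance()

# client endpoint: one REQ socket, opened once and reused for every request
req = ctx.socket(zmq.REQ)
req.connect("tcp://localhost:5555")

def call(payload):
    req.send(payload)   # same socket every time; no per-request connect
    return req.recv()   # the matching reply comes back on this socket

# server endpoint: one REP socket serving all connected clients
def serve():
    rep = ctx.socket(zmq.REP)
    rep.bind("tcp://*:5555")
    while True:
        request = rep.recv()            # read a request from the socket
        rep.send(b"done: " + request)   # zmq routes this to the right client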
I was working with different patterns in zeromq in my project, and right now I am using req/rep (later I will shift to dealer/router) and pub/sub. The client sends messages to the server, and the server publishes this information to the other clients that have subscribed.
To use multiple sockets I followed the suggestions in this thread:
Combining pub/sub with req/rep in zeromq, and used zmq_poll. My server polls on the req socket and the pub socket.
While writing the code, and while reading the above post, I guessed that my pub socket would never get polled in, and that's what I am observing now when I run the program. Only my request is polled in; the publish is not happening at all.
If I don't use polling it works fine, i.e. as soon as the server gets the message, I publish it.
So I am unclear on how polling can be useful in this pattern and how I can use it.
You probably don't need to poll the pub socket. You certainly don't need to poll for input (POLLIN) on it, because that can never be triggered: pub sockets are send-only.
The polling pattern might be useful in the case where you want to poll for "ready to send" (POLLOUT) on the req and the pub sockets, allowing you to multiplex those channels. This will be particularly useful if/when you move to using dealer/router.
The reason is that replacing req with a dealer (for example) allows you to send multiple messages before receiving responses. Polling for inbound and outbound messages lets you take maximum advantage of that, as in the sketch below.
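As a pyzmq sketch of what that could look like, assuming the requests now go to a ROUTER-style server (the addresses and job payloads are placeholders of my own):

import zmq

ctx = zmq.Context.instance()

dealer = ctx.socket(zmq.DEALER)        # replaces REQ: no send/recv lockstep
dealer.connect("tcp://localhost:5555")

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")

poller = zmq.Poller()
poller.register(dealer, zmq.POLLIN | zmq.POLLOUT)
poller.register(pub, zmq.POLLOUT)      # PUB is send-only; POLLIN never fires

pending = [b"job-1", b"job-2", b"job-3"]   # requests not yet sent
to_publish = []                            # replies waiting to go out on pub

while True:
    events = dict(poller.poll(100))    # timeout in milliseconds
    if pending and events.get(dealer, 0) & zmq.POLLOUT:
        dealer.send(pending.pop(0))    # can send again before replies arrive
    if events.get(dealer, 0) & zmq.POLLIN:
        to_publish.append(dealer.recv())
    # PUB is nearly always writable; the check only matters near the HWM
    if to_publish and events.get(pub, 0) & zmq.POLLOUT:
        pub.send(b"update: " + to_publish.pop(0))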
I'm currently using zmq with Python. The server is using a REP socket.
Is there a way, when I recv a message, to know who sent it? If I receive two messages, I just need to know whether they come from the same user or not, so a UID, for example, would be enough.
It looks like you want to implement async request handling on the server side: you let the server accept requests, process them asynchronously, and send the responses back to clients whenever the response data is available for each request. Now of course: how would you know, after you're done processing a request, which client to send it back to?
With simple REP sockets, ZMQ makes sure you won't run into this kind of problem by enforcing a recv() -> send(), recv() -> send() sequentiality. In other words, after you do a recv() on a REP socket, you must do a send() before recv()ing from it again. The response will be sent back to the client you got the message from, and there's no doubt about client's address because it's only one client at a time.
But this doesn't really help when you want to parallelize the request handling, does it? There are many cases when the behavior of REP is too restrictive, and that's exactly what Multipart messages and ROUTER (or XREP) sockets are for. XREP breaks the recv() -> send() lockstep of REP, but that causes a problem as we saw earlier - how do you know which client to send the reply back to, if multiple clients are connected? In order to make this work, XREP in ZMQ adds a message part to the front of a message, like an envelope, that includes the identity of the connection that it recv()'d the request from.
There's a whole chapter in the ZMQ Guide about the advanced Request-Reply patterns. You can also find an example for handling async requests here and a good short explanation of the ZMQ connection handling here.
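As a small pyzmq illustration of that identity envelope, assuming plain REQ clients (which insert the empty delimiter frame) and an invented address:

import zmq

ctx = zmq.Context.instance()
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://*:5555")

while True:
    # ROUTER prepends the connection identity to each incoming request,
    # so a REQ client's message arrives as [identity, empty, payload]
    identity, empty, payload = router.recv_multipart()
    # unlike REP, we are free to recv() more requests before answering;
    # the saved identity tells ZMQ which client gets this reply
    router.send_multipart([identity, empty, b"reply to: " + payload])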
Reading http://zguide.zeromq.org/page%3aall#Transient-vs-Durable-Sockets, you can only get the identity of the socket you're working with, not the identity of any peers you're connected to.
That being said, just build the sender information into the message. This should be trivial to do (either with a UUID or a specific name per client), as sketched below.
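For example, a minimal pyzmq sketch where each client generates its own UID and prepends it to every request (the address and frame layout are invented for the example):

import uuid
import zmq

ctx = zmq.Context.instance()

def client():
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://localhost:5555")
    uid = uuid.uuid4().hex.encode()       # generated once per client
    req.send_multipart([uid, b"hello"])   # uid travels with every request
    print(req.recv())

def server():
    rep = ctx.socket(zmq.REP)
    rep.bind("tcp://*:5555")
    while True:
        uid, payload = rep.recv_multipart()  # sender id is the first frame
        # comparing uid across messages tells us if two requests
        # come from the same user
        rep.send(b"ok " + uid)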
How can a client both subscribe and listen to replies with zeromq?
That is, on the client side I'd like to run a loop which only receives messages and selectively sends requests, and on the server side I'd like to publish most of the time, but to sometimes receive requests as well.
It looks like I'll have to have two different sockets - one for each mode of communication. Is it possible to avoid that, and on the server side receive "request notifications" from the socket on a zeromq callback thread while pushing messages to the socket from my own thread?
I am awfully new to ZeroMQ, so I'm not sure if what you want is considered best-practice or not. However, a solution using multiple sockets is pretty simple using zmq_poll.
The basic idea would be to have both client and server:
open a socket for pub/sub
open a socket for req/rep
multiplex sends and receives between the two sockets using zmq_poll in an infinite loop
process req/rep and pub/sub events within the loop as they occur
Using zmq_poll in this manner with multiple sockets is nice because it avoids threads altogether. The 0MQ guide has a good example here. Note that in that example, they use a timeout of -1 in zmq_poll, which causes it to block until at least one event occurs on any of the multiplexed sockets, but it's pretty common to use a timeout of x milliseconds or something if your loop needs to do some other work as well.
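Here is a minimal client-side sketch of that loop in pyzmq (the addresses, payloads, and the 1-second timeout are arbitrary choices for the example):

import zmq

ctx = zmq.Context.instance()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"")    # subscribe to every topic

req = ctx.socket(zmq.REQ)
req.connect("tcp://localhost:5555")
req.send(b"initial request")          # a REQ must send before it can receive

poller = zmq.Poller()
poller.register(sub, zmq.POLLIN)
poller.register(req, zmq.POLLIN)

while True:
    # 1000 ms timeout; pass None to block until an event occurs
    events = dict(poller.poll(1000))
    if sub in events:
        print("publication:", sub.recv())
    if req in events:
        print("reply:", req.recv())
        req.send(b"next request")     # keep the REQ send/recv lockstep
    # other periodic work can run here when the poll times out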
You can use 2 threads to handle the different sockets. The challenge is that if you need to share data between threads, you need to synchronize it in a safe way.
The alternative is to use the ZeroMQ Poller to select the sockets that have new data on them. The process would then use a single loop in the way bjlaub explained.
This could be accomplished using a variation/subset of the Majordomo Protocol. Here's the idea:
Your server will be a router socket, and your clients will be dealer sockets. Upon connecting to the server, the client needs to send some kind of subscription or "hello" message (of your design). The server receives that packet, but (being a router socket) also receives the ID of that client. When the server needs to send something to that client (through your design), it sends it to that ID. The client can send and receive at will, since it is a dealer socket.
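A rough pyzmq sketch of that hello-then-push idea; the HELLO message, addresses, and push payloads are all of my own design, as the answer says they would be:

import zmq

ctx = zmq.Context.instance()

# server: the ROUTER remembers the identity of every client that says hello
server = ctx.socket(zmq.ROUTER)
server.bind("tcp://*:5555")
known = set()

def pump_server():
    while True:
        # a DEALER's message arrives here as [identity, payload]
        identity, payload = server.recv_multipart()
        if payload == b"HELLO":       # the subscription message of our design
            known.add(identity)
        for peer in known:
            server.send_multipart([peer, b"push!"])  # push at will, by id

# client: a DEALER can send and receive freely, with no REQ lockstep
def run_client():
    client = ctx.socket(zmq.DEALER)
    client.connect("tcp://localhost:5555")
    client.send(b"HELLO")
    while True:
        print("got:", client.recv())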
I've tried to develop a GSM modem library for handling SMS, built around System.IO.Ports.SerialPort.
It doesn't handle unsolicited responses very well, in particular incoming calls.
I have resorted to sending AT hangup commands for each incoming call; however, the unsolicited responses still pop up even while you are performing other tasks.
This makes it quite difficult to handle correctly.
You probably want a separate thread that acts as a session handler, with a message-queue interface towards the rest of your app. It should wait on input from either your application (to initiate a session) or your modem (incoming calls). While it's rebuffing an incoming call, session-initiation requests from your application can wait in the queue.
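As a rough illustration of that design (in Python rather than .NET, with all names invented and the call handling reduced to the bare minimum; RING is the modem's unsolicited incoming-call code and ATH the hangup command):

import queue
import threading

# single queue feeding the session-handler thread; both the app and the
# serial-port reader put work on it, so only one thread talks to the modem
inbox = queue.Queue()

def on_modem_line(line):
    # called from the serial-port read loop for every line the modem emits
    inbox.put(("modem", line))

def start_session(command):
    # called by the rest of the app to initiate a session
    inbox.put(("app", command))

def session_handler(send_to_modem):
    while True:
        source, data = inbox.get()
        if source == "modem" and data.startswith("RING"):
            send_to_modem("ATH")  # hang up; queued app requests simply wait
        elif source == "app":
            send_to_modem(data)   # safe: the modem is owned by this thread

threading.Thread(target=session_handler, args=(print,), daemon=True).start()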