RabbitMQ with Websocket and Gevent

I'm looking to develop a realtime API for my web application using websockets. I'm using RabbitMQ as the broker, my backend is Python based (gevent + websocket), and Pika/Puka is my RabbitMQ client.
The problem I'm facing is how to connect the websocket with RabbitMQ. After the initial websocket connection is established, the socket object waits for new messages from the client, while in the case of RabbitMQ we need to set up a consumer, which processes a message when it receives one. We can look at it this way:
Clients establish connections with the server via full-duplex websocket.
All clients should act as RabbitMQ consumers after the initial websocket handshake, so they all get updates when any client publishes a message.
When a new message arrives over the websocket, that client sends it to RabbitMQ, so at that moment the client acts as a publisher.
The problem is that the websocket waits for a new message while the RabbitMQ consumer waits for a new message on its channel, and I have failed to link these two loops.
I'm not sure whether this is the wrong approach, and I'm unable to find a way to implement this scenario. If I'm going the wrong way, or if there is an alternate method, please help me fix this.
Thank you,
Haridas N.

I implemented a similar requirement with Tornado + websocket + RabbitMQ + Pika.
I think this is an already known method. Here is my git repo for this web chat application:
https://github.com/haridas/RabbitChat
It seems very difficult to do the same thing with gevent/twisted because the RabbitMQ clients don't support the gevent/twisted event loops.
Pika has a Tornado adapter, which makes this easy to set up. The Pika development team is working on a Twisted adapter as well; I hope they will release it soon.
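For reference, here is a minimal sketch of consuming with the Tornado adapter in a recent Pika; the queue name 'chat' and the callbacks are illustrative, not taken from the repo above:

    import pika
    from pika.adapters.tornado_connection import TornadoConnection

    def on_message(channel, method, properties, body):
        # Forward `body` to the connected websocket clients here.
        channel.basic_ack(method.delivery_tag)

    def on_channel_open(channel):
        channel.basic_consume('chat', on_message)

    def on_connection_open(connection):
        connection.channel(on_open_callback=on_channel_open)

    connection = TornadoConnection(pika.ConnectionParameters('localhost'),
                                   on_open_callback=on_connection_open)
    connection.ioloop.start()  # shares Tornado's IOLoop with the websocket handlers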
Thanks,
Haridas N.
http://haridas.in.

A simple solution would be to use gevent.queue.Queue instances for inter-greenlet communication.
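For example, one greenlet can run a blocking Pika consumer (gevent's monkey patching makes its socket cooperative) and hand messages to websocket greenlets through a Queue. Here is a minimal sketch under that assumption; the queue name 'updates' and the handler names are illustrative, and note that a single shared Queue delivers each message to only one greenlet, so for fan-out to many clients you would keep one Queue per client:

    from gevent import monkey; monkey.patch_all()
    import gevent
    from gevent.queue import Queue
    import pika

    outgoing = Queue()  # bridges the consumer greenlet and a websocket greenlet

    def rabbit_consumer():
        # Blocking Pika calls cooperate with other greenlets once the
        # standard library sockets are monkey-patched.
        conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        channel = conn.channel()
        channel.queue_declare(queue='updates')
        for method, properties, body in channel.consume('updates'):
            outgoing.put(body)
            channel.basic_ack(method.delivery_tag)

    def ws_handler(ws):
        # Runs once per connected client; get() blocks only this greenlet.
        while True:
            ws.send(outgoing.get())

    gevent.spawn(rabbit_consumer)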

Related

ActiveMQ - Stomp over websockets - Same Origin Policy

I have a process that runs in California that wants to talk to a process in New York, using Stomp over Websockets.
Also note that my process is not a web app; I implemented a Stomp-over-websocket client in C++ in order to connect things up to my backend. Maybe this was or wasn't a good idea. So, I want my client to talk to the server and subscribe to the destination where the other process pushes messages.
I was implementing my own server when I saw that ActiveMQ supported Stomp over Websockets, so I started reading the docs.
The last line under 'Configuration' at http://activemq.apache.org/websockets says:
"One thing worth noting is that web sockets (just as Ajax) implements the same origin policy, so you can access only brokers running on the same host as the web application running the client."
The same statement appears in several related pages, such as http://sensatic.net/activemq/activemq-54-stomp-over-web-sockets.html
Is this a limitation of the server or the web client?
With that limitation, if I understand right, the server is not going to accept websocket connections from a client, of any kind, that is not on the same machine?
I am not sure I see the point of that...
If that is indeed its meaning, then how do I get around it in order to implement my scenario?
I've not found the bit of documentation you are referring to, but from what I know of the STOMP implementation on the broker this seems incorrect. By default there shouldn't be any limit on the transport connector accepting connect requests from an outside host, and I don't think the browser applies the same-origin policy to websocket requests the way it does to things like Ajax requests.
This is probably a case that is best checked by actually trying it. I've connected just fine from outside the same host using AMQP over websockets on ActiveMQ, so I'd guess the STOMP stack should also work fine.
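As a quick way to try it, here is a rough sketch of a cross-origin connectivity test using the websocket-client Python package; the host is a placeholder, and 61614 is assumed to be the broker's ws:// transport port:

    import websocket  # pip install websocket-client

    ws = websocket.create_connection("ws://broker-host:61614",
                                     subprotocols=["stomp"])
    # A STOMP frame is the command, headers, a blank line, then a NUL byte.
    ws.send("CONNECT\naccept-version:1.2\nhost:broker-host\n\n\x00")
    print(ws.recv())  # a CONNECTED frame back means the connection was accepted
    ws.close()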

Spring websockets + Amazon MQ limitations

We want to use spring websockets + STOMP + Amazon MQ as a full-featured message broker. We were benchmarking to find out how many client websocket connections a single Tomcat node can handle, but it appears that we hit the Amazon MQ connection limit first. Per the AWS documentation, Amazon MQ has a limit of 1000 connections per node (as far as I understand we can ask support to increase the limit, but I doubt it can be raised by much). So my questions are:
1) Am I correct in assuming that for every websocket connection from a client to the spring/tomcat server, a corresponding connection is opened from the server to the broker? Is this correct behavior, or are we doing something wrong here/missing something?
2) What can be done here? I don't think it's a good idea to create a broker node per every 1000 users.
According to https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/simp/stomp/StompBrokerRelayMessageHandler.html you are doing everything right, and it is documented behavior.
Quote from javadoc:
For each new CONNECT message, an independent TCP connection to the broker is opened and used exclusively for all messages from the client that originated the CONNECT message. Messages from the same client are identified through the session id message header. Reversely, when the STOMP broker sends messages back on the TCP connection, those messages are enriched with the session id of the client and sent back downstream through the MessageChannel provided to the constructor.
As for a fix, you could write your own message broker relay with TCP connection pooling.

Connect/disconnect from ActiveMQ topic on camel websocket connection/disconnection

I've got the following camel route which listens for messages on an ActiveMQ topic and immediately sends them to all connected web socket clients. This is working fine, but the connection to the topic is made as soon as the route builder is initialised.
from("activemq:topic:mytopic").routeId("routeid").to("websocket://test?sendToAll=true");
What I need is to only connect to the topic when one or more clients are connected to the web socket. Once there are no more connections I want to stop listening on the topic. Is this possible?
As far as I know, there is no built-in way to do this. The only way it can be achieved is to override the Jetty websocket code; once you do that, you get the flexibility to run your own custom code when a websocket opens and closes.
Maintain a list of all websocket clients in the open handler and remove clients from it in the close handler, so you know how many are connected or disconnected. Alternatively, keep a counter incremented on open and decremented on close.
Once all websocket clients have disconnected, suspend the route so that your messages stay in the topic or queue.
When any client connects to the websocket, resume the route so that messages reach the connected clients.

Using ZeroMQ to send replies to specific clients and queue if client disconnects

I'm new to ZeroMQ and trying to figure out a design issue. My scenario is that I have one or more clients sending requests to a single server. The server will process the requests, do some stuff, and send a reply to the client. There are two conditions:
The replies must go to the clients that sent the request.
If the client disconnects, the server should queue messages for a period of time so that if the client reconnects, it can receive the messages it missed.
I am having a difficult time figuring out the simplest way to implement this.
Things I've tried:
PUB/SUB - I could tag replies with topics to ensure only the subscribers that sent their request (with their topic as their identifier) would receive the correct reply. This takes care of the routing issue, but since the publisher is unaware of the subscribers, it knows nothing about clients that disconnect.
PUSH/PULL - Seems to be able to handle the message queuing issue, but looks like it won't support my plan of having messages sent to specific clients (based on their ID, for example).
ROUTER/DEALER - Design seemed like the solution to both, but all of the examples seem pretty complex.
My thinking right now is to continue with PUB/SUB and implement some sort of heartbeat on the client end (allowing the server to detect the client's presence); when a client no longer sends heartbeats, the server stops sending messages tagged with its topic. But that seems sub-optimal and would also require another socket.
Are there any ideas or suggestions on any other ways I might go about implementing this? Any info would be greatly appreciated. I'm working in Python but any language is fine.
To prepare the best proposal for your solution I would need more data about your application requirements, but I have done a little research on your conditions and combined it with my experience with ZMQ. Here are two possibilities:
1) PUSH/PULL pattern in both directions: a bigger impact on scalability, but messages from the server will be cached.
The server has one PULL socket on which it registers each client and receives all messages from clients. Each message should carry a client ID so the server knows where to send the response.
For each client the server creates a PUSH socket to send responses; the socket configuration is sent in the registration message. You can also use the REQ/REP pattern for registering clients (assigning socket numbers).
Each client has its own PULL socket, whose configuration is sent to the server in the registration message.
This means that a server with three clients requires (example port numbers in []):
server: 1 x PULL[5555] socket, 3 x PUSH[5560,5561,5562] sockets (+ optionally 1 x REP[5556] socket for registrations, but I think it depends on how you prepare client identity)
client: 1 x PUSH[5555] socket, 1 x PULL[5560|5561|5562] (one per client) (+ optionally 1 x REQ[5556])
You have to connect the server to multiple client sockets to send responses, but if a client disconnects its messages will not be lost; the client will get them when it reconnects to its PULL socket. The disadvantage is the need to create several PUSH sockets on the server side (one per client).
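A rough pyzmq sketch of option 1's server side under these assumptions (ports 5555 and 5560+ as above; the two-frame message layout and in-band registration are illustrative choices, not the only way to do it):

    import zmq

    ctx = zmq.Context()
    pull = ctx.socket(zmq.PULL)        # all client requests arrive here
    pull.bind("tcp://*:5555")

    push_by_client = {}                # client ID -> dedicated PUSH socket
    next_port = 5560

    while True:
        client_id, request = pull.recv_multipart()  # client sends [id, payload]
        if client_id not in push_by_client:
            # First message from this client registers it: bind a PUSH
            # socket on the next free port; the client connects its PULL there.
            sock = ctx.socket(zmq.PUSH)
            sock.bind("tcp://*:%d" % next_port)
            push_by_client[client_id] = sock
            next_port += 1
        # The reply is queued on the client's dedicated socket, so a client
        # that reconnects its PULL socket can still pick it up.
        push_by_client[client_id].send(b"processed: " + request)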
2) PUB/SUB + PUSH/PULL or REQ/REP: a static socket configuration on the server side (only 2), but the server has to provide some mechanism for retransmitting or caching messages.
The server creates a PUB socket and a PULL or REP socket. A client registers its identity via the PULL or REP socket, and the server then publishes all messages for that client using the identity as the filter. The server uses the monitor() function on the PUB socket to count connected and disconnected clients (events: 'accept' and 'disconnect'). After a 'disconnect' event the server publishes a message asking all clients to register again; for clients which do not re-register, the server stops publishing messages.
Each client creates a SUB socket, plus a PUSH or REQ socket to register and send requests.
This solution may require some cache on the server side, and the client could confirm each message after getting it from its SUB socket. It is more complicated and has to be matched against your requirements. If you just want to know that a client lost messages, the client could send the timestamp of the last message it received when it registers. If you need a guarantee that clients get all messages, you need some cache implementation, for example another process which subscribes to all messages and deletes each one once a client confirms it.
In this solution, a server with three clients requires (example port numbers in []):
server: 1 x PUB[5555] socket, 1 x REP or PULL[5560] socket, + monitoring of the PUB socket
client: 1 x SUB[5555] socket with its own identity as the filter, 1 x REQ or PUSH[5560] socket
You can read about monitoring here: https://github.com/JustinTulloss/zeromq.node#monitoring (a Node.js implementation, but the Python one is similar)
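In Python (pyzmq), the monitoring part of option 2 could look roughly like this; the sketch only counts connections, and the re-registration logic is left as a comment:

    import zmq
    from zmq.utils.monitor import recv_monitor_message

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5555")

    # Ask libzmq to report accept/disconnect events on an inproc monitor socket.
    monitor = pub.get_monitor_socket(zmq.EVENT_ACCEPTED | zmq.EVENT_DISCONNECTED)

    connected = 0
    while True:
        evt = recv_monitor_message(monitor)
        if evt['event'] == zmq.EVENT_ACCEPTED:
            connected += 1
        elif evt['event'] == zmq.EVENT_DISCONNECTED:
            connected -= 1
            # Here the server would publish a "register again" notice and
            # stop publishing for clients that do not re-register.
        print("connected clients:", connected)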
I have thought about other patterns, but I am not sure that ROUTER/DEALER or REQ/REP would cover your requirements. You should read more about the patterns, because each of them suits particular solutions. Look here:
official ZMQ guide (a lot of examples and pictures)
easy ROUTER/DEALER example: http://blog.scottlogic.com/2015/03/20/ZeroMQ-Quick-Intro.html

PubSub: Recommended way to save messages using Autobahn Python/WAMP

I am using Autobahn to broadcast messages to subscribed clients. However, when a client is NOT connected to the Internet, it still needs to receive the missed messages when it reconnects. Will I need to use something like RabbitMQ to accomplish this, or can Autobahn handle it natively?
AutobahnPython does not persist messages. Retrieval of message history is an upcoming feature in WAMPv2, and a broker with message persistence will be available as part of Crossbar.io.
Disclosure: I am the original author of Autobahn, WAMP and Crossbar.io, and work for Tavendo.
