When two servers are connected to the same socket.io Redis adapter, do both of them get messages from the client at the same time? - socket.io

I have two servers, server-a and server-b.
To use socket.io across both, the two servers share a Redis adapter, and the client can connect to either server-a or server-b.
Now the question is: if the client is connected to server-a and emits a message, does server-b have any way to get that message?
The client code:
io.emit('sendMessage', myMessage)
The Server-a Code:
io.on('sendMessage', function () {
  console.log('Server A got the message')
})
The Server-b Code:
io.on('sendMessage', function () {
  console.log('Server B got the message')
})
The client is connected only to server-a; server-a and server-b use the same Redis adapter.
The question is: when the client emits a message, will server-b get it? (server-b is only connected to the same Redis.)
What I want to do: I have several servers that should perform an action based on a client request. When a client requests something, all the servers need to start working. I thought of doing this with socket.io, keeping a single connection between the client and one of the servers.
All the servers would then use socket.io to get the same message from the client.

If you are using the redis adapter properly with all your servers, then when you do something like:
io.emit('sendMessage', myMessage)
from any one of your servers, then that message will end up being sent to all the clients connected to all your servers. What happens internally is that the message is sent to a redis channel which all the servers are listening to. When each server gets the message, it broadcasts to its own connected users, but these last steps are handled transparently for you by the redis adapter and redis store.
So, io.emit() is used to send to all connected clients (which uses all the servers in order to carry out the broadcast). It is not used to broadcast the same message directly to all your servers so that they can each manually process that message.
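The question's code is Node, but the same behaviour is easy to see in python-socketio, the Python port of the library; here is an illustrative sketch assuming a Redis instance on localhost:

import socketio

# Both server processes point at the same Redis instance; the manager
# relays every emit between them over a Redis channel.
mgr = socketio.RedisManager('redis://localhost:6379/0')
sio = socketio.Server(client_manager=mgr)

@sio.on('sendMessage')
def send_message(sid, data):
    # This handler fires only on the server the client is connected to,
    # but the emit below reaches clients attached to every server.
    sio.emit('sendMessage', data)

Run the same code as server-a and server-b: a 'sendMessage' from a client triggers the handler only on the server holding that connection, yet the re-emit is delivered to clients on both.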
To send a message to each of your servers, you could use your own custom redis publish/subscribe channel, since each server is already connected to redis and this is something redis is good at.
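A minimal sketch of that custom channel in Python with redis-py (the channel name and JSON envelope are invented for the example):

import json
import redis

r = redis.Redis(host='localhost', port=6379)

def broadcast_command(command):
    # Called by whichever server received the client's socket.io message;
    # every server (including this one) will see the published copy.
    r.publish('server-commands', json.dumps(command))

def listen_for_commands():
    # Every server runs this loop, so all of them react to each command.
    pubsub = r.pubsub()
    pubsub.subscribe('server-commands')
    for item in pubsub.listen():
        if item['type'] == 'message':
            command = json.loads(item['data'])
            print('got command:', command)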
Or, you could designate one master socket.io server and have all the other servers connect to it with socket.io. Then any server could ask the central server to broadcast a message to all the other servers.

Related

Multiple clients one TCP server select()

I'm a beginner to TCP client-server architecture. I am making a client-server application in C++ and I need the server to be able to accept messages from multiple clients at once. I used this IBM example as my starter for the server.
The client side is irrelevant, but this is my client source.
The problem is with the server side: it allows multiple clients to connect, but not asynchronously, so the server only talks to the second client after the first one finishes. I want the server to watch for messages from both clients.
I tried to read about select() on the Internet, but I couldn't find anything showing how to make the IBM code async. How can I edit the server code to allow the clients to connect and interact at the same time?
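For reference, the core select() pattern is to keep one list holding the listening socket plus every accepted client, and wait on all of them at once. This sketch uses Python, whose select module mirrors the C call; the port is arbitrary:

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('0.0.0.0', 5000))
server.listen()

sockets = [server]                       # listener + every accepted client
while True:
    # Block until at least one socket has something to read.
    readable, _, _ = select.select(sockets, [], [])
    for s in readable:
        if s is server:
            client, _ = s.accept()       # new connection: start watching it
            sockets.append(client)
        else:
            data = s.recv(4096)
            if data:
                print('message:', data)
            else:                        # empty read means the client closed
                sockets.remove(s)
                s.close()

In C++ the structure is the same: add each accepted descriptor to the fd_set, call select() inside the loop, and recv() only from the descriptors that select() reports as readable.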

Do I need to run Mosquitto to interact with a remote mosquitto broker

I am new to mqtt and would like to get my head around something.
I need to get messages from (subscribe to) topics from a remote mosquitto broker. The documentation for the service says I need to run a mosquitto broker on my server.
If I understand correctly, then a script that uses the mqtt gem and manages to connect using something like this:
MQTT::Client.connect(conn_opts) do |c|
  # The block will be called when messages arrive on the topic
  c.get('test') do |topic, message|
    puts "#{topic}: #{message}"
  end
end
IS a broker? Do I need to run mosquitto on my machine, or can I get away with just a script and the mqtt gem?
The doc describes the architecture and includes these lines:
The 3rd party platform needs an MQTT broker installed that will allow communication with the different boxes on our servers. The broker on our servers will initiate the connection and provide the credentials to allow bidirectional communication.
The architecture I have in mind is a scheduled background process, using ruby-mqtt, that will spawn, connect to the remote mosquitto server, and pull down new messages in batches before finishing. Does this sound like a reasonable approach for getting messages from a remote mosquitto broker?
I have a sneaking suspicion there is something I am not getting... any help/direction will be appreciated. Thanks!
No, you do not need a local MQTT server, you can connect directly to the remote server from your ruby script.
It is typical to keep the MQTT client running all the time rather than just downloading periodically using cron, although I imagine that could work, provided you use QoS 1/2 and disable clean sessions so that messages are retained on the remote server. Despite its name, MQTT is not a message queuing protocol; it is a publish/subscribe protocol, so it is possible that the remote server will not allow you to build up a large pool of messages.
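To illustrate connecting directly with a persistent session, here is a rough sketch using Python's paho-mqtt (1.x API) rather than ruby-mqtt; the host, credentials, and client id are placeholders:

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # QoS 1 plus a non-clean session makes the broker queue
    # messages published while this client is offline.
    client.subscribe('test', qos=1)

def on_message(client, userdata, msg):
    print('%s: %s' % (msg.topic, msg.payload.decode()))

# clean_session=False asks the broker to remember the subscription
# (and any queued QoS 1/2 messages) between connections.
client = mqtt.Client(client_id='my-batch-client', clean_session=False)
client.on_connect = on_connect
client.on_message = on_message
client.username_pw_set('user', 'password')      # placeholder credentials
client.connect('broker.example.com', 1883)
client.loop_forever()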
It may however be desirable to have a local MQTT server (eg mosquitto):
* Your local MQTT server could deal with storing messages to disk until ruby is ready for them
* It allows multiple local clients to receive the same message without the remote server having to send it over the network multiple times
* Multiple local clients can send messages to each other, even when the remote network is down
Also be warned that ruby-mqtt doesn't support QoS 1 properly yet, and also doesn't support message persistence or automatic reconnects, so a local mosquitto instance could solve some of those problems for you.

Using ZeroMQ to send replies to specific clients and queue if client disconnects

I'm new to ZeroMQ and trying to figure out a design issue. My scenario is that I have one or more clients sending requests to a single server. The server will process the requests, do some stuff, and send a reply to the client. There are two conditions:
The replies must go to the clients that sent the request.
If the client disconnects, the server should queue messages for a period of time so that if the client reconnects, it can receive the messages it missed.
I am having a difficult time figuring out the simplest way to implement this.
Things I've tried:
PUB/SUB - I could tag replies with topics to ensure only the subscribers that sent their request (with their topic as their identifier) would receive the correct reply. This takes care of the routing issue, but since the publisher is unaware of the subscribers, it knows nothing about clients that disconnect.
PUSH/PULL - Seems to be able to handle the message queuing issue, but looks like it won't support my plan of having messages sent to specific clients (based on their ID, for example).
ROUTER/DEALER - Design seemed like the solution to both, but all of the examples seem pretty complex.
My thinking right now is to continue with PUB/SUB and implement some sort of heartbeat on the client end (allowing the server to detect the client's presence); when a client no longer sends a heartbeat, the server will stop sending messages tagged with that client's topic. But that seems sub-optimal and would also involve another socket.
Are there any ideas or suggestions on any other ways I might go about implementing this? Any info would be greatly appreciated. I'm working in Python but any language is fine.
To prepare the best proposal for your solution, I would need more data about your application requirements. I have done a little research on your conditions and connected it with my experience with ZMQ; here are two possibilities:
1) A PUSH/PULL pattern in both directions: a bigger impact on scalability, but messages from the server will be cached.
The server has one PULL socket to register each client and receive all messages from clients. Each message should carry a client ID so the server knows where to send the response.
For each client, the server creates a PUSH socket to send responses; the socket configuration is sent in the register message. You could also use a REQ/REP pattern for registering clients (to assign socket numbers).
Each client has its own PULL socket, whose configuration was sent to the server in the register message.
It means that a server with three clients requires (example port numbers in []):
server: 1 x PULL[5555] socket, 3 x PUSH[5560,5561,5562] sockets (+ optionally 1 x REP[5556] socket for registrations, but I think it depends on how you prepare client identity)
client: 1 x PUSH[5555] socket, 1 x PULL[5560|5561|5562] (one per client) (+ optionally 1 x REQ[5556])
You have to connect the server to multiple client sockets to send responses, but if a client disconnects, its messages will not be lost: the client will get them when it reconnects its PULL socket. The disadvantage is the need to create several PUSH sockets on the server side (one per client).
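A stripped-down sketch of this first option in Python with pyzmq, using the example ports above (the registration handshake and message format are simplified for illustration):

import zmq

ctx = zmq.Context()

# One PULL socket receives requests from every client.
requests = ctx.socket(zmq.PULL)
requests.bind('tcp://*:5555')

# One PUSH socket per client, each bound on its own port (5560, 5561, ...).
# A client that reconnects its PULL socket picks up its queued responses.
responses = {}                            # client id -> bound PUSH socket
next_port = 5560

while True:
    msg = requests.recv_json()            # assume {'id': ..., 'payload': ...}
    client_id = msg['id']
    if client_id not in responses:
        push = ctx.socket(zmq.PUSH)
        push.bind('tcp://*:%d' % next_port)
        responses[client_id] = push
        next_port += 1                    # real code would report this port back
    responses[client_id].send_json({'for': client_id, 'reply': 'processed'})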
2) PUB/SUB + PUSH/PULL or REQ/REP: a static socket configuration on the server side (only 2 sockets), but the server has to provide some mechanism for retransmitting or caching messages.
The server creates a PUB socket and a PULL or REP socket. A client registers its identity via the server's PULL or REP socket, and the server publishes all messages for that client using its identity as the filter. The server uses the monitor() function on the PUB socket to count connected and disconnected clients (events: 'accept' and 'disconnect'). After a 'disconnect' event, the server publishes a message asking all clients to register again; for clients which do not re-register, the server stops publishing messages.
The client creates a SUB socket, plus a PUSH or REQ socket to register and send requests.
This solution probably requires some cache on the server side. The client could confirm each message after getting it from the SUB socket. This is more complicated and has to be matched to your requirements: if you just want to know that a client lost a message, the client could send the timestamp of the last message it received from the server during registration; if you need a guarantee that clients get all messages, you need some cache implementation, maybe another process which subscribes to all messages and deletes each one confirmed by a client.
In this solution a server with three clients requires (example port numbers in []):
server: 1 x PUB[5555] socket, 1 x REP or PULL[5560] socket + monitoring of the PUB socket
client: 1 x SUB[5555] socket with its own identity as the filter, 1 x REQ or PUSH[5560] socket
You can read about monitoring here: https://github.com/JustinTulloss/zeromq.node#monitoring (a NodeJS implementation, but Python will be similar).
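In pyzmq the equivalent monitoring hook looks roughly like this (the event constants come from the libzmq monitor API):

import zmq
from zmq.utils.monitor import recv_monitor_message

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind('tcp://*:5555')

# get_monitor_socket() returns a PAIR socket that reports
# connection-level events (accepts, disconnects, ...) on the PUB socket.
monitor = pub.get_monitor_socket()
while monitor.poll():
    event = recv_monitor_message(monitor)
    if event['event'] == zmq.EVENT_ACCEPTED:
        print('client connected')
    elif event['event'] == zmq.EVENT_DISCONNECTED:
        print('client disconnected')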
I thought about other patterns, but I am not sure that ROUTER/DEALER or REQ/REP will cover your requirements. You should read more about the patterns, because each one is better suited to certain solutions. Look here:
official ZMQ guide (a lot of examples and pictures)
easy ROUTER/DEALER example: http://blog.scottlogic.com/2015/03/20/ZeroMQ-Quick-Intro.html

Erlang Pub/Sub using Redis and websockets

My goal is to create an application that I can use to manage pub/sub for various clients. The application should be able to receive new topics via an API and then accept subscribers via a websocket connection.
I have it working, but am aware the current solution has many flaws. It works currently as follows:
I have a chicago_boss app that has a websocket endpoint for clients to connect to; once a client connects, I add the Pid for that websocket connection to a list in Redis.
1. Client connects to "ws://localhost:8001/websocket/game_notifications"
2. The Pid for that websocket connection is added to Redis using LPUSH game_notifications_pids "<0.201.0>"
3. The last 10 messages in Redis for game_notifications are sent to the websocket Pid
4. A new message is posted to "/game_notifications/create"
5. The message is added to Redis using LPUSH game_notifications "new message"
6. All Pids in Redis under the key game_notifications_pids are sent this new message
7. On closing of the websocket, the Pid is deleted from the Redis list
Please let me know what problems people see with this setup? Thanks!

Does websocket only broadcasts the data to all clients connected instead of sending to a particular client?

I am new to websockets. While reading about them, I have not been able to find answers to some of my doubts. I would appreciate it if someone could clarify them.
Does a websocket only broadcast data to all connected clients, rather than sending to a particular client? Whatever examples (mainly chat apps) I tried send data to all the clients. Is it possible to alter this?
How does it work for clients behind NAT (behind a router)?
Since the client-server connection always remains open, how will it affect server performance for a large number of connections?
Since I want all my clients to get real-time updates, all of them need to stay connected to the server, so how should I handle the client connection limit?
NOTE: My client is not a web browser but a desktop application.
No, websockets are not only for broadcasting. You send messages to specific clients; when you broadcast, you just send the same message to all connected clients, but you can send different messages to different clients, for example in a game session.
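As a small illustration, here is a sketch with Python's websockets package (version 11+ handler signature; the id-based registry is invented for the example):

import asyncio
import websockets

clients = {}                             # client id -> open connection

async def handler(ws):
    client_id = await ws.recv()          # first frame identifies the client
    clients[client_id] = ws
    try:
        async for message in ws:
            # Reply to the sender only, not to everyone:
            await ws.send(f'you said: {message}')
            # Or pick out any other single client by id:
            peer = clients.get('some-other-id')
            if peer:
                await peer.send(message)
    finally:
        del clients[client_id]

async def main():
    async with websockets.serve(handler, '0.0.0.0', 8001):
        await asyncio.Future()           # run forever

asyncio.run(main())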
The clients connect to the server and initialise the connections, so NAT is not a problem.
It's good to use a scalable server, e.g. an event-driven server (such as Node.js) that doesn't use a separate thread for each connection, or an Erlang server with lightweight processes (a good choice for a game server).
This should not be a problem if you use a good server OS (e.g. Linux), but it may be a limitation if your server runs a desktop version of Windows (which may be limited to, e.g., 200 connections).
