I am developing a web-based application that needs to use WebSockets so that users can see updates in real time.
However, something concerns me. If too many clients use the application simultaneously, a second server running the same application will be needed so that some of the users can be redirected to it.
How do I make the updates that happen on one of the two servers visible on the other one? Do I need to program a TCP connection between them and have them message each other when an update happens?
If your users are connected to both servers (e.g. some users connected to one server and some connected to the other) and you want to broadcast a message to all connected users from one of the servers, then YES, the server originating the message will need to tell the other server to broadcast the message to all of its connected users. So, YES, the two servers will have to be connected so they can exchange these update commands.
If you had N servers (where N might even vary over time), then you would probably designate one master server that keeps a connection to all the other servers. Then, when any notification needs to be sent to all connected users, a server simply notifies the master server, which notifies all the servers, which then broadcast to all their users. When each server starts up, it just connects to the one master server, and that's all it has to know about.
Of course, if you only have two servers, you don't need the master server concept. Each can just connect to the other.
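A minimal sketch of that hub idea, assuming Node.js with the "ws" package; the port 9000, the host name master.example.local, and the myClients/notifyAll names are all illustrative placeholders, not part of any particular framework:

```js
const WebSocket = require('ws');

// --- On the master/hub server ---
const hub = new WebSocket.Server({ port: 9000 });
hub.on('connection', (serverSocket) => {
  // Relay every update received from one app server to all connected app servers.
  serverSocket.on('message', (msg) => {
    hub.clients.forEach((peer) => {
      if (peer.readyState === WebSocket.OPEN) peer.send(msg.toString());
    });
  });
});

// --- On each app server ---
const toHub = new WebSocket('ws://master.example.local:9000');
toHub.on('message', (msg) => {
  // Re-broadcast the update to this server's own connected clients.
  myClients.forEach((client) => client.send(msg.toString())); // myClients: placeholder for this server's sockets
});

// When an update originates locally, tell the hub so every server hears about it.
function notifyAll(update) {
  toHub.send(JSON.stringify(update));
}
```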
Related
I have WebSocket implemented in a real-time application, where connected clients get all server updates without a page refresh. That's fine and it's working very well. The problem is as follows:
Let's say I use two servers (server1 and server2) to serve client requests. If a client on server1 updates the database, all clients connected to server1 will get the updates, as expected, because server1 is aware of all of its connected clients. However, clients connected to server2 do not get any updates because they are being served by server2, which is not aware of the database updates (the updates were done by a client on server1)!
Is there a standard way of handling this? Also assume I have many servers.
If this has been addressed before, I'd also appreciate a pointer to it. Thanks
Handling DB values and changes should be the responsibility of each instance connected to the DB, whereas sharing updates (whether they require a DB change or not) across the various clients should be the responsibility of the handler. For WebSockets, such updates are usually handled by writing them to a pub/sub channel/queue such as Redis, with all instances subscribed to the appropriate channel. Whenever any instance wants all clients to receive an update, it puts the update on that channel, and all the instances receive it and broadcast it to their clients.
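For illustration, a rough sketch of that pattern using the node-redis v4 client and socket.io; the channel name "updates" and the `io` variable are assumptions made for this example, and socket.io also ships a ready-made @socket.io/redis-adapter that does essentially this behind the scenes for its own emits:

```js
const { createClient } = require('redis');

async function wireUpPubSub(io) {
  const pub = createClient();      // this instance publishes updates here
  const sub = pub.duplicate();     // a subscriber needs its own connection
  await pub.connect();
  await sub.connect();

  // Every instance subscribes; whatever lands on the channel is broadcast
  // to that instance's own connected clients.
  await sub.subscribe('updates', (message) => {
    io.emit('update', JSON.parse(message));
  });

  // Call this from whichever instance handled the change.
  return (payload) => pub.publish('updates', JSON.stringify(payload));
}
```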
When we connect to socket.io, we have to define the server IP, or leave it blank if the files are hosted on the same server.
Each emit we fire is delivered on every socket connection.
If we have two applications on the same server,
all of the emits from app1 will be received in app2 and vice versa.
How to avoid this?
It depends upon what you mean by "two applications". If what you mean is two connections to the same socket.io server, then yes io.emit() is purposely designed to send to all connections to the current server.
If you have two separate socket.io servers on the same host, then those socket.io servers must be on separate ports (you can't have two actual servers on the same port) and when you io.emit() to one it will have nothing to do with the other because the io objects for the two servers will be completely different objects that are attached to completely different servers.
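To make that separation concrete, a small sketch (assuming socket.io v4 on plain Node.js; the ports are arbitrary):

```js
const { Server } = require('socket.io');

const ioA = new Server(3000);   // app1's socket.io server
const ioB = new Server(3001);   // app2's socket.io server

ioA.emit('news', 'only app1 clients see this');   // never reaches ioB's sockets
ioB.emit('news', 'only app2 clients see this');   // never reaches ioA's sockets
```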
So, it really depends upon how you have things configured on the host. If you show your actual server-side code for your two servers, we could answer much more specifically.
If you just have one socket.io server and you're looking for ways to send a message to a group of connected sockets, you can either use namespaces or rooms. A namespace is something a client connects to. A room is something a server puts a connection into with .join(). You can then .emit() to either a namespace or a room and it will send to all sockets in that collection.
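A short sketch of both, assuming socket.io v4; the namespace "/app1" and the room "room42" are just illustrative names:

```js
const { Server } = require('socket.io');
const io = new Server(3000);

// Namespace: the client picks it when connecting, e.g. io("http://host:3000/app1")
const app1 = io.of('/app1');
app1.on('connection', (socket) => {
  // Room: the server decides which room(s) a connection belongs to.
  socket.join('room42');
});

// Emit to every socket in the namespace...
app1.emit('update', { msg: 'for all /app1 sockets' });
// ...or only to the sockets the server put into the room.
app1.to('room42').emit('update', { msg: 'for room42 only' });
```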
I have a TCPListener server based on this source code: https://gist.github.com/leandrosilva/656054#file-server-cs
I created a server on port 3340. Whenever a client connects to the server, the server waits for the next client connection. When I connect to the server from my Chrome browser, it seems there are three clients connected (I expected only one).
Why is that?
Most clients maintain multiple connections in parallel, including more than one connection per server endpoint.
RFC 7230, section 6.4, explains why: multiple connections are typically used to avoid the "head-of-line blocking" problem.
I'm trying to figure out under what conditions I would want to implement a remote queue versus a local one for 2 endpoint applications.
Consider this scenario: App A on Server A needs to send messages to App B on Server B via MQServer1.
It seems like the simplest configuration would be to create a single local queue on MQServer1 and configure AppA to put messages to the local queue while configuring AppB to get messages from the same local queue. Both AppA and AppB would connect to the same Queue Manager but execute different commands.
What sort of circumstances would require the need to install another MQ server (e.g. MQServer2) and configure a remote queue on MQServer1 which instead sends the messages from AppA over a channel to a local queue on MQServer2 to be consumed by AppB?
I believe I understand the benefit of remote queuing, but I'm not sure when it's best used over a simpler design.
Here are some problems with what you call the simpler design that you don't have with remote queuing:
Time Independence - Server 1 has to be available all the time, whereas with a remote queue, once the messages have been moved to Server B, Server A and Server 1 don't need to be online when App B wants to get its messages.
Network Efficiency - with two client applications putting to or getting from a central queue, you have two inefficient network hops, instead of one efficient, batched channel connection from Server A to Server B (no need for Server 1 in the middle).
Network Problems - No network, no messages. When messages are stored locally, any that have already arrived can be processed even while the network is down. Likewise, the application putting messages is not held up by a network problem; the messages sit on the transmission queue ready to be moved, and the application can get on with the next thing.
Of course your applications should be written so that they aren't even aware of the difference, and it's just configuration changes that switch you from one design to the other.
Here we can have a separate queue manager for each application. Application A puts the message on a remote queue defined on its local queue manager, which in turn places it on a transmission queue; the defined channels (these need to be configured on the queue managers) then move it to the local queue for Application B.
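As a rough idea of the configuration involved, a hedged MQSC sketch; the queue manager names QM1/QM2, the queue names, the channel name and the CONNAME are all placeholders, not anything your setup has to use:

```
* On QM1 (Application A's queue manager)
DEFINE QLOCAL(QM2) USAGE(XMITQ)
* Remote queue definition: App A puts to APP.B.REMOTE as if it were local
DEFINE QREMOTE(APP.B.REMOTE) RNAME(APP.B.LOCAL) RQMNAME(QM2) XMITQ(QM2)
DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('serverB(1414)') XMITQ(QM2)

* On QM2 (Application B's queue manager)
DEFINE QLOCAL(APP.B.LOCAL)
DEFINE CHANNEL(QM1.TO.QM2) CHLTYPE(RCVR) TRPTYPE(TCP)
```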
I am new to WebSockets. While reading about WebSockets, I have not been able to find answers to some of my doubts. I would appreciate it if someone could clarify them.
Does WebSocket only broadcast data to all connected clients, rather than sending it to a particular client? Every example I tried (mainly chat apps) sends data to all the clients. Is it possible to change this?
How does it work for clients behind NAT (behind a router)?
Since the client-server connection always remains open, how will a large number of connections affect server performance?
Since I want all my clients to get real-time updates, they all need to stay connected to the server, so how should I handle the client connection limit?
NOTE: My client is not a web browser but a desktop application.
No, WebSocket is not only for broadcasting. You send messages to specific clients; when you broadcast, you just send the same message to all connected clients, but you can send different messages to different clients, for example in a game session.
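Since your client is a desktop application rather than a browser, here is a minimal sketch using the plain "ws" package in Node.js; the playerId field and the map are illustrative bookkeeping, not part of the WebSocket protocol:

```js
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
const byPlayer = new Map();   // remember which socket belongs to which player

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    const msg = JSON.parse(raw);
    if (msg.type === 'hello') byPlayer.set(msg.playerId, socket);
  });
});

// Send to one particular client...
function sendTo(playerId, data) {
  const socket = byPlayer.get(playerId);
  if (socket && socket.readyState === WebSocket.OPEN) socket.send(JSON.stringify(data));
}

// ...or broadcast the same thing to everyone connected.
function broadcast(data) {
  wss.clients.forEach((s) => {
    if (s.readyState === WebSocket.OPEN) s.send(JSON.stringify(data));
  });
}
```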
The clients connect to the server and initialise the connections, so NAT is not a problem.
It's good to use a scalable server, e.g. an event-driven server (such as Node.js) that doesn't use a separate thread for each connection, or an Erlang server with lightweight processes (a good choice for a game server).
This should not be a problem if you use a good server OS (e.g. Linux), but may be a limitation if your server uses a desktop version of Windows (e.g. may be limited to 200 connections).