If I have an app (say, an Express.js app) with a WebSocket server (socket.io) and I want to send a message to a client from a different server app, what is the best way to go about that?
Let's assume that both apps are on a public cloud and running in separate containers or VMs. What's the best way to ensure that the message is routed to the app instance that holds the WebSocket connection to the client?
You can use Redis (via the socket.io Redis adapter) to ensure that the client will get the message no matter which instance of the app sends it.
But if your other app is a completely different app and does not start a socket server, you can still use the socket.io emitter (along with the Redis adapter) to send messages to clients without creating another socket server.
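A minimal sketch of the emitter approach, assuming the socket.io server already runs the Redis adapter, that clients join a room named "user:&lt;id&gt;" (a naming scheme invented here, not part of the question), and that the `redis` and `@socket.io/redis-emitter` npm packages plus a reachable Redis instance are available:

```javascript
// Builds the room name both apps agree on; pure so it's easy to test.
// The "user:<id>" scheme is an assumption for this sketch.
function userRoom(userId) {
  return `user:${userId}`;
}

// Sends to a client from an app that has no socket server of its own.
// Requires the socket.io server to use the Redis adapter, plus the
// `redis` and `@socket.io/redis-emitter` packages and a Redis instance
// at the given URL (all assumptions about your setup).
async function notifyClient(userId, payload, redisUrl = "redis://localhost:6379") {
  const { createClient } = require("redis");
  const { Emitter } = require("@socket.io/redis-emitter");
  const redisClient = createClient({ url: redisUrl });
  await redisClient.connect();
  // Publishes through Redis; whichever socket.io instance holds the
  // client's connection picks it up and delivers it.
  new Emitter(redisClient).to(userRoom(userId)).emit("notification", payload);
  await redisClient.quit();
}
```

The emitter only publishes to Redis, so the sending app never accepts socket connections itself.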
Related
Can the frontend directly subscribe to Redis pub/sub to get messages? Most blogs on the internet say the client has to interact with the backend using WebSockets, and the WebSocket service communicates with Redis. Can the frontend subscribe directly to Redis and get updates without using WebSockets?
We are trying to build a dashboard whose graphs refresh to show correct metrics in real time. Will that work, or does this design have any cons?
The browser (the frontend) is stateless by nature (HTTP is stateless). The instance of the (JavaScript) code that "subscribes" to something effectively goes away when the page reloads. More importantly, a browser cannot open the raw TCP connection that the Redis protocol requires, so the frontend cannot subscribe to Redis pub/sub directly. WebSockets give you a persistent full-duplex communication channel between the browser and the server.
Before WebSockets (and Server-Sent Events), you had to poll the server, i.e. check for messages for your instance/user/etc. in a loop, which wastes requests and CPU cycles. So, yes, you need WebSockets or SSE to do async messaging efficiently in a browser.
I need to add support for instant messages or reminders to my web application. I was reading that this could be accomplished with websockets.
The idea is that while the web app is being used, it could receive messages originating from the server (not as a request response). For example, the server application might want to remind the user about an unpaid service.
As I understand it, when the web app starts, it connects to the WebSocket server through a standard HTTP request (the upgrade handshake) to announce itself as a client. My question is:
"If I have hundreds of clients connected at the same time, how do I call one in particular?"
Do I need to store every websocket object in an array or something so I can use it to send a message when it is required?
What would be the right approach?
Thanks.
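A common approach to the question above is to keep a map from a stable identifier (user id, session id) to the live socket, registered at connection time. A sketch, where `socket` stands for whatever your WebSocket library hands you on connection (assumed to have `send` and an optional `on("close")`):

```javascript
// userId -> live socket for that user's current connection.
const clients = new Map();

function register(userId, socket) {
  clients.set(userId, socket);
  // Clean up when the connection drops so the map doesn't leak.
  if (typeof socket.on === "function") {
    socket.on("close", () => clients.delete(userId));
  }
}

// Returns true if the user was connected and the message was sent.
function sendTo(userId, message) {
  const socket = clients.get(userId);
  if (!socket) return false; // client not currently connected
  socket.send(JSON.stringify(message));
  return true;
}
```

With hundreds of clients a `Map` keyed by id is fine; it is at scale (multiple server instances) that you would move the lookup into something like Redis, as in the first answer above.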
We are currently implementing a simple chat app that allows users to create conversations and exchange messages.
Our basic setup involves AngularJS on the front-end and SignalR hub on the back end. It works like this:
Client app opens a Websockets connection to our real-time service (based on SignalR) and subscribes to chat updates
User starts sending messages. For each new message, client app calls HTTP API to send it
The API stores the message in the database and notifies our real-time service that there is a new message
Real-time service pushes the message via Websockets to subscribed Clients
However, we noticed that opening a new HTTP connection for every message may not be a good idea, so we were wondering whether WebSockets should be used to both send and receive messages.
The new setup would look like this:
Client app opens a Websockets connection with real-time service
User starts sending messages. Client app pushes the messages to real-time service using Websockets
Real-time service picks up the message, notifies our persistence service it needs to be stored, then delivers the message to other subscribed Clients
Persistence service stores the message
Which of these options is more typical when setting up an efficient and performant chat system? Thanks!
You don't need a separate HTTP or Web API to persist messages. Persist them in the hub method that broadcasts the message. Hub methods can be async, so you can await the save before (or while) broadcasting.
Using a separate persistence API and then calling SignalR to broadcast isn't efficient, and why duplicate all the effort?
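The answer above is SignalR-specific, but the shape of the second setup is library-agnostic. A sketch in JavaScript, where `store` (with an async `save`) and `subscribers` (objects with `send`) are stand-ins for your persistence layer and connected clients, not real APIs:

```javascript
// Sketch of option 2: the real-time service's message handler persists
// and then broadcasts, so no extra HTTP round trip is made per message.
async function handleIncomingMessage(store, subscribers, message) {
  // Persist first so an acknowledged message can't be lost on a crash.
  const saved = await store.save(message);
  const data = JSON.stringify(saved);
  for (const client of subscribers) {
    client.send(data);
  }
  return saved;
}
```

Persisting before broadcasting trades a little latency for the guarantee that no client ever sees a message the database doesn't have.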
I'm building an HTTP -> IRC proxy, it receives messages via an HTTP request and should then connect to an IRC server and post them to a channel (chat room).
This is all fairly straightforward, the one issue I have is that a connection to an IRC server is a persistent socket that should ideally be kept open for a reasonable period of time - unlike HTTP requests where a socket is opened and closed for each request (not always true I know). The implication of this is that a message bound for the same IRC server/room must always be sent via the same process (the one that holds a connection to the IRC server).
So I basically need to receive the HTTP request on my web processes, and then have them figure out which specific worker process has an open connection to the IRC server and route the message to that process.
I would prefer to avoid the complexity of a message queue within the IRC proxy app, as we already have one sitting in front of it that sends it the HTTP requests in the first place.
With that in mind, my ideal solution is to have a shared datastore between the web and worker processes, and to have the worker processes maintain a table of all the IRC servers they're connected to. When a web process receives an HTTP request, it could then look up the table to figure out whether there is already a worker with a connection to the required IRC server and forward the message to it; if there is no existing connection, it could effectively act as a load balancer and pick an appropriate worker to forward the message to, so that worker can establish and hold a connection to the IRC server.
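The routing decision described above can be sketched as follows, where `table` stands in for the shared datastore (e.g. a Redis hash) mapping IRC server -> worker id, and `workers` is the list of worker ids to balance over (both assumptions for illustration):

```javascript
// Returns the worker that should handle messages for `ircServer`,
// recording a new assignment when no worker holds a connection yet.
function routeMessage(table, workers, ircServer) {
  const existing = table.get(ircServer);
  if (existing !== undefined) return existing; // reuse the open connection
  // No worker holds a connection yet: pick the least-loaded worker and
  // record it so later messages for this server hit the same process.
  const counts = new Map(workers.map((w) => [w, 0]));
  for (const w of table.values()) {
    if (counts.has(w)) counts.set(w, counts.get(w) + 1);
  }
  const chosen = [...counts.entries()].sort((a, b) => a[1] - b[1])[0][0];
  table.set(ircServer, chosen);
  return chosen;
}
```

In a real deployment the read-check-write on the table would need to be atomic (e.g. a Lua script or `SETNX` in Redis) so two web processes can't assign the same IRC server to different workers.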
Now, to do this, my worker processes would need to be able to start an HTTP server and listen for requests from the web processes. On Heroku I know only web processes are added to the public-facing "routing mesh", which is fine; what I would like to know is: is it possible to send HTTP requests between a web and a worker process internally within Heroku's network (outside of the "routing mesh")?
I will use a message queue if I must, but as I said, I'd like to avoid it.
Thanks!
I've created a console application that listens to a queue using WCF in the past and have no problems with that implementation.
My question:
If, instead of listening to the queue in a console application, I listen to a queue from my website, when would the message be picked up? Would it be instant, as is the case with the console app? Or would the message only be received when someone requests a page on the site?
Regards.
A website is not a good host container for an MSMQ client. The reason is that the app pool unloads during times of low traffic.
So, effectively, you are correct in that you will not consume messages until the app pool is loaded.
However, that does not prevent others from sending you messages: the queue receives messages regardless of whether your client is loaded. They would then be stored until the client comes back to consume them (provided the queues are durable).
A Windows service would be a much more appropriate container.