Enabling socket.io in Sails to run on multiple ports - socket.io

I have to set up a Sails app where I can have socket.io connections on multiple ports - for example, authentication on port 3999 and data synchronization on port 4999.
Is there any way to do so?

I asked a similar question yesterday, and it seems yours is close to mine; here's what I'm going to implement.
Given that you will have multiple instances working on different ports, they won't be able to talk to each other directly, and that breaks websocket functionality.
It seems there are multiple solutions to this (sticky sessions vs. using the pub/sub functionality of Redis); I chose Redis. There's a module for that called socket.io-redis. You also need the companion emitter module.
If you choose that route, then no matter how many servers (multiple servers with multiple instances) or how many instances on a single server you run your app on, it will function without a problem thanks to Redis.
At least that's what I know for now; I've been searching for a few days but haven't tried it yet.
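If you do go the Redis route, a minimal sketch of wiring up the adapter might look roughly like this (the port number and the Redis address are placeholders, not values from your setup):

// hypothetical example: attach the Redis adapter so multiple instances share events
const io = require('socket.io')(3999);            // one instance, e.g. authentication
const redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({ host: 'localhost', port: 6379 })); // point every instance at the same Redis

io.on('connection', (socket) => {
  // io.emit() now reaches clients connected to any instance using the same Redis
  io.emit('announce', 'visible across all instances');
});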
Not to mention, you can use Nginx for load balancing, like below (copied from the socket.io docs):
upstream io_nodes {
ip_hash;
server 127.0.0.1:6001;
server 127.0.0.1:6002;
server 127.0.0.1:6003;
server 127.0.0.1:6004;
}

Related

Using Laravel Events without Pusher nor Redis?

I am surprised that I need third-party services such as Pusher or Redis to have bidirectional communication from my server to my clients through WebSockets.
What are the advantages of Pusher over Redis, or over simply running a socket.io server behind Nginx? I see many disadvantages:
Relies on a third-party service
Pricey above 200k messages a day
Cannot work on a LAN without Internet access
From my understanding, there are only two possible solutions with Laravel:
Laravel Echo + Redis
Pusher
Laravel Websockets
Pusher Php Server
Is there a third alternative?
There is a clone of the Pusher server available for Laravel; have you checked it?
https://beyondco.de/docs/laravel-websockets/getting-started/introduction
You can use this on a LAN. It runs a PHP socket server on some port, like 5000. Just use Laravel Echo, or the Pusher SDK for mobile apps, and connect it to your server on port 5000.
You don't have to pay anyone; it runs a clone of the Pusher server on your own server.
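For the web client side, a minimal Laravel Echo configuration pointed at a self-hosted websockets server could look roughly like this (the app key is a placeholder, port 5000 matches the example above, and these options come from the laravel-websockets documentation, so check them against your version):

import Echo from 'laravel-echo';
window.Pusher = require('pusher-js');

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'local-app-key',              // placeholder app key
    wsHost: window.location.hostname,  // your own server, not Pusher's
    wsPort: 5000,                      // the port the socket server listens on
    forceTLS: false,
    disableStats: true,                // don't report stats to Pusher
});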
The benefits of using a third-party solution differ per use case and per person. However, broadly speaking, there are a couple of benefits that haven't been mentioned here that are worth highlighting:
Hosted solutions do not require you to implement your own infrastructure to manage the websocket connections. This means you don't need to worry about the uptime, security, provisioning or maintenance of the infrastructure; that is done for you.
Hosted solutions scale seamlessly. As your app user base grows and your connections grow, you no longer need to provision more infrastructure and load balance/route connections.
Hosted solutions such as Pusher have dedicated support teams to help during implementation/troubleshooting.
Hosted solutions often have round-the-clock server monitoring, ensuring the platform is available 24/7 without the need for you to respond to server alarms in the early hours.
A lot has been said about build vs buy over the years, and there are many resources that discuss the merits of both (in fact Pusher has a resource for this). Ultimately this is not a decision that can be made for you, you will need to assess your application requirements and then look at what best fits your use case.

Laravel server to client communication

I need to implement a server to client communication. Long polling sounds like a non-optimal solution. Sockets would be great. I'm looking at this package:
https://github.com/beyondcode/laravel-websockets
My server is running on AWS Elastic Beanstalk (with a second worker environment for queue and cron).
Does anyone have experience setting up a socket connection in Elastic Beanstalk? In particular, how can I start up a socket server using ebextensions (or any way at all)? It looks like I should be using Supervisor for the server.
Should this server live in the worker environment? Can it? I don't know much about the moving parts here. Anything is helpful :)
Laravel comes with an out-of-the-box broadcasting tool: Laravel Echo.
You can either use it with a local instance (by deploying Redis, for example), or you can use an API or an external tool (Socket.IO, Pusher, ...).
Take a look at the documentation: https://laravel.com/docs/5.8/broadcasting
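As for keeping the socket server itself running, one approach (just a sketch under assumptions, not something covered by that documentation) is a Supervisor program entry along these lines; the path, the user, and the choice of laravel-websockets with its websockets:serve command are assumptions on my part and would need adjusting to your Beanstalk environment:

# hypothetical Supervisor program entry (e.g. installed via an .ebextensions config)
[program:laravel-websockets]
command=php /var/www/html/artisan websockets:serve --port=6001
autostart=true
autorestart=true
user=webapp
stdout_logfile=/var/log/laravel-websockets.log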

Socket.io what if there are two applications on the same server IP?

When we connect to socket.io, we have to define the server IP, or leave it blank if the files are hosted on the same server.
Each emit we fire will be sent to every socket connection.
If we have two applications on the same server, all of the emits from app1 will be emitted in app2, and vice versa.
How can we avoid this?
It depends upon what you mean by "two applications". If what you mean is two connections to the same socket.io server, then yes, io.emit() is purposely designed to send to all connections to the current server.
If you have two separate socket.io servers on the same host, then those socket.io servers must be on separate ports (you can't have two actual servers on the same port) and when you io.emit() to one it will have nothing to do with the other because the io objects for the two servers will be completely different objects that are attached to completely different servers.
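For illustration, here is a rough sketch of two completely independent socket.io servers on the same host but on different ports (the port numbers are just examples, borrowed from the question at the top of this page); an emit on one never reaches clients of the other:

// two standalone socket.io servers on one host, on separate ports
const authIo = require('socket.io')(3999); // e.g. an authentication app
const dataIo = require('socket.io')(4999); // e.g. a data synchronization app

authIo.on('connection', (socket) => {
  authIo.emit('notice', 'only clients connected to port 3999 receive this');
});

dataIo.on('connection', (socket) => {
  dataIo.emit('notice', 'only clients connected to port 4999 receive this');
});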
So, it really depends upon how you have things configured on the host. If you show your actual server-side code for your two servers, we could answer much more specifically.
If you just have one socket.io server and you're looking for ways to send a message to a group of connected sockets, you can either use namespaces or rooms. A namespace is something a client connects to. A room is something a server puts a connection into with .join(). You can then .emit() to either a namespace or a room and it will send to all sockets in that collection.
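To make the namespace/room distinction concrete, a minimal sketch on a single server might look like this (the namespace, room, and event names are made up for the example):

const io = require('socket.io')(3000);

// a namespace: clients opt in by connecting to it, e.g. io('/app1') in the browser
const app1 = io.of('/app1');

app1.on('connection', (socket) => {
  // a room: the server puts the connection into it
  socket.join('reports');
});

// send to every socket connected to the /app1 namespace
app1.emit('news', 'all /app1 sockets get this');

// send only to sockets that joined the 'reports' room of that namespace
app1.to('reports').emit('update', 'only the room gets this');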

Do I *really* need RPC and NETBIOS to use transactional NServiceBus queues between local servers and Amazon EC2?

We have been trying - without success - to get transactional message queues working between local servers and our cloud servers up in Amazon EC2.
We're using NServiceBus, and have got the pub/sub examples and various other trivial apps working locally between here and EC2, but trying to spin up the components of our actual application is proving... vexatious.
As far as I can work out, to allow a local server (DYLAN-PC) to send a message transactionally via a queue on an Amazon EC2 instance, I will need to:
Enable NETBIOS name resolution (e.g. via the /etc/lmhosts file) at both ends
Allow RPC connections to be initiated from either end (so open port 135 for RPC plus various other ports)
Configure MSTDC on both systems, enabling remote connections and inbound/outbound connections
Have I missed something? In particular, the requirement to allow NetBIOS in an age where everything (including Active Directory!) runs on DNS seems particularly archaic. Are we doing something stupid trying to use MSMQ between sites like this? This is the first big project where we've tried this kind of distributed architecture, and the deployment/configuration is starting to hurt so much I'm convinced we've taken a wrong turn somewhere... a little perspective or advice would be gratefully received!
If you're looking to build a geographically distributed system where you can't arrange a VPN between the sites, you should be using the gateway capabilities of NServiceBus to communicate over alternate transports (like HTTP) between those sites.
RPC is required for reading from remote queues.
If you push to remote queues and pull from local queues, you won't be using RPC.

OpenFire, HTTP-BIND and performance

I'm looking into getting an Openfire server started and setting up a strophe.js client to connect to it. My concern is that using http-bind might be costly in terms of performance versus making a direct XMPP connection.
Can anyone tell me whether my concern is relevant or not? And if so, to what extent?
The alternative would be to use a flash proxy for all communication with OpenFire.
Thank you
BOSH is more verbose than normal XMPP, especially when idle. An idle BOSH connection might be about 2 HTTP requests per minute, while a normal connection can sit idle for hours or even days without sending a single packet (in theory, in practice you'll have pings and keepalives to combat NATs and broken firewalls).
But, the only real way to know is to benchmark. Depending on your use case, and what your clients are (will be) doing, the difference might be negligible, or not.
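For what it's worth, a minimal strophe.js BOSH connection for benchmarking against Openfire might look like this (the host name and JID are placeholders; /http-bind/ on port 7070 is Openfire's usual default, but check your own configuration):

// connect to Openfire over BOSH (http-bind) with strophe.js
var connection = new Strophe.Connection('http://example.com:7070/http-bind/');

connection.connect('user@example.com', 'password', function (status) {
  if (status === Strophe.Status.CONNECTED) {
    // connected; send presence so the server marks us online
    connection.send($pres());
  } else if (status === Strophe.Status.DISCONNECTED) {
    console.log('disconnected');
  }
});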
Basics:
Socket - zero overhead.
HTTP - requests even on an idle session.
I doubt that you will have 1M users at once, but if you are aiming for that, then a connection-less protocol like HTTP will be much better, as I'm not sure any OS can support that kind of connected-socket volume.
Also, you can tie your Openfires together to form a farm, and you'll have nice scalability there.
We used Openfire and BOSH with about 400 concurrent users in the same MUC channel.
What we noticed is that Openfire leaks memory. We had about 1.5-2 GB of memory used and got constant out-of-memory exceptions.
Also, the BOSH implementation of Openfire is pretty bad. We then switched to Punjab, which was better but couldn't solve the Openfire issue.
We're now using ejabberd with its built-in http-bind implementation and it scales pretty well. Load on the server running ejabberd is nearly 0.
At the moment we face the problem that our 5 webservers, which we use to handle the chat load, are sometimes overloaded at about 200 connected users.
I'm trying to use websockets now, but it seems that they don't work yet.
Maybe redirecting http-bind not via an Apache rewrite rule but directly through a load balancer/proxy would solve the issue, but I couldn't find a way to do this at the moment.
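In case it helps anyone trying that route, a bare-bones Nginx proxy block for http-bind might look like the following; the upstream address and port 5280 (ejabberd's usual http-bind default) are assumptions to adapt to your setup:

# hypothetical nginx location that proxies BOSH traffic straight to ejabberd
location /http-bind/ {
    proxy_pass http://127.0.0.1:5280/http-bind/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_buffering off;   # long-held BOSH requests should not be buffered
}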
Hope this helps.
I ended up using node.js and http://code.google.com/p/node-xmpp-bosh, as I faced some difficulties connecting directly to Openfire via BOSH.
I have a production site running with node.js configured to proxy all BOSH requests, and it works like a charm (around 50 concurrent users). The only downside so far: in the Openfire admin console you will not see the actual IP addresses of the connected clients; only the local server address will show up, since Openfire gets the connection from the node.js server.
