Laravel server to client communication

I need to implement server-to-client communication. Long polling sounds like a suboptimal solution; sockets would be great. I'm looking at this package:
https://github.com/beyondcode/laravel-websockets
My server is running on AWS Elastic Beanstalk (with a second worker environment for queue and cron).
Does anyone have experience setting up a socket connection on Elastic Beanstalk? In particular, how can I start up a socket server using ebextensions (or any other way)? It looks like I should be using Supervisor to keep the server running.
Should this server live in the worker environment? Can it? I don't know much about the moving parts here. Anything is helpful :)

Laravel comes with an out-of-the-box broadcasting tool: Laravel Echo.
You can either use it with a local instance (by deploying Redis, for example), or use an external service or tool (Socket.IO, Pusher, etc.).
Take a look at the documentation: https://laravel.com/docs/5.8/broadcasting
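For the "local instance with Redis" route, the usual shape (this is essentially what laravel-echo-server automates) is a small Node process that subscribes to the Redis channels Laravel's Redis broadcaster publishes on and relays those events to browsers over Socket.IO. A rough TypeScript sketch; the payload shape, the psubscribe pattern and the port are assumptions you would adjust to your setup:

import { createServer } from "http";
import { Server } from "socket.io";
import Redis from "ioredis";

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: "*" } });

// Subscribe to every channel Laravel's Redis broadcaster publishes on.
// Depending on your Redis prefix configuration the channel names may be prefixed.
const redis = new Redis(); // assumes Redis on localhost:6379
redis.psubscribe("*");

redis.on("pmessage", (_pattern, channel, message) => {
  const payload = JSON.parse(message); // Laravel's broadcaster publishes { event, data, socket }
  // A real relay would track per-channel subscriptions (as laravel-echo-server does);
  // for brevity this fans every event out to all connected clients.
  io.emit(payload.event, { channel, data: payload.data });
});

httpServer.listen(6001); // arbitrary example port

The browser side would then use Laravel Echo with the socket.io connector pointed at that port.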

Related

Using Laravel Events without Pusher nor Redis?

I am surprised that I need third-party services such as Pusher or Redis to have bidirectional communication from my server to my clients through WebSockets.
What are the advantages of Pusher over Redis or simply a socket.io server alongside nginx? I see many disadvantages:
Rely on a third-party service
Pricey above 200k messages a day
Cannot work on LAN without Internet
From my understanding, there are only two possible solutions with Laravel:
Laravel Echo + Redis
Pusher
Laravel Websockets
Pusher PHP Server
Is there a third alternative?
There is a clone of the Pusher server available for Laravel, have you checked it?
https://beyondco.de/docs/laravel-websockets/getting-started/introduction
You can use this on a LAN.
It runs a PHP socket server on some port, e.g. 5000.
Just use Laravel Echo, or the Pusher SDK for mobile apps, and connect it to your server on that port.
You don't have to pay anyone; it runs a clone of the Pusher server on your own machine. A minimal Echo setup is sketched below.
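On the client side, a hedged sketch of what that Echo configuration might look like (TypeScript, using the pusher-js connector; the key, the example port 5000 from above, and the "orders"/"OrderShipped" names are placeholders for illustration, and by default laravel-websockets listens on port 6001):

import Echo from "laravel-echo";
import Pusher from "pusher-js";

// laravel-echo's "pusher" broadcaster expects Pusher on the window object.
(window as any).Pusher = Pusher;

const echo = new Echo({
  broadcaster: "pusher",
  key: "local-app-key",            // must match the app key configured server-side
  wsHost: window.location.hostname,
  wsPort: 5000,                    // the example port used above
  forceTLS: false,
  disableStats: true,              // don't report stats to pusher.com
});

// Listen for a hypothetical OrderShipped event on a public "orders" channel.
echo.channel("orders").listen("OrderShipped", (e: any) => {
  console.log("order shipped:", e);
});

Everything stays on your own server; the Pusher protocol is only used as the wire format.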
The benefits of using a third-party solution differ per use case and per person. However, broadly speaking, there are a couple of benefits that haven't been mentioned here that are worth highlighting:
Hosted solutions do not require you to implement your own infrastructure to manage the WebSocket connections. This means you don't need to worry about the uptime, security, provisioning or maintenance of that infrastructure; it is done for you.
Hosted solutions scale seamlessly. As your app's user base and connection count grow, you no longer need to provision more infrastructure and load balance/route connections.
Hosted solutions such as Pusher have dedicated support teams to help during implementation/troubleshooting.
Hosted solutions often have round-the-clock server monitoring, ensuring the platform is available 24/7 without the need for you to respond to server alarms in the early hours.
A lot has been said about build vs buy over the years, and there are many resources that discuss the merits of both (in fact Pusher has a resource for this). Ultimately this is not a decision that can be made for you, you will need to assess your application requirements and then look at what best fits your use case.

Enabling socket.io in Sails to run on multiple ports

I have to set up a Sails app where I can have socket.io connections on multiple ports, for example authentication on port 3999 and data synchronization on port 4999.
Is there any way to do so?
I asked a similar question yesterday and it seems that yours is also similar to mine, here's what I'm going to implement.
Given that you will have multiple instances that are going to work on different ports, they won't be able to talk to each other directly and that breaks websocket functionality.
It seems that there are multiple solutions to this (sticky sessions vs. using the pub/sub functionality of Redis); I chose Redis. There's a module for that called socket.io-redis. You also need the emitter module (socket.io-emitter).
If you choose that route, it will work without a problem whether you run many instances on a single server or multiple instances across multiple servers, thanks to Redis.
At least that's what I know for now; I've been searching for a few days and haven't tried it yet.
Not to mention, you can use Nginx for load balancing, like below (copied from the socket.io docs). A sketch of the adapter setup follows the Nginx example.
upstream io_nodes {
ip_hash;
server 127.0.0.1:6001;
server 127.0.0.1:6002;
server 127.0.0.1:6003;
server 127.0.0.1:6004;
}
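The Redis adapter mentioned above wires into socket.io roughly like this; a TypeScript sketch for the socket.io v2-era API that socket.io-redis targets (the ports and the Redis location are assumptions):

const http = require("http");
const socketIo = require("socket.io");           // v2-era callable API
const redisAdapter = require("socket.io-redis");

const PORT = Number(process.env.PORT) || 6001;   // one port per instance, matching the upstream block
const server = http.createServer();
const io = socketIo(server);

// All instances share rooms and broadcasts through Redis pub/sub.
io.adapter(redisAdapter({ host: "127.0.0.1", port: 6379 }));

io.on("connection", (socket: any) => {
  socket.on("join", (room: string) => socket.join(room));
});

server.listen(PORT);

// A separate worker process can push events into the same cluster with socket.io-emitter:
// const emitter = require("socket.io-emitter")({ host: "127.0.0.1", port: 6379 });
// emitter.to("some-room").emit("event", payload);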

Do I *really* need RPC and NETBIOS to use transactional NServiceBus queues between local servers and Amazon EC2?

We have been trying - without success - to get transactional message queues working between local servers and our cloud servers up in Amazon EC2.
We're using NServiceBus, and have got the pub/sub examples and various other trivial apps working locally between here and EC2, but trying to spin up the components of our actual application is proving... vexatious.
As far as I can work out, to allow a local server (DYLAN-PC) to send a message transactionally via a queue on an Amazon EC2 instance, I will need to:
Enable NETBIOS name resolution (e.g. via the /etc/lmhosts file) at both ends
Allow RPC connections to be initiated from either end (so open port 135 for RPC plus various other ports)
Configure MSDTC on both systems, enabling remote connections and inbound/outbound connections
Have I missed something? In particular, the requirement to allow NetBIOS in an age where everything (including Active Directory!) runs on DNS seems particularly archaic. Are we doing something stupid trying to use MSMQ between sites like this? This is the first big project where we've tried this kind of distributed architecture, and the deployment/configuration is starting to hurt so much I'm convinced we've taken a wrong turn somewhere... a little perspective or advice would be gratefully received!
If you're looking to build a geographically distributed system where you can't arrange a VPN between the sites, you should use the gateway capabilities of NServiceBus to communicate over alternate transports (like HTTP) between those sites.
RPC is required for reading from remote queues.
If you push to remote queues and pull from local queues, you won't be using RPC.

Socket.IO with RabbitMQ?

I'm currently using Socket.IO with the Redis store, and I'm using the Room feature with it, so I'm totally okay with room join (subscribe) and leave (unsubscribe) in Socket.IO.
I just saw this page:
http://www.rabbitmq.com/blog/2010/11/12/rabbitmq-nodejs-rabbitjs/
and I found that some people are using Socket.IO with RabbitMQ.
Why is using Socket.IO alone not good enough?
Is there any good reason to use Socket.IO with RabbitMQ?
Socket.IO is a browser --> server transport mechanism, whereas RabbitMQ is a server --> server message bus.
The two can be implemented together to create a very responsive system in scenarios where a user journey consists of a message starting life on a browser and ending up in, say, some persistence layer (such as a database).
A message would be transported to the web server via socketIO and then, instead of the web server being responsible for persisting the message, it would drop it on a Rabbit queue and leave some other process responsible for persisting it. This way, the web server is free to return to its web serving responsibilities and, crucially, lessening its load.
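A rough TypeScript sketch of that hand-off, using amqplib on the web-server side (the queue name, event name, and broker URL are illustrative):

import { createServer } from "http";
import { Server } from "socket.io";
import amqp from "amqplib";

async function main() {
  // Connect to RabbitMQ and declare a durable queue for messages awaiting persistence.
  const conn = await amqp.connect("amqp://localhost"); // assumed broker location
  const ch = await conn.createChannel();
  await ch.assertQueue("chat.persist", { durable: true });

  const httpServer = createServer();
  const io = new Server(httpServer);

  io.on("connection", (socket) => {
    // The browser sends a chat message over Socket.IO...
    socket.on("chat:message", (msg) => {
      // ...and the web server just drops it on the queue and moves on.
      ch.sendToQueue("chat.persist", Buffer.from(JSON.stringify(msg)), { persistent: true });
    });
  });

  httpServer.listen(3000);
}

main().catch(console.error);

A separate consumer process would then ch.consume("chat.persist", ...), write each message to the database, and ack it on success, completely decoupled from the web tier.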
Take a look at SockJS: http://sockjs.org
It's made by the RabbitMQ team.
It's simpler than Socket.io.
There's an Erlang server for SockJS.
Apart from that, there is an experimental project within the RabbitMQ team that intends to provide a SockJS plugin for RabbitMQ.
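For comparison, the SockJS client API really is minimal; a small sketch (the URL is a placeholder, and the server side needs a matching SockJS endpoint):

import SockJS from "sockjs-client";

// Connect to a SockJS endpoint exposed by the server (placeholder URL).
const sock = new SockJS("http://example.com/echo");

sock.onopen = () => {
  sock.send("hello");            // plain strings over a WebSocket-like API
};

sock.onmessage = (e) => {
  console.log("received:", e.data);
};

sock.onclose = () => {
  console.log("connection closed");
};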
I just used RabbitMQ with Socket.IO for a totally different reason than the one in the accepted answer. It wasn't relevant back in 2012, which is why I'm adding an update here.
I'm using a Docker Swarm deployment of a chat application with scalability and high availability. I have three replicas of the chat application (which uses Socket.IO) running in the cluster. The Swarm cluster automatically load-balances the incoming requests, and at any given time a client might be connected to any of the three replicas of the application.
In this scenario, it becomes really necessary to sync the WebSocket responses across the replicas of the application, because two clients connected to two different instances of the application wouldn't get each other's messages, as they're connected to different WebSockets.
This is where RabbitMQ comes in. It syncs all the instances of the application, so whenever a message is pushed from a WebSocket on one replica, it gets pushed by all replicas.
Complete details of the project have been given here. This is a potential use case of Socket.IO and RabbitMQ used in conjunction, and it applies to any application using Socket.IO in a distributed environment with high availability and scalability.
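A hedged TypeScript sketch of that setup with amqplib (the exchange name, event name, and broker hostname are illustrative): each replica publishes incoming chat messages to a fanout exchange and consumes from its own exclusive queue bound to it, so every replica re-emits every message to its local clients.

import { createServer } from "http";
import { Server } from "socket.io";
import amqp from "amqplib";

async function main() {
  const conn = await amqp.connect("amqp://rabbitmq"); // assumed broker hostname in the swarm
  const ch = await conn.createChannel();
  await ch.assertExchange("chat", "fanout", { durable: false });

  // Each replica gets its own throwaway queue bound to the fanout exchange.
  const { queue } = await ch.assertQueue("", { exclusive: true });
  await ch.bindQueue(queue, "chat", "");

  const httpServer = createServer();
  const io = new Server(httpServer);

  // Messages arriving from any replica are emitted to this replica's local clients.
  ch.consume(queue, (msg) => {
    if (msg) io.emit("chat:message", JSON.parse(msg.content.toString()));
  }, { noAck: true });

  // Messages from this replica's own clients go to the exchange
  // (and come back around via the consumer above, on every replica).
  io.on("connection", (socket) => {
    socket.on("chat:message", (payload) => {
      ch.publish("chat", "", Buffer.from(JSON.stringify(payload)));
    });
  });

  httpServer.listen(3000);
}

main().catch(console.error);

A socket.io adapter (such as the Redis one mentioned earlier) solves the same fan-out problem; this is simply the RabbitMQ flavour of it.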

OpenFire, HTTP-BIND and performance

I'm looking into getting an Openfire server started and setting up a strophe.js client to connect to it. My concern is that using http-bind might be costly in terms of performance versus making a direct XMPP connection.
Can anyone tell me whether my concern is relevant or not? And if so, to what extent?
The alternative would be to use a flash proxy for all communication with OpenFire.
Thank you
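For context, the http-bind (BOSH) setup being weighed here looks roughly like this on the Strophe side; a sketch in TypeScript where the hostname and credentials are placeholders and 7070/http-bind/ is Openfire's usual HTTP-binding endpoint (adjust to your install):

import { Strophe, $pres } from "strophe.js";

// BOSH endpoint exposed by Openfire's HTTP binding service (placeholder host).
const BOSH_SERVICE = "http://chat.example.com:7070/http-bind/";

const connection = new Strophe.Connection(BOSH_SERVICE);

connection.connect("user@chat.example.com", "secret", (status: number) => {
  if (status === Strophe.Status.CONNECTED) {
    console.log("connected over BOSH");
    // Send initial presence so the server knows we're online.
    connection.send($pres());
  } else if (status === Strophe.Status.DISCONNECTED) {
    console.log("disconnected");
  }
});

A native XMPP client would instead hold a single TCP connection to port 5222, which is what the answers below compare against.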
BOSH is more verbose than normal XMPP, especially when idle. An idle BOSH connection might be about 2 HTTP requests per minute, while a normal connection can sit idle for hours or even days without sending a single packet (in theory, in practice you'll have pings and keepalives to combat NATs and broken firewalls).
But, the only real way to know is to benchmark. Depending on your use case, and what your clients are (will be) doing, the difference might be negligible, or not.
Basics:
Socket - zero overhead.
HTTP - requests even on an idle session.
I doubt that you will have 1M users at once, but if you are aiming for that, then a connection-less protocol like HTTP will be much better, as I'm not sure any OS can support that kind of connected-socket volume.
Also, you can tie your Openfire servers together to form a farm, and you'll have nice scalability there.
We used Openfire and BOSH with about 400 concurrent users in the same MUC channel.
What we noticed is that Openfire leaks memory. We had about 1.5-2 GB of memory in use and got constant out-of-memory exceptions.
Also, the BOSH implementation in Openfire is pretty bad. We then switched to Punjab, which was better but couldn't solve the Openfire issue.
We're now using ejabberd with its built-in http-bind implementation and it scales pretty well. Load on the server running ejabberd is nearly zero.
At the moment we face the problem that our five web servers, which we use to handle the chat load, are sometimes overloaded at about 200 connected users.
I'm trying to use WebSockets now, but it seems that they don't work yet.
Maybe routing the http-bind traffic not through an Apache rewrite rule but directly through a load balancer/proxy would solve the issue, but I couldn't find a way to do this at the moment.
Hope this helps.
I ended up using node.js and http://code.google.com/p/node-xmpp-bosh as I faced some difficulties connecting directly to Openfire via BOSH.
I have a production site running with node.js configured to proxy all BOSH requests and it works like a charm (around 50 concurrent users). The only downside so far: in the Openfire admin console you will not see the actual IP addresses of the connected clients; only the local server address shows up, as Openfire gets the connection from the node.js server.
