PaaS for WebSocket - Heroku

I am looking for a WebSocket-enabled PaaS. So far I have only experimented with Heroku, and it works quite well. Would you recommend other services?
Side question: I'm slightly worried about billing. In the case of Heroku, usage seems to be calculated from the time dynos are busy. I guess that with a WebSocket connection there may be a lot of idle time between data exchanges, and it would be fully billed anyway. Is that correct?

Heroku will bill you for the time the dyno is up, whether or not it is being used at all.
We've used Pusher as a complete WebSocket service: it lets you asynchronously publish events from your main Heroku app and off-load the WebSocket connections and event publishing to Pusher.
Pusher charges based on the volume of WebSocket traffic, which might be cheaper if you have a small volume or peaky traffic and don't want to pay for the consistent set of dynos needed to service your peak traffic.

Related

Reconnect Interval

I am looking for best practices for handling server restarts. Specifically, I push stock prices to users over WebSockets for a day-trading simulation web app with 10k concurrent users. To keep the UX responsive, I reconnect to the WebSocket when the onclose event fires. As our user base has grown we have had to scale our hardware. In addition to better hardware, we have implemented a random delay before reconnecting, to spread out the influx of handshakes when the server restarts every night (continuous deployment). However, some of our users have poor internet (ISP and/or Wi-Fi) and their connection constantly drops. For these users I would prefer that they reconnect immediately. Is there a solution to this problem that doesn't have the aforementioned trade-offs?
The question calls for a subjective response; here is mine :)
Distinguishing a client disconnection from a server shutdown:
This can be achieved by sending a shutdown message over the WebSocket so that active clients can prepare and reconnect with a random delay. A client that encounters an onclose event without having received a proper shutdown broadcast can then reconnect as soon as possible. This means the client application needs to be modified to account for this special shutdown event.
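A minimal client-side sketch of that idea (the `{"type":"shutdown"}` message shape, the 30-second spread window, and the `SocketLike` interface are assumptions for illustration, not part of the original answer):

```typescript
// Minimal shape of the socket we care about (stands in for a browser WebSocket).
interface SocketLike {
  onmessage: ((e: { data: string }) => void) | null;
  onclose: (() => void) | null;
}

// Spread reconnects out only when the server announced the shutdown;
// an unexpected drop (flaky ISP/Wi-Fi) reconnects immediately.
function reconnectDelayMs(shutdownAnnounced: boolean, spreadWindowMs = 30_000): number {
  return shutdownAnnounced ? Math.random() * spreadWindowMs : 0;
}

// Watch a socket: remember whether a shutdown was broadcast, and on close
// schedule the reconnect with the appropriate delay.
function watch(ws: SocketLike, reconnect: (delayMs: number) => void): void {
  let shutdownAnnounced = false;
  ws.onmessage = (e) => {
    try {
      if (JSON.parse(e.data).type === "shutdown") shutdownAnnounced = true; // assumed message shape
    } catch (_err) {
      // not JSON; ignore
    }
  };
  ws.onclose = () => reconnect(reconnectDelayMs(shutdownAnnounced));
}
```

With this split, the flaky-connection users get an immediate reconnect while a planned nightly restart still spreads its handshake storm over the window.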
Handling the handshake load: some web servers can treat incoming connections as an asynchronous parallel event queue, so that at most X connections are initialized at the same time (in parallel) while the others wait in a queue until their turn comes. This safeguards server performance, and the WebSocket handshakes are automatically delayed based on the true processing capability of the server. Of course, this may mean a change of web server technology, and it depends on your use case.

Using Laravel Events without Pusher nor Redis?

I am surprised that I need a third-party service such as Pusher, or Redis, to get bidirectional communication from my server to my clients through WebSockets.
What are the advantages of Pusher over Redis, or over simply running a Socket.IO server behind nginx? I see many disadvantages:
- Relies on a third-party service
- Pricey above 200k messages a day
- Cannot work on a LAN without Internet access
From my understanding, there are only two possible solutions with Laravel:
- Laravel Echo + Redis
- Pusher
- Laravel Websockets
- Pusher PHP Server
Is there a third alternative?
There is a clone of the Pusher server available for Laravel, have you checked it?
https://beyondco.de/docs/laravel-websockets/getting-started/introduction
You can use it on a LAN. It runs a PHP WebSocket server on some port, like 5000. Just use Laravel Echo, or the Pusher SDK for mobile apps, and connect them to your server on port 5000. You don't have to pay anyone; it runs a clone of the Pusher server on your own machine.
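For illustration only, a Laravel Echo client pointed at a self-hosted laravel-websockets server might be configured like this (the key and host are placeholders, port 5000 is taken from the answer above, and the exact option names should be checked against the laravel-websockets docs):

```typescript
import Echo from 'laravel-echo';
import Pusher from 'pusher-js';

// laravel-echo's 'pusher' broadcaster expects the pusher-js client globally.
(window as any).Pusher = Pusher;

// All values below are placeholders for your own server.
const echo = new Echo({
  broadcaster: 'pusher',
  key: 'local-app-key',        // the app key configured in laravel-websockets
  wsHost: window.location.hostname,
  wsPort: 5000,                // the port the answer mentions
  forceTLS: false,             // plain ws:// on a LAN
  disableStats: true,          // don't report stats to Pusher's cloud
});
```

Because laravel-websockets speaks the Pusher protocol, the only change compared to a real Pusher setup is pointing `wsHost`/`wsPort` at your own server.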
The benefits of using a third party solution are different per use case and per person. However, broadly speaking there are a couple of benefits that haven't been mentioned here that are worth highlighting:
Hosted solutions do not require you to implement your own infrastructure to manage the WebSocket connections. This means you don't need to worry about the uptime, security, provisioning, or maintenance of the infrastructure; that is done for you.
Hosted solutions scale seamlessly. As your app user base grows and your connections grow, you no longer need to provision more infrastructure and load balance/route connections.
Hosted solutions such as Pusher have dedicated support teams to help during implementation/troubleshooting.
Hosted solutions often have round-the-clock server monitoring, ensuring the platform is available 24/7 without you needing to respond to server alarms in the early hours.
A lot has been said about build vs. buy over the years, and there are many resources that discuss the merits of both (in fact Pusher has a resource for this). Ultimately this is not a decision that can be made for you; you will need to assess your application requirements and then look at what best fits your use case.

How to gauge the scalability of websockets in an application

I am struggling to find information on how to gauge the scalability of websockets. A scenario -
Let's say a client wants to establish a WebSocket connection from a browser. The client application and the service layer (Micronaut) both have two instances behind an ELB; the service layer sits in the us-east region, and anyone from around the world can access the frontend app from a browser. We can expect an open connection for an average of 2-5 minutes, and no longer than 30 minutes.
Is there a ballpark number for how many concurrent WebSocket connections a couple of servers can handle? Or are there factors I didn't mention that are vital to handling WebSocket connections in general?
Thank you in advance.
I'm assuming you want to know the scalability of the WS implementation in Micronaut and not of WS in general. Of course, scalability depends on both the specific implementation and WS itself. You probably already know this, but I wanted to state it for the record. You may also want to make sure you raise the file-descriptor limit for your server process to the maximum (you may have to tune your kernel to increase the FD limit).
By the way, don't forget to handle retries and reconnects, as you would for a low-level TCP connection.
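As a rough back-of-envelope (all numbers here are illustrative assumptions, not measurements): idle WebSocket connections are mostly bounded by memory and the file-descriptor limit, so you can sketch a ceiling like this:

```typescript
// Estimate a ceiling on concurrent idle WebSocket connections for one process.
// Inputs are assumptions: memory headroom, per-connection overhead, FD limit.
function maxConnections(memoryBytes: number, bytesPerConn: number, fdLimit: number): number {
  const byMemory = Math.floor(memoryBytes / bytesPerConn);
  return Math.min(byMemory, fdLimit);
}

// e.g. 4 GiB of headroom, ~50 KiB per idle connection (illustrative), 65536 FDs:
const ceiling = maxConnections(4 * 1024 * 1024 * 1024, 50 * 1024, 65536);
```

In this example the FD limit, not memory, is the binding constraint, which is exactly why raising it matters. Real per-connection cost varies a lot with the framework, TLS, and buffer sizes, so measure under load rather than trusting any ballpark.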

Socket.IO with RabbitMQ?

I'm currently using Socket.IO with the Redis store, and I'm using the Room feature with it. So I'm totally okay with room join (subscribe) and leave (unsubscribe) in Socket.IO.
I just saw this page:
http://www.rabbitmq.com/blog/2010/11/12/rabbitmq-nodejs-rabbitjs/
and found that some people are using Socket.IO with RabbitMQ.
Why is using Socket.IO alone not good enough? Is there any good reason to use Socket.IO with RabbitMQ?
SocketIO is a browser --> server transport mechanism whereas RabbitMQ is a server --> server message bus.
The two can be implemented together to create a very responsive system in scenarios where a user journey consists of a message starting life on a browser and ending up in, say, some persistence layer (such as a database).
A message would be transported to the web server via Socket.IO, and then, instead of the web server being responsible for persisting the message, it would drop it on a Rabbit queue and leave some other process responsible for persistence. This way the web server is free to return to its web-serving responsibilities and, crucially, its load is lessened.
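The hand-off can be sketched like this (the in-memory array is just a stand-in for a real RabbitMQ queue, and the function names are invented for illustration; in a real setup you would publish with an AMQP client and run the worker as a separate process):

```typescript
// Stand-in for a RabbitMQ queue. In reality you'd publish via an AMQP
// channel and a separate consumer process would persist the messages.
const queue: string[] = [];

// Web-server side: accept the message, enqueue it, return immediately.
// The request handler never waits on the database.
function handleIncoming(msg: string): void {
  queue.push(msg); // channel.sendToQueue(...) in a real setup
}

// Worker side: drain the queue, persisting each message.
// Returns how many messages were processed.
function drain(persist: (msg: string) => void): number {
  let n = 0;
  while (queue.length > 0) {
    persist(queue.shift()!);
    n++;
  }
  return n;
}
```

The point of the pattern is that `handleIncoming` is O(1) from the web server's perspective; persistence latency is absorbed by the worker instead of the request path.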
Take a look at SockJS http://sockjs.org .
It's made by the RabbitMQ team
It's simpler than Socket.io
There's an erlang server for SockJS
Apart from that, there is an experimental project within RabbitMQ team that intends to provide a SockJS plugin for RabbitMQ.
I just used RabbitMQ with Socket.IO for a totally different reason than the one in the accepted answer. It wasn't that relevant in 2012, which is why I'm adding an update here.
I'm using a docker swarm deployment of a chat application with scalability and high availability. I have three replicas of the chat application (which uses socket.io) running in the cluster. The swarm cluster automatically load-balances the incoming requests and at any given time a client might get connected to any of the three replicas of the application.
In this scenario, it becomes really necessary to keep the WebSocket messages in sync across the replicas of the application, because two clients connected to two different instances wouldn't otherwise get each other's messages, having been connected to different WebSockets.
This is where RabbitMQ comes in. It syncs all the instances of the application: whenever a message is pushed through a WebSocket on one replica, it gets pushed by all replicas.
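A toy model of that fan-out (the `Replica` class and `publish` function are invented for illustration; in the real deployment RabbitMQ plays the role of the bus that relays to every replica):

```typescript
type Delivery = { client: string; msg: string };

// Each replica holds its own set of connected clients and delivers
// incoming messages only to its local sockets.
class Replica {
  received: Delivery[] = [];
  constructor(public clients: string[]) {}
  deliver(msg: string): void {
    for (const client of this.clients) this.received.push({ client, msg });
  }
}

// The bus (RabbitMQ in the real setup) fans a published message out to
// all replicas, so clients on different instances all see it,
// regardless of which replica it originated on.
function publish(replicas: Replica[], msg: string): void {
  for (const r of replicas) r.deliver(msg);
}
```

Without the bus, a chat message arriving on replica A would never reach a client whose socket happens to terminate on replica B; with it, every replica delivers the message to its own local sockets.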
Complete details of the project have been given here. This is a potential use case of socket.io and rabbitMQ use in conjunction. This goes for any application using socket.io in a distributed environment with high availability and scalability.

Is there a traffic limit on Apple's Push Notification Service?

Is there a traffic limit on APNs?
The documentation says:
"You should also retain connections with APNs across multiple notifications. (APNs may consider connections that are rapidly and repeatedly established and torn down as a denial-of-service attack.)"
It seems heavy traffic is allowed as long as you keep the connection open; only the rapid connect/disconnect case is mentioned. Is there really no traffic limit?
That is what they say :-) So go for it!
