Confused between Socket.IO and NATS.io: which to use in production?

Hope all are fine. I have some confusion regarding Socket.IO and NATS.io. Currently I have a mobile application that uses Socket.IO to broadcast crypto price data every 500 milliseconds to each active client.
The application currently has 25k active users, with 400-500 clients continuously connected to the socket at any given time. I have recently heard of microservices, and the first thing that came to mind is NATS.io.
Now I'm confused about which of the two is the right fit for my resources in terms of CPU, RAM, etc.
Please advise whether I should move my production setup to NATS.io for better performance and lower resource consumption.

I think you should stick with Socket.IO for now, because Socket.IO has a good community you can turn to with any issues.
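For reference, the setup the question describes, emitting a price tick to every connected client on a fixed interval, looks roughly like the sketch below. This is a minimal illustration, not the asker's actual code: `fetchPrices` and the port are placeholder names, and the Socket.IO wiring is kept inside `startServer` so the formatting helper can stand on its own.

```javascript
// Formats one price tick and emits it on the given emitter.
// `emitter` only needs an `emit(event, payload)` method, so it can be
// a Socket.IO namespace in production or a stub when testing.
function broadcastTick(emitter, prices) {
  const payload = { ts: Date.now(), prices };
  emitter.emit('tick', payload);
  return payload;
}

// Wiring sketch (assumes `npm install socket.io`). `fetchPrices` is a
// hypothetical function returning the latest crypto prices.
function startServer(port, fetchPrices) {
  const io = require('socket.io')(port);
  // io.sockets.emit sends the event to every connected client
  setInterval(() => broadcastTick(io.sockets, fetchPrices()), 500);
  return io;
}
```

Note that with this fan-out model the work per tick grows linearly with the number of connected clients, which is exactly the resource concern the question raises.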

Related

Scaling Phoenix on Heroku

I don't have a ton of experience with Heroku, and even less with Phoenix, so this may be a stupid question... but I want to make sure I am making a good choice on hosting :)
From what I understand, the way you scale Phoenix is to add another server, launch another node, and connect them, then let BEAM/OTP work its magic to handle workload balancing. On Heroku, dynos can't really talk to each other over a local network, which from what I understand is something BEAM requires in order to cluster. So adding dynos will result in a more "traditional" scaling model, where an external load balancer distributes connections between unconnected nodes, with the database as the shared state.
My question is: how big an impact will this have? Is it only an issue once you are hitting serious levels of load/scale, or will it mean spending a lot more money on infrastructure than is necessary?
You'll get the best performance on a host that supports clustering, but Phoenix has a PubSub adapter system exactly for deployments like Heroku:
https://github.com/phoenixframework/phoenix_pubsub
A one-line config change and a mix.exs deps entry, and you'll have multi-node channels on Heroku via our Redis adapter.
This is a very open question, so I am sure my answer won't be comprehensive.
In your situation the most important question is: will I use Phoenix channels?
If you use plain old HTTP, it can be mostly stateless. There are lots of methods to simulate stateful connections, such as storing sessions in cookies. At the end of the day, it doesn't matter whether your backend servers are connected to each other, because each of them performs independent computations. Your load balancer can randomly select any server and it will always work. This property of HTTP is what lets the protocol scale so well. You can definitely use Heroku in that scenario and it will work great.
If you use Phoenix channels, things get complicated. You still want to be able to connect to any of the servers, but you will probably be sending messages to other users in real time, and they may be connected to other servers. Phoenix solves this problem for you by clustering via BEAM, and that will be hard, or even impossible, on Heroku.
To sum up: it is not a question of small scale versus big scale. It is a question of features. Scaling channels will require clustering; scaling plain old HTTP will not.

WebSocket server implementation: real-world performance in a high-concurrency production environment

I'm evaluating replacing some HTTP polling features of my production application with the new JEE7-supported WebSocket feature. I'm planning to use WildFly 8 as my next production server environment, and I've migrated some of my WebSocket-compatible modules with good results at development time; but I have doubts about how it will work in production and what performance the WebSocket implementation will have in a high-load environment.
I've been searching for documentation on the most widely used JEE servers, but the main vendors don't yet have a production JEE7 environment, and where they do have a JEE7 version, they don't have enough documentation on how the implementation works, or any figures for maximum concurrent users. In addition, some unofficial comments say WebSocket connections are each associated "with a server socket", but this seems not very efficient.
We can assume the WebSocket is used only for receiving data, from the client's point of view, and that each user will receive, for example, an average of 10 messages per minute containing a small JSON-serialized object (a typical model data class). My requirement is more like a stock market than a chat application, for example.
What real performance can I expect from WebSockets in a production environment on WildFly 8, for example? I'm also interested in comparisons with other JEE7 implementations you are acquainted with.
WebSockets are TCP/IP based; no matter the implementation, they will always use a socket (and hence an open file).
Performance-wise it really depends on what you are doing: basically how many clients, how many requests per second per client, and obviously how big your hardware is. Undertow is based on non-blocking IO and is generally pretty fast, so it should be enough for what you need.
If you are testing lots of clients, just be aware that you will hit OS-level limits (definitely open files, and possibly available ports, depending on your test).

Faye & Ruby or Node.js for scalability

I'm looking to prototype a web app that will use sockets to push a gentle stream of messages to mobile web app clients. I want to pick an architecture that will work for a large number of clients if/when it moves to production (so I don't have to change it later).
I'd like to start with Rails because it's familiar and imposes a strong structure from the get-go, which makes prototyping easier. I think Faye will provide what I need in terms of a pub/sub layer, but am I going to create a bottleneck by using Ruby with a high number of socket connections, or will Faye isolate/protect the Ruby server from that load, if you follow?
At the outset the load will not be significant, so it won't matter; I just don't want to be hobbled later on, when there are a lot of socket connections, wishing I had used Node.js! Server-side JS would be fairly new to me, but I guess there are benefits in that the JS app can include the client side as well.
Advice appreciated.
You can take a look at https://github.com/faye/faye-redis-node.
This plugin provides a Redis-based backend for the Faye messaging server. It allows a single Faye service to be distributed across many front-end web servers by storing state and routing messages through a Redis database server.
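For concreteness, the Redis-backed setup looks roughly like the wiring below, following the shape of the plugin's README; the host and port values are placeholders for your own Redis instance. The `engine` option is what moves Faye's state out of the Node process, so multiple front-end servers can share it.

```javascript
// Wiring sketch for a Faye server backed by Redis
// (assumes `npm install faye faye-redis`).
var faye  = require('faye'),
    redis = require('faye-redis');

var bayeux = new faye.NodeAdapter({
  mount:   '/faye',
  timeout: 45,
  engine: {
    type: redis,        // swap the default in-memory engine for Redis
    host: 'localhost',  // placeholder: your Redis host
    port: 6379          // placeholder: your Redis port
  }
});

// then attach it to an existing HTTP server, e.g. bayeux.attach(server)
```

Because the state lives in Redis rather than in any one Ruby or Node process, this is also what protects the app server from being the bottleneck the question worries about.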

WebSocket cross-connection communication (Tornado?)

I'm fumbling around a bit with WebSockets, and was pretty pleased with how easy it was to get a Tornado server running that handles basic WebSocket connections. I had never used Tornado before today, and while I like what I've seen, there are a few questions I have regarding its use.
Primarily, I'm using WebSockets so that I can have low-overhead communications between two or more client machines. (For the purposes of conversation let's just say it's a chat client) Obviously I can connect into the server from multiple machines, and they can all push messages to the server and the server can respond, which is great! But that's not too much better than your standard AJAX requests. If I have a persistent connection I want to be able to push data to the clients as well. The simplest possible scenario is user 1 posts a message to the server and upon receiving it the server immediately pushes it to user 2.
So what would be a good way to accomplish that? As far as I can see in Tornado, there's no way to communicate between connections other than placing the message in a datastore somewhere and having all the other connections poll for new info. That strikes me as terribly clunky, though, because all you're really doing at that point is moving the polling from the client to the server.
Of course, I may be barking up the wrong tree entirely here. It's certainly plausible that Tornado simply isn't the right tool for this job, and if that's the case I'd be happy to hear suggestions for alternatives!
Here is a chat server using Tornado, WebSockets and Redis: https://gist.github.com/pelletier/532067 (Updated: link fixed, thanks #SamidhT)
Though the answer has already been accepted: using a different service still seems very inefficient to me. Why don't you just go with shared memory plus condition variables/semaphores? This sounds like a standard producer-consumer problem.
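The pattern both answers circle around, pushing to every other open connection when a message arrives instead of having connections poll a store, can be sketched framework-neutrally. In Tornado you would keep a class-level set of WebSocketHandler instances and write to each one's connection; the minimal hub below does the same bookkeeping in plain JavaScript (the class and method names are illustrative, not from any framework):

```javascript
// A minimal in-memory hub: each connection registers a send callback,
// and any published message is pushed to every *other* connection
// immediately, with no polling anywhere.
class Hub {
  constructor() {
    this.clients = new Map(); // connection id -> send callback
    this.nextId = 1;
  }
  // call when a websocket opens; returns the connection's id
  connect(send) {
    const id = this.nextId++;
    this.clients.set(id, send);
    return id;
  }
  // call when a websocket closes
  disconnect(id) {
    this.clients.delete(id);
  }
  // call when a message arrives from `fromId`; returns delivery count
  publish(fromId, message) {
    let delivered = 0;
    for (const [id, send] of this.clients) {
      if (id !== fromId) {
        send(message);
        delivered++;
      }
    }
    return delivered;
  }
}
```

This works because the event-loop servers in question (Tornado, Node) handle all connections in one process, so an in-memory registry is safe without locks; once you scale past one process you need the external-store approach from the accepted answer.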

Ajax instant messaging (web-based)

Just wondering: would it be acceptable to build some simple Ajax instant messaging (web-based) for a large social network service (on the order of thousands of registered users)? I am new to this, so I'm just wondering. What about checking for a new message every two or three seconds?
Edited: Could a plain shared server handle so many requests that often? And yes, I would roll my own program.
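Back-of-envelope arithmetic is useful here: with short polling, the request rate is simply users divided by the poll interval, regardless of whether any messages exist. A quick sketch (the user counts are illustrative, not from the question):

```javascript
// Sustained requests per second generated by naive short polling:
// every online user issues one request per interval, messages or not.
function pollingRequestsPerSecond(activeUsers, pollIntervalSeconds) {
  return activeUsers / pollIntervalSeconds;
}

// e.g. 10,000 users polling every 2 seconds is a sustained
// 5,000 requests/second before anyone sends a single message.
const load = pollingRequestsPerSecond(10000, 2); // 5000
```

That baseline load is why a plain shared host struggles with this approach, and why the answers below point at existing XMPP clients and load testing rather than rolling a naive poller.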
There are many web IM clients based on the standard XMPP protocol. You could try iJab or JWChat.
It doesn't make sense to write your own unless you have some unique requirements, but whether the server can handle this largely depends on the server language and the webserver setup, and on how well they scale.
You will need to do some heavy load testing to ensure that the expected peak load will work, as your traffic will be very heavy. For example, if your social networking site is soccer-related, then during the World Cup you can expect to see far more traffic than on a Wednesday morning.
If you asked the question with:
I want to use language X.
I want to use webserver Y.
I am using this framework for the webservice.
I would like to accept voice recordings and webcam recordings over IM, as well as text.
How well will this scale on my one 1GHz server?
then it would be much easier to give a useful answer.
If you manage to make a peer-to-peer (P2P) browser-embedded chat, then even shared hosting will do for tens of thousands of simultaneous users :) :)
