To scale a websocket framework (such as socket.io), you usually have to employ some kind of sticky load balancing to ensure each client stays connected to the same server. Does the new "Heroku improved router" require this type of load balancing, or will websockets keep a connection to the same server?
The relevant documentation for websockets on Heroku is at https://devcenter.heroku.com/articles/websockets
You can see the load-balancing requirements in the Application Architecture section, which recommends not relying on sticky sessions but instead using a back-end system that makes the state available to all instances.
The sticky-session approach tends to work very poorly on a platform such as Heroku, where dynos and back-end instances can be moved, restarted, or stopped at least once a day; each of these events would look like a netsplit and could leave your application in an undesirable state.
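As a rough sketch of that shared-back-end approach, assume a Redis instance reachable by every dyno (Redis and the redis-py client are my assumption here, not something the documentation mandates); any instance can publish a message, and whichever instance holds a given client's websocket delivers it:

```python
# Hypothetical sketch: route per-user messages through Redis so no sticky
# sessions are needed. Assumes the redis-py package and a REDIS_URL env var
# pointing at a Redis server every instance can reach.
import os
import redis

r = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))

def publish_to_user(user_id, message):
    # Any instance may call this, no matter where the user is connected.
    r.publish(f"user:{user_id}", message)

def deliver_to_user(user_id, websocket):
    # Runs on whichever instance actually holds the user's websocket.
    pubsub = r.pubsub()
    pubsub.subscribe(f"user:{user_id}")
    for event in pubsub.listen():
        if event["type"] == "message":
            websocket.send(event["data"])  # `websocket` is a placeholder object
```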
I don't have a ton of experience with Heroku, and even less with Phoenix, so this may be a stupid question... but I want to make sure I'm making a good choice on hosting :)
From what I understand, the way you scale Phoenix is to add another server, launch another node, and connect them, then let BEAM/OTP work its magic to handle load balancing. On Heroku, dynos can't really talk to each other over a local network, which from what I understand is something BEAM requires in order to cluster. So adding dynos will result in a more "traditional" scaling model, where an external load balancer spreads connections across unconnected nodes, with the database as the shared state.
My question here is: how big an impact will this have? Is it only an issue when you are hitting serious levels of load/scale, or will it mean spending a lot more money on infrastructure than is necessary?
You'll get the best performance on a host that supports clustering, but Phoenix has a PubSub adapter system precisely for deployments like Heroku:
https://github.com/phoenixframework/phoenix_pubsub
A one-line config change and a mix.exs deps entry and you'll have multi-node channels on Heroku via our Redis adapter.
This is a very open question, so I'm sure my answer won't be comprehensive.
In your situation the most important question is: will I use Phoenix channels?
If you use plain old HTTP, it can be mostly stateless. There are lots of ways to simulate a stateful connection, like storing sessions in cookies. At the end of the day, it doesn't matter whether your backend servers are connected to each other, because each of them does its computations independently. Your load balancer can pick any server at random and it will always work. This property of HTTP is what lets the protocol scale so well. You can definitely use Heroku in that scenario and it will work great.
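As a small, hedged illustration of the "sessions in cookies" idea (the helper names and secret below are hypothetical, not anything Phoenix or Heroku prescribes), the session data can be signed and handed to the client, so any server behind the load balancer can verify it without shared memory:

```python
# Minimal sketch of a signed session cookie. The server keeps no session
# state, so any backend instance can validate the cookie. SECRET_KEY and
# both helper functions are hypothetical names for illustration only.
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"change-me"

def encode_session(data):
    payload = base64.urlsafe_b64encode(json.dumps(data).encode())
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{signature}"

def decode_session(cookie):
    payload, signature = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # tampered or forged cookie; reject it
    return json.loads(base64.urlsafe_b64decode(payload))
```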
If you use Phoenix channels, things get more complicated. You still want to be able to connect to any of the servers, but you will probably be sending messages to other users in real time, and they may be connected to other servers. Phoenix normally solves this problem with BEAM clustering, and that will be hard, or even impossible, on Heroku.
To sum up: it is not a question of small scale/big scale. It is a question of features. Scaling channels will require clustering, scaling plain old HTTP will not.
In the web2py examples there is a websocket example that uses Tornado, here:
gluon/contrib/websocket_messaging.py, and it requires another server to be started, namely Tornado. My question is: do I need another server, or should I have just one server handle both the websocket traffic and the normal HTTP requests?
Also, it seems Tornado is the server of choice for the second server; could that be something different?
I'm a bit of a newbie to websockets (and webapp development) so any comments/links that would help me better understand this would be appreciated.
Python WSGI-based frameworks such as web2py are typically served via threaded web servers. A typical HTTP request occupies one of the server's threads only briefly, just long enough to receive the incoming request and deliver the response, freeing the thread to serve another incoming request.
Websockets (and long polling), on the other hand, require a long-lived connection between the client (i.e., browser) and the web server. A websocket connection will therefore occupy a thread indefinitely, so you can only have as many connections as you have threads, thus limiting the application to a relatively small number of concurrent users.
In order to enable many simultaneous websocket connections, it is therefore best to serve websockets via a server that features non-blocking network I/O, such as Tornado. For more details, see http://www.tornadoweb.org/en/stable/guide/async.html.
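For a sense of what that looks like in practice, here is a minimal, hedged Tornado sketch (the handler, route, and port are placeholders; web2py's gluon/contrib/websocket_messaging.py wires things up differently). A single non-blocking IOLoop can hold many open websocket connections because it never blocks on any one of them:

```python
# Minimal Tornado websocket sketch: one non-blocking IOLoop thread can hold
# many simultaneous connections. Handler name, route, and port are placeholders.
import tornado.ioloop
import tornado.web
import tornado.websocket

class EchoHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print("client connected")

    def on_message(self, message):
        # Push a reply back on the long-lived connection without tying up a thread.
        self.write_message(f"you said: {message}")

    def on_close(self):
        print("client disconnected")

app = tornado.web.Application([(r"/ws", EchoHandler)])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```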
Another option is to use Gevent with monkey patching, which can be used in the context of a WSGI application as described here. Keep in mind, though, that any libraries you use that involve network I/O (such as database drivers) must be compatible with this approach (either via monkey patching or code explicitly designed for coroutines).
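For comparison, a rough sketch of the gevent route (the app and port below are illustrative only): monkey patching makes blocking network calls cooperative, so many slow or long-lived connections can share one process.

```python
# Rough sketch of serving a WSGI app with gevent. monkey.patch_all() makes
# blocking socket calls cooperative; the app and port below are placeholders.
from gevent import monkey
monkey.patch_all()  # must run before modules that perform network I/O are imported

from gevent.pywsgi import WSGIServer

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from a gevent-served WSGI app\n"]

if __name__ == "__main__":
    WSGIServer(("0.0.0.0", 8000), app).serve_forever()
```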
If realtime/server-push functionality is a major aspect of your application, and especially if you are new to web development, you might instead consider a framework built for this specific use case, such as Meteor.
I'm looking to prototype a web app that will use sockets to push a gentle stream of messages to mobile web app clients. I want to pick an architecture that will work for a large number of clients if/when it moves to production (so I don't have to change it later).
I'd like to start with Rails because it's familiar and has a strong structure from the get-go, meaning it's easier to prototype. I think Faye will provide what I need in terms of a pub/sub layer, but am I going to create a bottleneck by using Ruby with a high number of socket connections, or will Faye isolate/protect the Ruby server from that load, if you follow?
At the outset the load will not be significant, so it won't matter; I just don't want to be hobbled later on when there are a lot of socket connections and I wish I had used Node.js! Server-side JS would be fairly new to me, but I guess there are benefits in that the JS app can include the client side as well.
Advice appreciated.
You can take a look at https://github.com/faye/faye-redis-node.
This plugin provides a Redis-based backend for the Faye messaging server. It allows a single Faye service to be distributed across many front-end web servers by storing state and routing messages through a Redis database server.
Web application frameworks such as Sinatra (Ruby), Play (Scala), and Lift (Scala) produce a web server listening on a specific port.
I know there are reasons such as security, clustering, and, in some cases, performance that may lead me to put an Apache web server in front of my web application's own server.
Do you have any reasons for this from your experience?
Part of any web application is fully standardized, commoditized functionality. Mature web servers like nginx or Apache can do the following things in a way that is very likely more correct, more efficient, more stable, more secure, more familiar to sysadmins, and easier to configure than anything you could rewrite in your application server:
Serve static files such as HTML, images, CSS, JavaScript, fonts, etc.
Handle virtual hosting (multiple domains on a single IP address)
URL rewriting
Hostname rewriting/redirecting
TLS termination (thanks #emt14)
Compression (thanks #JacobusR)
A separate web server provides the ability to serve a "down for maintenance" page while your application server restarts or crashes
Reverse proxies can provide load balancing and fault tolerance for your application framework
Web servers have built-in and tested mechanisms for binding to privileged ports (below 1024) as root and then executing as a non-privileged user. Most web application frameworks do not do this by default.
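As an aside, here is a hedged sketch of that bind-then-drop-privileges pattern in Python (the port and the "nobody" user are placeholders; production servers do considerably more around this):

```python
# Sketch: bind to a privileged port as root, then drop to an unprivileged user
# before handling any traffic. Port 80 and the 'nobody' user are placeholders.
import os
import pwd
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 80))   # requires root (or CAP_NET_BIND_SERVICE on Linux)
sock.listen(128)

nobody = pwd.getpwnam("nobody")
os.setgid(nobody.pw_gid)     # drop the group first, then the user
os.setuid(nobody.pw_uid)     # from here on, the process is unprivileged

# ... hand `sock` to the worker loop and accept() connections as the unprivileged user
```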
Mature web servers are battle hardened and stable. By stable, I mean that they quite literally almost never crash. Your web application is almost certainly far less stable. This gives you the ability to at least serve a pretty error page to the user saying your application is down instead of the web browser just displaying a generic "could not connect" error.
Anecdotal case in point: nginx handles attack that would otherwise DoS node.js: http://blog.nodejs.org/2013/10/22/cve-2013-4450-http-server-pipeline-flood-dos/
And just in case you want the semi-official answer: at the Airbnb tech talk on January 30, 2013, around 40 minutes in, Isaac Schlueter addresses the question of whether Node is stable and secure enough to serve connections directly to the Internet. His answer is essentially "yes, it is fine." So you can do it, and you will probably be fine from a stability and security standpoint (assuming you are using cluster to handle unexpected termination of an app server process), but as detailed above, the reality of current operations is that almost everybody still runs Node behind a separate web server or reverse proxy/cache.
I would add:
SSL handling
for some servers, like Apache, lots of modules (e.g. NTLM/Kerberos authentication)
Web servers are much better than your application at some things, like serving static files.
Quite often the frameworks do everything you need, but sometimes adding a layer on top can give you seemingly free functionality like compression, security, session management, load balancing, etc. Still, adding a web server may also introduce security issues: for example, chances are your web server's security can be compromised more easily than Lift by itself. Also, some web frameworks are extremely scalable and may even be hampered by an ill-chosen web server.
In summary, if you require web-server-like functionality that is not provided by the framework, then a web server may be a very good option, but keep in mind that it's one more thing to configure properly and update regularly with security patches, etc.
If, for example, you just need encryption or compression, then you may find that adding the correct library or plug-in to your framework does just that (and only that).
With a proxying HTTP server in front, the framework doesn't need to hold the HTTP connection open while the computed content is delivered, and can move on to serving some other request. The proxy acts as a buffer.
It's a matter of not reinventing the wheel. Most frameworks will give you a development environment, but for production it's usually good practice to use a commercial or open-source server that can deal with all the issues that arise in production.
People building a framework concentrate on the framework, while people building a server are doing just the same with the server (perfecting it).
I have a small site on Heroku and am currently using Thin.
I've been vaguely aware of Unicorn but never felt like I had something that fit its "fast client" stipulation.
The README and this link suggest that we're talking about only using Unicorn on a LAN (or maybe LambdaRail), but it seems like lots of people are using it for typical sites accessed over normal broadband and maybe even mobile networks. Is this true? What gives?
Unicorn is typically used behind a web server/proxy like Nginx, which receives the HTTP connection from the actual client, serves static assets, and forwards dynamic requests to the backend server (Unicorn).
The web server now acts as a client to Unicorn, because Nginx (and in most cases Apache's mod_proxy) acts as a store-and-forward proxy: it will first buffer the full response (or at least as much as fits into its buffer) before sending it to the client. This nicely fits Unicorn's definition of a fast client. Unicorn hands the difficult task of caching and serving data to slow clients off to the web server, which has to do it anyway and can therefore probably do it much better.
This also suggests that you should probably not run Unicorn directly facing clients, unless your clients consume the data fast (e.g., on a LAN with non-congested clients and network).
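To make the store-and-forward idea concrete, here is a toy Python sketch (this is not how Nginx is implemented, just the shape of the buffering): the upstream app-server socket is drained quickly and freed, and only then are the bytes written out to however slow the real client happens to be.

```python
# Toy store-and-forward relay: read the whole upstream response fast, release
# the upstream (the "Unicorn" side), then write it out to the possibly slow client.
import socket

def relay(upstream: socket.socket, client: socket.socket) -> None:
    chunks = []
    while True:                        # fast read from the app server
        data = upstream.recv(65536)
        if not data:
            break
        chunks.append(data)
    upstream.close()                   # the app-server worker is free again

    for chunk in chunks:               # slow write to the real client
        client.sendall(chunk)
    client.close()
```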
We're using Unicorn on Heroku and having good results with it. What the Unicorn site doesn't distinguish is that there's a difference between Unicorn serving dynamic data and serving static assets. If you are offloading asset serving to a CDN, there's not much difference in Unicorn with or without Nginx in front. One caveat to this: raw Unicorn is vulnerable to an intentionally slow client, such as might be introduced in a DDoS or other hack attempt.