Is Unicorn useful for "typical" sites? - ruby

I have a small site on Heroku and am currently using Thin.
I've been vaguely aware of Unicorn but never felt like I had something that fit its "fast client" stipulation.
The README and this link suggest that we're talking about only using Unicorn on a LAN (or maybe LambdaRail), but it seems like lots of people are using it for typical sites accessed over normal broadband and maybe even mobile networks. Is this true? What gives?

Unicorn is typically used behind a webserver/proxy like Nginx, which receives the HTTP connection from the actual client, serves static assets, and forwards dynamic requests to the backend server (Unicorn).
The webserver now acts as the client to Unicorn, because Nginx (and, in most cases, Apache's mod_proxy) acts as a store-and-forward proxy: it first buffers the full response (or at least as much as fits into its buffer) before sending it to the client, which means it reads from Unicorn as fast as Unicorn can write. This nicely fits Unicorn's definition of a fast client. It hands the difficult task of buffering and serving data to slow clients off to the webserver, which has to do that anyway and thus can probably do it much better.
This also suggests that you should probably not run Unicorn directly facing clients, unless they consume data quickly (e.g., on a LAN with non-congested clients and network).
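In practice that means Unicorn listens on a Unix socket (or loopback port) that only the local webserver talks to. As a rough illustration, a minimal config/unicorn.rb might look like this (the socket path and worker count are made up for the example, not recommendations):

    # config/unicorn.rb -- minimal sketch for running behind a local nginx
    worker_processes 3                       # rough rule of thumb: one per CPU core
    listen "/tmp/unicorn.sock", backlog: 64  # Unix socket only the local nginx reaches
    timeout 30                               # reap workers stuck longer than 30s
    preload_app true                         # load the app once, then fork workers

Nginx proxies to that socket, and because it buffers both the request and the response, it is exactly the "fast client" Unicorn expects.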

We're using Unicorn on Heroku and having good results with it. What the Unicorn site doesn't distinguish is that there's a difference between Unicorn serving dynamic data and serving static assets. If you are offloading asset serving to a CDN, there's not much difference between Unicorn with and without nginx in front. One caveat to this: raw Unicorn is vulnerable to an intentionally slow client, such as might be introduced in a DDoS or other hack attempt.


Throughput capabilities of Heroku proxy server

How many requests can Heroku's "Vegur" http proxy handle for a simple "hello world" before hitting the limits (if any)?
Will setting up nginx on an EC2 micro instance, serving the same index.html, allow more throughput?
Does Heroku throttle requests per dyno?
Heroku Dynos are all small processes running on EC2 machines behind the scenes. Therefore, it will almost always be more performant to run identical code on an EC2 server directly as opposed to Heroku, because when you're using Heroku you're sharing a server with other developers.
With that said, Heroku isn't really about having the fastest server -- it's about simplifying your entire development and deployment stack as much as possible to:
Avoid downtime.
Force you to architect code properly.
Make it easier to scale your project as it grows.
etc.

web2py - do I need another server for a real time application?

In the web2py examples there is a websocket example which uses tornado here:
gluon/contrib/websocket_messaging.py, and this requires another server to be started, namely Tornado. My question is: do I need another server, or can I have just one server handle both the websocket stuff and the normal HTTP requests?
Also, it seems Tornado is the server of choice for the second server; could it be something different?
I'm a bit of a newbie to websockets (and webapp development) so any comments/links that would help me better understand this would be appreciated.
Python WSGI based frameworks such as web2py are typically served via threaded web servers. A typical HTTP request occupies one of the server threads only very briefly in order to receive the incoming request and deliver the response, then freeing the thread to serve another incoming request.
Websockets (and long polling), on the other hand, require a long-lived connection between the client (i.e., browser) and the web server. A websocket connection will therefore occupy a thread indefinitely, so you can only have as many connections as you have threads, thus limiting the application to a relatively small number of concurrent users.
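To make that limit concrete, here is a toy sketch of a thread-pool server (in Ruby rather than web2py's Python, but the arithmetic is the same for any threaded server; the pool size and port are made up):

    require "socket"

    POOL_SIZE = 8  # hypothetical worker-thread count

    server = TCPServer.new(9292)
    workers = POOL_SIZE.times.map do
      Thread.new do
        loop do
          client = server.accept  # accept is safe to call from several threads
          # An ordinary HTTP request would be answered in milliseconds,
          # freeing this thread for the next client. A websocket never
          # "finishes", so the thread stays pinned to the connection.
          client.each_line { |line| client.write(line) }  # stand-in echo loop
          client.close
        end
      end
    end
    workers.each(&:join)  # with all threads pinned, new clients just wait

With eight threads all holding open websockets, the ninth concurrent client sits in the listen backlog until someone disconnects.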
In order to enable many simultaneous websocket connections, it is therefore best to serve websockets via a server that features non-blocking network I/O, such as Tornado. For more details, see http://www.tornadoweb.org/en/stable/guide/async.html.
Another option is to use Gevent with monkey patching, which can be used in the context of a WSGI application as described here. Keep in mind, though, that any libraries you use that involve network I/O (such as database drivers) must be compatible with this approach (either via monkey patching or code explicitly designed for coroutines).
If realtime/server-push functionality is a major aspect of your application, and especially if you are new to web development, you might instead consider a framework built for this specific use case, such as Meteor.

Where should I install varnish?

So my web app is hosted on Amazon using OpsWorks.
Currently I have 1 dedicated instance for PostgreSQL, 1 instance as my webserver, and another dedicated instance running Redis for caching purposes.
I would like to improve performance by adding Varnish. Given my architecture, where should I install Varnish? Also take into account that I may soon outgrow this solution and be using more webservers behind a load balancer.
Any help would be appreciated!
Bye
Varnish will always be quicker if you run it with memory storage, so the instance with the most free memory would be a good pick. Even if you don't have enough to spare for the storage, it also uses a fair amount of memory for connection handling once you get a bit more traffic.
Further along the road, when you want a load balancer, a good start would be a dedicated server for Varnish, which can also do load balancing just fine. It's not as efficient as a lightweight dedicated load balancer, but until you need multiple Varnish servers (way down the road) there is generally no point in putting anything else in front of it.
You should use Varnish in front of your Apache web server. It's fine for it to reside on the web server itself, though; just point the load balancer at Varnish.
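Wherever Varnish ends up, it can only cache what your app marks as cacheable, since it keys off the response headers. A minimal Rack sketch of such an app (the max-age value is illustrative):

    # config.ru -- Varnish (like any shared cache) decides cacheability
    # largely from Cache-Control; the values here are illustrative.
    app = lambda do |env|
      [200,
       { "Content-Type"  => "text/html",
         "Cache-Control" => "public, max-age=120" },  # shared caches may keep this 2 min
       ["<h1>cacheable page</h1>"]]
    end
    run app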

Why should one use a http server in front of a framework web server?

Web application frameworks such as Sinatra (Ruby), Play (Scala), and Lift (Scala) ship with a web server listening on a specific port.
I know there are some reasons, like security, clustering and, in some cases, performance, that may lead me to put an Apache web server in front of my web application's own.
Do you have any reasons for this from your experience?
Part of any web application is fully standardized and commoditized functionality. Mature web servers like nginx or Apache can do the following things in a way that is very likely more correct, more efficient, more stable, more secure, more familiar to sysadmins, and easier to configure than anything you could rewrite in your application server.
Serve static files such as HTML, images, CSS, javascript, fonts, etc
Handle virtual hosting (multiple domains on a single IP address)
URL rewriting
hostname rewriting/redirecting
TLS termination (thanks @emt14)
compression (thanks @JacobusR)
A separate web server provides the ability to serve a "down for maintenance" page while your application server restarts or crashes
Reverse proxies can provide load balancing and fault tolerance for your application framework
Web servers have built-in and tested mechanisms for binding to privileged ports (below 1024) as root and then executing as a non-privileged user. Most web application frameworks do not do this by default (see the sketch just after this list).
Mature web servers are battle hardened and stable. By stable, I mean that they quite literally almost never crash. Your web application is almost certainly far less stable. This gives you the ability to at least serve a pretty error page to the user saying your application is down instead of the web browser just displaying a generic "could not connect" error.
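To illustrate the privileged-port point, the bind-then-drop dance looks roughly like this (a Ruby sketch; the "nobody" account is just an example):

    require "socket"
    require "etc"

    # Binding below port 1024 requires root...
    server = TCPServer.new("0.0.0.0", 80)

    # ...so drop to an unprivileged user immediately afterwards.
    user = Etc.getpwnam("nobody")            # illustrative account name
    Process::GID.change_privilege(user.gid)  # drop the group first,
    Process::UID.change_privilege(user.uid)  # then the user (order matters)

    loop do
      client = server.accept
      client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
      client.close
    end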
Anecdotal case in point: nginx handles an attack that would otherwise DoS node.js: http://blog.nodejs.org/2013/10/22/cve-2013-4450-http-server-pipeline-flood-dos/
And just in case you want the semi-official answer from Isaac Schlueter: at the Airbnb tech talk on January 30, 2013 (around 40 minutes in), he addresses the question of whether node is stable and secure enough to serve connections directly to the Internet. His answer is essentially "yes, it's fine." So you can do it, and you will probably be fine from a stability and security standpoint (assuming you are using cluster to handle unexpected termination of an app server process), but as detailed above, the reality of current operations is that almost everybody still runs node behind a separate web server or reverse proxy/cache.
I would add:
SSL handling
for some servers, like Apache, lots of modules (e.g. NTLM/Kerberos authentication)
Web servers are much better than your application at some things, like serving static files.
Quite often the frameworks do everything you need, but sometimes adding a layer on top can give you seemingly free functionality like compression, security, session management, load balancing, etc. Still, adding a web server may also introduce security issues: chances are your web server's security can be compromised more easily than Lift by itself. Also, some web frameworks are extremely scalable and may even be hampered by an ill-chosen web server.
In summary, if you require web-server-like functionality that is not provided by the framework, then a web server may be a very good option, but keep in mind that it's one more thing to configure properly and update regularly with security patches, etc.
If, for example, you just need encryption or compression, you may find that adding the correct library or plug-in to your framework does just that (and only that).
With a proxy HTTP server in front, the framework doesn't need to keep an HTTP connection open while the computed content trickles out to the client; it can hand off the response and start serving some other request. The proxy acts as a buffer.
It's an issue of reinventing the wheel. Most frameworks will give you a development environment, but for production it's usually good practice to use a commercial/open-source project that is able to deal with all the issues that arise in production.
The people building a framework have the framework to concentrate on, whilst the people building a server are doing just the same (perfecting the server).

Disadvantages to rack-cache vs. Varnish in Heroku cedar stack?

The previous 2 Heroku application stacks came with a Varnish layer which automatically reverse-proxy-cached content based on http headers.
The new Heroku cedar stack doesn't have this Varnish layer. Heroku suggests using rack-cache and memcache instead.
Does this have disadvantages compared to the previous stacks with the varnish layer? With rack-cache, aren't there fewer servers serving the caching layer, and in a less optimized way?
http://devcenter.heroku.com/articles/http-routing
http://devcenter.heroku.com/articles/http-caching
http://devcenter.heroku.com/articles/cedar
There is absolutely no question that removing the varnish layer is a huge downgrade in cache performance -- both latency and throughput -- from Heroku's bamboo to cedar stack. For one, your application requests are competing with, and may be queued behind, cache hits for dyno time.
The disadvantages, to name a few:
interpreted Ruby (vs. compiled C)
application level (vs. proxy level)
memcached based (vs. in-process memory based)
blocking I/O based (vs. non-blocking I/O based)
Any suggestion that the two caching strategies could compete at scale is ridiculous. You won't see much difference on a small scale. But if you have hundreds or even thousands of cache hits per second, Varnish will not substantially degrade, while the cedar stack would require many dynos just to serve static assets in a performant fashion. (I'm seeing ~5-10ms of dyno processing time for cache hits on cedar.)
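For context, the rack-cache arrangement being compared looks roughly like this (a sketch; the memcached endpoints are illustrative):

    # config.ru -- in-dyno HTTP caching with rack-cache backed by memcached.
    # Every request, cache hit or miss, still passes through the Ruby process.
    require "rack/cache"
    require "dalli"

    use Rack::Cache,
      metastore:   "memcached://localhost:11211/meta",  # response metadata
      entitystore: "memcached://localhost:11211/body"   # response bodies

    app = lambda do |env|
      [200,
       { "Content-Type"  => "text/plain",
         "Cache-Control" => "public, max-age=300" },
       ["expensive to generate"]]
    end
    run app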
Cedar was built the way it was built not to improve performance, but to introduce new levels of flexibility over your application. While it allows you to do non-blocking I/O operations like long polling, and gives you fine-grained control over static asset caching parameters, it's clear that Heroku would like you to bring your own caching/bandwidth if your goal is internet scale.
I don't know how rack-cache performance compares to Varnish in terms of raw requests. The best bet would be to create a simple app, benchmark it on both stacks, and compare.
It's worth bearing in mind that because the heroku.com stack was Nginx -> Varnish -> App, as long as you've set the correct HTTP headers your app layer will be doing much less work. Since most of the delivery is handled by Varnish, and Varnish is pretty fast, this frees up your dyno for actual application requests.
With the herokuapp.com stack hitting your app earlier, it's down to you and your app to handle caching efficiently. This might mean you choose rack-cache to cache full-page output, or memcached to deal with database requests, or both.
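The database-request flavor of that looks roughly like this (a sketch; the model, key, and TTL are illustrative):

    # Low-level caching of a query result with Rails.cache, e.g. backed
    # by memcached via the dalli gem. Page is a hypothetical model.
    def popular_pages
      Rails.cache.fetch("pages/popular", expires_in: 10.minutes) do
        Page.order(views: :desc).limit(10).to_a
      end
    end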
In the end it depends on what your app is doing. If it's serving the same static content to a lot of users, then you'd benefit from Varnish; but if you've got an application where users log in and interact with content, then you won't see as much benefit from Varnish, so caching partial content or raw database queries might be more efficient. If you install the New Relic addon you'll be able to take a peek under the hood and see what's slowing your app down.
There are also 3rd party options for hosted Varnish. I wrote a quick post about setting it up with Fastly/Varnish.
Fastly is a hosted Varnish solution that will sit in front of your Heroku app and serve cached responses.
Updated link:
https://medium.com/@harlow/scale-rails-with-varnish-http-caching-layer-6c963ad144c6
I've seen really great response times. If you can get a good cache hit rate with Varnish, you should be able to scale back a good percentage of your dynos.
A more modern answer, with 20/20 hindsight:
To get the caching performance approaching that of bamboo on cedar-14, this is the usual pattern:
Configure Rails to generate appropriate caching headers (e.g., ETag, Cache-Control, Last-Modified)
Stick Fastly (Varnish as a service) or CloudFlare in front of your app. If the app's headers are set correctly, profile-like pages that need to be fresh won't be cached, while static assets will be.
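The header-generation step of that pattern looks roughly like this in a controller (a sketch; the model and max-age are illustrative):

    class PagesController < ApplicationController
      def show
        @page = Page.find(params[:id])      # hypothetical model
        expires_in 5.minutes, public: true  # Cache-Control: public, max-age=300
        fresh_when @page                    # ETag/Last-Modified for conditional GETs
      end
    end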
I would recommend redis-rails as a rack-cache backend if you're doing caching at multiple layers (front end (CloudFlare/Fastly), page, action, fragment, etc.).
