Apache HTTP vs Ruby Rack Speed Comparison for a Webserver - ruby

I was planning on hosting some static webpages and I was interested in using Ruby Rack to spice things up. I was wondering if anyone knew the speed comparison and how many requests could be handled per second for the two options. Thanks!

Ruby Rack is rarely used on its own except for testing; in production it almost always sits behind some kind of server front-end, and that front-end needs a layer to manage the Rack processes.
Passenger is a popular choice and works with both Apache httpd and nginx. There are other more exotic arrangements for hosting Rack-based applications involving HAProxy or a hardware appliance.
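For reference, a Rack application is just an object that responds to call. Here's a minimal config.ru sketch of the kind of thing you'd put behind Passenger or another front-end; the class name and response body are made up for illustration:

    # config.ru - minimal Rack application; run with `rackup`, or point
    # Passenger/another front-end at this file. Response body is illustrative.
    class HelloApp
      def call(env)
        # env is the Rack environment hash; return [status, headers, body]
        # ("content-type" is Rack 3 style; older Rack used "Content-Type")
        [200, { "content-type" => "text/plain" }, ["Hello from Rack\n"]]
      end
    end

    run HelloApp.new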
To determine how many "requests per second" your stack can handle, you'll need to benchmark. Each application has an entirely different performance profile, and additional tuning can be applied at every layer of your stack, from the hardware, operating system and database up through the choice of Ruby interpreter, web front-end and load balancer.
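As a rough starting point for that benchmarking, you can get a crude requests-per-second figure with nothing but the Ruby standard library; this is only a sketch, and the URL and request counts are placeholders (purpose-built tools like ab, httperf or wrk are better):

    # Crude throughput check using only net/http; URL and counts are placeholders.
    require "net/http"
    require "uri"

    uri        = URI("http://localhost:8080/")  # placeholder target
    threads    = 10                             # concurrent workers
    per_thread = 100                            # requests per worker

    start = Time.now
    workers = threads.times.map do
      Thread.new { per_thread.times { Net::HTTP.get_response(uri) } }
    end
    workers.each(&:join)
    elapsed = Time.now - start

    total = threads * per_thread
    puts "#{total} requests in #{elapsed.round(2)}s (#{(total / elapsed).round} req/s)"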
Don't forget that most extremely high-performance apps "cheat" enormously, using caching to produce the impression of speed while deferring as many time-consuming operations as possible to a background job queue.
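To illustrate the "defer slow work" idea, here is a toy in-process job queue built from the standard library; in a real app you'd reach for Sidekiq, Resque or delayed_job, and the welcome-email job below is invented for the example:

    # Toy background queue (Thread + Queue). Real apps use Sidekiq/Resque/etc.
    JOBS = Queue.new

    # One long-lived worker drains the queue outside the request cycle.
    worker = Thread.new do
      while (job = JOBS.pop)
        job.call
      end
    end

    # Inside a request handler: respond immediately, defer the slow part.
    def handle_signup(email)
      # ...quickly write the user record...
      JOBS << -> { sleep 2; puts "sent welcome mail to #{email}" }  # deferred
      "202 Accepted"
    end

    handle_signup("user@example.com")
    JOBS << nil   # shut down the toy worker
    worker.join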
Remember, it's usually more about the impression of speed than actual speed. Consistent page load times of ~20ms are a lot better than ~5ms with intermittent 5000ms spikes, even if the average times are the same. People notice inconsistency more than raw performance.

Related

How can I test how many connections/active users a Heroku dyno supports for my app?

Is there a tool or a way to learn how many simultaneous connections my Heroku app can manage (with one dyno) before it starts giving slow response times or timeouts? I've read about Blitz and New Relic but I'm unsure of how to use them!
There's no quick and easy way to understand how your app scales. But the process usually goes along these lines:
Launch your target environment (a single dyno in your case)
Set up monitoring on all the metrics you care about. Usually this will include CPU load, memory usage, I/O operations, database connections, etc., as well as any relevant application-level metrics. For Heroku, I recommend using Librato for a complete monitoring set.
Run load tests that resemble typical usage of your application; this means not just simple reads of static pages, but also dynamic operations such as user registrations, complex API calls, and anything else you think is relevant. The tools used here really depend on what your app does and how it is built (a minimal sketch follows below).
See where you hit your limits. Assume nothing; you might be bound by any of the resources you are using.
Resolve bottlenecks, rinse, repeat.
This will give you a rough idea of where your application will require further resources in order to scale.
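For step 3, here is a very small load-test sketch; the host, endpoints and request counts are placeholders, and purpose-built tools (Blitz, ab, JMeter) will do a far more thorough job:

    # Hit a mix of endpoints concurrently and report latency percentiles.
    require "net/http"
    require "uri"

    BASE      = "http://localhost:5000"          # placeholder app URL
    ENDPOINTS = ["/", "/signup", "/api/items"]   # mix of static and dynamic paths
    WORKERS   = 5
    REQUESTS  = 50                               # per worker

    latencies = Queue.new
    WORKERS.times.map do
      Thread.new do
        REQUESTS.times do
          uri = URI(BASE + ENDPOINTS.sample)
          t0  = Time.now
          Net::HTTP.get_response(uri)
          latencies << (Time.now - t0)
        end
      end
    end.each(&:join)

    samples = []
    samples << latencies.pop until latencies.empty?
    samples.sort!
    puts "p50: #{(samples[samples.size / 2] * 1000).round}ms"
    puts "p95: #{(samples[(samples.size * 0.95).to_i] * 1000).round}ms"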

Reasonable web server performance

I'm currently running some performance tests to see how many requests per second a newly developed web back-end can handle.
However, I have absolutely no idea how many requests per second I should expect the web server to handle (10? 100? 1000?).
I'm currently testing on a modest 1GB, 1-core virtual machine. What would be a reasonable minimum number of requests/second such a server should be able to handle?
I think the right question to ask yourself is: what performance goals do I want my application to meet when X requests are being handled?
Remember that a good performance test is always oriented toward achieving realistic and well-defined performance goals.
These goals are usually set by the performance team and the customers/stakeholders.
There are many variables to this question:
What web server software are you using (Apache, nginx, IIS, lighttpd, etc)? This affects the lookup latency and how many simultaneous requests can be handled.
What language is your system logic written in (PHP, Ruby, C, etc)? Affects memory usage and base speed of execution.
Does your system rely on any external services (databases, remote services, message queues, etc)? I/O latency.
How is your server connected to the outside world (dedicated line, dial-up modem (!), etc.)? Network latency.
One way to approach this is to first discover how many requests your web server can serve up in optimal conditions, e.g. serving a single static HTML page of 1 byte with minimal HTTP headers. This will test the web server's fundamental receive-retrieve-serve cycle and give you a good idea of its maximum throughput (handled requests per second).
Once you have this figure, serve up your web application and benchmark again. The difference in requests per second gives you a general idea of how optimal (or sub-optimal) your app is.
Even the most modest of hardware can deliver thousands of responses given the right conditions.

Is Perl the fastest way to write a high performance page?

I was inspired by Slashdot; I heard that it uses very limited servers to support a lot of users with fast response times. There is also a project named Slashcode, though I'm not sure if Slashdot uses its source code.
I am wondering if Perl is the best language for writing a high performance web page. Am I right that using Apache or IIS adds a lot of overhead?
Any idea, books, papers, tutorials?
I'm going to assume that by "high performance" you mean both the real time taken to produce a page and how many pages it can serve concurrently.
The programming language isn't so important as your servers and algorithms. You may want to look into the C10k problem, which spurred a series of new technologies and refinements of existing techniques aimed at allowing a single web server to handle more than 10,000 concurrent connections. Things like the nginx and lighttpd web servers and the Varnish cache came out of that effort.
Big wins come from using a very light, very fast, very modular web server (Apache and IIS ain't it) with a very light, very fast cache in front of it to avoid having to process the same thing twice. For a high concurrency server, even caching for a few seconds can save you hundreds or thousands of processes. By chopping up a static page into a series of AJAX requests you can cache the more static bits and pieces independently of the bits that change frequently.
Instead of using mod_blah, which embeds your program into a web server, use FastCGI or something similar that puts your programs into their own little application servers. This allows them to run independently of the web server, possibly on remote machines and with load balancing. This lets you easily scale your processing power.
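To make the "own little application server" idea concrete, here is a bare-bones sketch (in Ruby, since that's elsewhere in this thread; the Perl equivalent would be an FCGI or PSGI/Plack app) of a long-lived worker process that a front-end web server could proxy to. The port and response are invented, and a real setup would speak FastCGI or HTTP through a proper framework:

    # One long-lived process handling many requests behind a proxying front-end.
    require "socket"

    server = TCPServer.new("127.0.0.1", 9000)   # front-end proxies to this port
    loop do
      client  = server.accept
      request = client.gets                     # e.g. "GET /path HTTP/1.1"
      body    = "handled #{request&.split&.at(1)} at #{Time.now}\n"
      client.print "HTTP/1.1 200 OK\r\n" \
                   "Content-Type: text/plain\r\n" \
                   "Content-Length: #{body.bytesize}\r\n" \
                   "Connection: close\r\n\r\n"
      client.print body
      client.close
    end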
Eventually you're going to micro-optimize the really important bits of your application code to the point where the language matters, but this lets you focus on those really important bits rather than having to build the whole project around raw performance.
Regardless of how fast your code is, at some point the bottleneck will stop being your code, and start being the web server itself.
As long as you're not using the CGI interface[1] to talk to the web server, the language isn't going to have a noticeable impact on performance in 99% of cases. The exceptions are those in which you're doing heavy back-end processing rather than simply grabbing something out of a database, lightly massaging it, and sending it off to the user - and, if you are doing that kind of thing, you're likely better off doing it asynchronously if possible and stuffing the results into a database to be lightly massaged and viewed later.
The reason is, quite simply, that network connection and data transfer times will be so much longer than your program's execution time that it's not even funny. If it's taking 2 seconds to establish a network connection to the server and do the data transmission in each direction, nobody is going to care whether the processing on the server adds 0.1s or 0.2s on top of that 2s of network activity.
[1] Note that I am talking here about the vanilla CGI "start up a new process to service each incoming request" model, not the Perl CGI module (CGI.pm/use CGI). There are ways to use CGI while also making use of a long-lived process which handles multiple requests over its lifetime.
Architecture and system design are more important than language choice for a high traffic app.
But selecting a language is not the first thing you should do, unless you are planning to write everything from the ground up.
You should be selecting a toolset.
If you want to have something as soon as possible, look at existing web applications. What meets your needs? How customizable is it? Does it meet your performance/scalability requirements? If so, the language you use will be the language your app uses.
If you can't find a good match in existing apps, look at different frameworks: Catalyst, Rails, Squatting, Camping, Jifty, Django. There's a nice list of them on Wikipedia.
You should be able to find a framework that will do the job; in fact, you'll probably find many. Pick some contenders and choose one. The language you use will be the language your framework uses.
There's really no such thing as a "high performance page". That's like asking what the fastest car is (and if you watch enough Top Gear, you know that's not a simple answer). You have to think about what you actually want to do (i.e. the particular task), what you have to do to make that happen, and which tools would work best for that.
Are you going to have a lot of people doing a lot of small things, or fewer people doing really big things? Is it all going to happen at once (i.e. spikes), or is it going to be constant demand? Are you sending back small chunks of data or serving up really large files?
Suppose that every portion were as fast as possible. It's a fantasy for sure, but consider it anyway. Now that everything is as fast as possible, rank every part by its relative speed. What's the slowest part? Is it disk access? Network IO? Socket availability?
If you aren't at the point where you're already thinking about this, the language probably isn't that important beyond your skill with it.
There are a lot of books on web performance out there. :)
This post on serverfault suggests that you could write an extension module for nginx to serve dynamic content.
Such modules need to be compiled to native machine code, so they are most likely faster than running Perl.
That said, I don't believe Perl itself would be faster than other common choices such as PHP, Python, Ruby, Java, or C#.

How many connections/how much bandwidth can Apache handle?

This is a request for pointers to good documentation/good articles. I'm looking for information on how many connections an Apache server can reasonably handle, and potentially how to load balance between multiple servers. I've done Google searches, but it's hard for a beginner to judge which docs are good.
Apache 1.3 had some nasty scalability limitations, but later versions are designed to scale with the hardware and operating system, making them the bottleneck rather than the web server itself. As always, though, it comes down to how you configure and tune it if you want uber performance. Each situation has its own demands, and they're documented here:
http://httpd.apache.org/docs/2.2/misc/perf-tuning.html
The above assumes you're serving static content, which is where Apache excels. If you run webapps behind it, that's your bottleneck, not Apache.
Unfortunately you'll be disappointed.
Apache's ability to handle connections (and indeed any other web server's) is limited by what the web application sitting on top of it is doing. If you're serving static pages, you will be able to serve a lot of requests with very little hardware.
Depending on the IO workload (Apache cannot work faster than the IO subsystem; install enough RAM to cache your entire content, if you can), you will be able to fill up a gigabit network on any reasonably specced modern box.
Once you've filled a gigabit network, you'll have other things to worry about.
But the real reason you need load balancers is that your application slows down Apache and uses up the box's resources. Your application will not be infinitely fast, nor infinitely scalable. You'll need to address those issues.
As the previous answers have pointed out, it is generally not Apache that becomes the bottleneck; it is usually the application server (PHP, Mongrel, etc.). However, if you are only serving static content then you will want to do some benchmarking to see how fast it can go. Of course, you are unlikely to pin down the exact number Apache will be able to serve, since a lot depends on how you configure it (e.g. disabling persistent connections) and the specs of the server. However, to get a ballpark estimate you can use this benchmark as a reference, since it is run on 1-8 cores (using one or two servers), so you should be able to find something reasonably comparable to the hardware you are considering.
Of course in order to get the most accurate results you will want to test it yourself using a load generator like ab or httperf.

Scalability and Performance of Web Applications, Approaches?

What various methods and technologies have you used to successfully address scalability and performance concerns of a website? I am an ASP.NET web developer exploring .NET remoting with WCF, along with SQL clustering, and I am curious as to what other approaches exist (such as the 'cloud'). In which cases would you apply the various approaches (for example, method A for roughly X 'active' users)?
Here is an example of what I mean, a MySpace case study: http://highscalability.com/myspace-architecture
This is a very broad question, which makes it difficult to answer, but I'll try to provide a few general suggestions.
1 - Unless you are doing some things seriously wrong then you'll likely not need to worry about perf or scale until you hit a significant amount of traffic (over 1 million page views a month).
2 - Your biggest performance problems initially are likely to be the page load times from other countries. Try the Gomez Instance Site Test to see the page load times from around the world, and use YSlow as a guide for optimizing.
3 - When you do start hitting performance problems it will first most likely be due to the database work. Use the SQL Server Profiler to examine your SQL traffic looking for long running queries to try optimizing, and also use dm_db_missing_index_details to look for indexes you should add.
4 - If your web servers start becoming the performance bottleneck, use a profiler (such as the ANTS Profiler) to look for ways to optimize your web page code.
5 - If your web servers are well optimized and still running too hot, look for more caching opportunities, but you're probably going to need to simply add more web servers.
6 - If your database is well optimized and still running too hot, then look at adding a distributed caching system. This probably won't happen until you're over 10 million page views a month.
7 - If your database is starting to get overwhelmed even with distributed caching, then look at a sharding architecture. This probably won't happen until you're over 100 million page views a month.
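For point 7, the core idea of sharding is simply routing each key to one of several databases. A tiny illustration (in Ruby for brevity; the shard hostnames and count are made up, and real sharding also has to handle re-sharding and cross-shard queries):

    # Hash-based sharding: each user id always maps to the same database.
    require "digest"

    SHARDS = [
      "db-shard-0.example.internal",
      "db-shard-1.example.internal",
      "db-shard-2.example.internal",
    ]

    def shard_for(user_id)
      index = Digest::MD5.hexdigest(user_id.to_s).to_i(16) % SHARDS.size
      SHARDS[index]
    end

    puts shard_for(42)        # same user id, same shard, every time
    puts shard_for("alice")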
I've worked on a few sites that get millions of hits per month. Here are some basics:
Cache, cache, cache. Caching is one of the simplest and most effective ways to reduce load on your webserver and database. Cache page content, queries, expensive computation, anything that is I/O bound. Memcache is dead simple and effective (a small sketch follows after these points).
Use multiple servers once you are maxed out. You can have multiple web servers and multiple database servers (with replication).
Reduce the overall number of requests to your webservers. This entails caching JS, CSS and images using expires headers. You can also move your static content to a CDN, which will speed up your users' experience.
Measure & benchmark. Run Nagios on your production machines and load test on your dev/qa server. You need to know when your server will catch on fire so you can prevent it.
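As a concrete example of the caching advice above, here is a cache-aside sketch using the Dalli memcached client; the host, key and the "expensive query" are placeholders, and the same pattern applies to any cache:

    # Cache-aside with memcached via the Dalli gem (gem install dalli).
    require "dalli"

    CACHE = Dalli::Client.new("localhost:11211")

    def popular_articles
      key    = "popular_articles:v1"
      cached = CACHE.get(key)
      return cached if cached            # hit: skip the expensive work

      result = expensive_database_query  # placeholder for the slow part
      CACHE.set(key, result, 300)        # miss: store for 5 minutes
      result
    end

    def expensive_database_query
      sleep 1                            # stands in for a heavy SQL query
      ["article 1", "article 2"]
    end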
I'd recommend reading Building Scalable Websites, it was written by one of the Flickr engineers and is a great reference.
Check out my blog post about scalability too; it has a lot of links to presentations about scaling with multiple languages and platforms:
http://www.ryandoherty.net/2008/07/13/unicorns-and-scalability/
There is Velocity from MS, Memcached now has a port to .NET, and there is also indeXus.Net.
