Reasonable web server performance

I'm currently running some performance tests to see how many requests per second a newly developed web back-end can handle.
However, I have absolutely no idea how many requests per second I should expect the web server to handle (10? 100? 1000?).
I'm currently testing on a modest 1 GB RAM, 1-core virtual machine. What would be a reasonable minimum number of requests/second such a server should be able to handle?

I think the right question to ask yourself is: what performance goals do I want my application to meet when it is handling X requests?
Remember that a good performance test is always oriented toward achieving realistic, well-defined performance goals.
These goals are usually set by the performance team and the customers/stakeholders.

Many variables affect the answer to this question:
What web server software are you using (Apache, nginx, IIS, lighttpd, etc)? This affects the lookup latency and how many simultaneous requests can be handled.
What language is your system logic written in (PHP, Ruby, C, etc)? Affects memory usage and base speed of execution.
Does your system rely on any external services (databases, remote services, message queues, etc)? I/O latency.
How is your server connected to the outside world (dedicated line, dial-up modem (!), etc.)? Network latency.
One way to approach this is to first discover how many requests your web server can serve up under optimal conditions, e.g. serving a single static HTML page of 1 byte with minimal HTTP headers. This tests the web server's fundamental receive-retrieve-serve cycle and gives you a good idea of its maximum throughput (handled requests per second).
Once you have this figure, serve up your web application and benchmark again. The difference in requests per second gives you a general idea of how optimal (or sub-optimal) your app is.
Even the most modest of hardware can deliver thousands of responses given the right conditions.
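For a rough feel of that baseline, here is a minimal sketch using only the Python standard library; the URL, worker count, and request count are placeholders for your own setup, and a dedicated benchmarking tool such as ab or httperf will give you more trustworthy numbers.

    # Minimal throughput sketch: hit one URL with N concurrent workers and
    # report completed requests per second. URL, WORKERS, and REQUESTS are
    # placeholders to adjust for your own server.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost/static/tiny.html"  # hypothetical 1-byte static page
    WORKERS = 50
    REQUESTS = 5000

    def fetch(_):
        # Read and discard the body so the full receive-retrieve-serve cycle is timed.
        with urllib.request.urlopen(URL) as resp:
            resp.read()

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(fetch, range(REQUESTS)))
    elapsed = time.perf_counter() - start

    print(f"{REQUESTS / elapsed:.1f} requests/second over {elapsed:.1f}s")

Run it once against the static page and once against your application endpoint; the gap between the two figures is roughly the cost of your application code.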

Related

Why is it recommended to store images on a remote server?

Sorry for such a basic question, but I cannot find any article on the web covering the pros and cons of this. My guess is that it's about asynchronous uploading and downloading, but that's just a guess. Is there detailed information on this somewhere?
It's mostly about specialization, data locality, and concurrency.
Servers that specialize in serving static content typically do so much faster than dynamic web servers (they are optimized for that specific use case).
You also have the advantage of being able to store your content in many zones to achieve better performance (the content is physically closer to the person requesting it), whereas web applications typically should sit near their other dependencies, such as databases.
Lastly, browsers (for HTTP/1.x at least) only allow a fixed number of connections per host, so if your images and API calls are served from separate hosts, one cannot crowd out the other in terms of request scheduling.
There are a lot of other reasons I'm sure, but these are just off the top of my head.

Parameters that affect performance of web app on different clients

I have built a web application, in which I display a map and I use both WMS and WFS requests in order to show network (lines) and points of interest on the map. The application has several filters in order to send queries to database (such as date filters etc.).
The app runs on a remote server. The speed when accessing the app from my browser (Firefox or Chrome) is satisfactory; everything runs quite smoothly.
My issue is that I get complaints about speed and performance. So my question is: what does performance depend on, and how can it be that my experience is great while other users' is not?
One hypothesis is that the other clients' computers are too weak (which is not the case).
Another is that the server doesn't have enough resources (which is also not the case).
What are other parameters that affect the performance of a web app?
Connection speed is a very important factor, and so is response time (mostly determined by the distance between client and server).

How do you achieve acceptable performance metrics for your web application?

What analysis do you currently perform to arrive at acceptable performance metrics, such as page weight, response time, etc.? And what values for those metrics are currently considered acceptable?
This is performance related, so 'it depends' :)
Do you have existing metrics (an existing application) to compare against? Do you have users who are complaining, and can you figure out why?
The other major factor will depend on what network sits between the application and the users. On a LAN, page weight probably doesn't matter. On a very slow WAN, page size (especially with respect to TCP windowing) is going to dwarf the impact of server time.
As far as analysis:
1. Server response time, measured by a load test tool on the same network as the app
2. Client response time, as measured by a browser/client on either a real or simulated network
The workload for 1) follows the 80/20 rule in terms of transaction mix. For 2), I look at some subset of pages for a browser app and run empty cache and full cache cases to simulate new vs returning users.
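To illustrate point 1, here is a small sketch of measuring server response times for a weighted transaction mix; it is not a substitute for a proper load-test tool, and the URLs, weights, and sample count are hypothetical.

    # Sketch of a weighted-mix latency measurement: 80% of requests go to the
    # hot transactions, 20% to the rest, and we report mean and 95th-percentile
    # response times. URLs, weights, and sample count are hypothetical.
    import random
    import statistics
    import time
    import urllib.request

    MIX = [
        ("http://app.example.com/search", 0.8),   # the hot pages carrying most of the traffic
        ("http://app.example.com/profile", 0.2),
    ]
    SAMPLES = 500

    def timed_get(url):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start

    urls, weights = zip(*MIX)
    latencies = sorted(timed_get(random.choices(urls, weights)[0]) for _ in range(SAMPLES))

    print(f"mean: {statistics.mean(latencies) * 1000:.0f} ms")
    print(f"p95:  {latencies[int(0.95 * len(latencies))] * 1000:.0f} ms")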
Use webpagetest.org to get waterfall charts.
Do RUM (Real User Monitoring) using the Google Analytics snippet with the Site Speed JavaScript enabled.
These are the bare minimum.

Can you use proxies to do load/stress testing on a server, with the proxy serving as a sort of mirror?

Suppose I want to test a server's and its web application's ability to handle many simultaneous connections well and show decent latency.
So ideally I would want a thousand machines to bombard it with usage requests, but that's not practicable. So instead, can I just make a testing script with a thousand threads to run on the same server and have them perform the testing, connecting to the server via a geographically far-away proxy?
My reasoning here is that the signal will have to travel realistically large distances to the proxy and back, which roughly emulates real clients accessing the server.
Then again, to take this one step further, are there prepackaged emulators/frameworks that could perform a similar test without using the internet at all, just simulating the latency of the network while realistically creating all the socket connections and other resource-intensive work?
There are lots of load testing products (open source and commercial) that have bandwidth/latency simulation. Running through various remote proxies is overly complex. On the open source side, look at JMeter and Pylot. On the commercial side, my company, BrowserMob, provides a website load testing product that uses the Amazon cloud to generate load. We actually do generate load from thousands of machines/IPs, without breaking the bank :)
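If you just want to see the thread-based idea in miniature, here is a toy concurrent load generator in Python; the URL and client count are placeholders, and note that it only exercises concurrency; it does not genuinely model network latency or bandwidth, which is precisely what the tools mentioned above add.

    # Toy concurrent load generator: many workers hit one URL at once and report
    # per-request times. Arrivals are staggered with a random sleep as a crude
    # stand-in for varying client-side delay; this does NOT model real network
    # latency or bandwidth. URL and CLIENTS are placeholders.
    import random
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://app.example.com/endpoint"
    CLIENTS = 200
    MAX_STAGGER = 0.150  # up to ~150 ms of artificial arrival jitter

    def one_client(_):
        time.sleep(random.uniform(0, MAX_STAGGER))  # crude arrival staggering
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        times = sorted(pool.map(one_client, range(CLIENTS)))

    print(f"fastest {times[0] * 1000:.0f} ms, slowest {times[-1] * 1000:.0f} ms")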

How many connections/how much bandwidth can Apache handle?

This is a request for pointers to good documentation/good articles. I'm looking for information on how many connections an Apache server can reasonably handle, and potentially how to load balance between multiple servers. I've done Google searches, but it's hard for a beginner to judge which docs are good.
Apache 1.3 had some nasty scalability limitations, but later versions are designed to scale with the hardware and operating system, making them the bottleneck rather than the web server itself. As always, though, it comes down to how you configure and tune it if you want uber performance. Each situation has its own demands, and they're documented here:
http://httpd.apache.org/docs/2.2/misc/perf-tuning.html
The above assumes you're serving static content, which is where Apache excels. If you run webapps behind it, that's your bottleneck, not Apache.
Unfortunately you'll be disappointed.
Apache's ability to handle connections (and indeed any other web server's) is limited by what the web application sitting on top of it is doing. If you're serving static pages, you will be able to serve a lot of requests with very little hardware.
Depending on the I/O workload (Apache cannot work faster than the I/O subsystem; install enough RAM to cache your entire content, if you can), you will be able to fill up a gigabit network on any reasonably specced modern box.
Once you've filled a gigabit network, you'll have other things to worry about.
But the reasons that you really need load balancers are because your application slows down Apache and uses up the box's resources. Your application will not be infinitely fast, nor infinitely scalable. You'll need to address those issues.
As the previous answers have pointed out, it is generally not Apache that becomes the bottleneck; it is usually the application server (PHP, Mongrel, etc.). However, if you are only serving static content, then you will want to do some benchmarking to see how fast it can go. Of course, you are unlikely to pin down the exact number Apache will be able to serve, since a lot depends on how you configure it (e.g. disabling persistent connections) and on the specs of the server. However, to get a ballpark estimate you can use this benchmark as a reference, since it is run on 1-8 cores (using one or two servers), so you should be able to find something reasonably comparable to the hardware you are considering.
Of course in order to get the most accurate results you will want to test it yourself using a load generator like ab or httperf.
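If you end up scripting those runs, a thin wrapper like the sketch below can pull the throughput figure out of ab's output; it assumes ab is installed, and the URL, request/concurrency counts, and output-parsing regex are assumptions that may need adjusting for your ab version.

    # Thin wrapper around ApacheBench (ab): run a fixed benchmark and extract
    # the throughput figure. The URL and request/concurrency counts are
    # placeholders, and the parsing assumes ab's usual
    # "Requests per second: ..." output line, which may differ between versions.
    import re
    import subprocess

    URL = "http://localhost/index.html"
    result = subprocess.run(
        ["ab", "-n", "10000", "-c", "100", URL],
        capture_output=True, text=True, check=True,
    )

    match = re.search(r"Requests per second:\s+([\d.]+)", result.stdout)
    if match:
        print(f"ab reports {float(match.group(1)):.0f} requests/second")
    else:
        print(result.stdout)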
