How to measure how much traffic a server could potentially handle?

Are there any tools that would enable me to load-test my server and tell me how much traffic it could roughly handle?
By traffic I mean how many requests per second it can consistently serve without timing out.
I realize that every server is different, as is every application that runs on it; that's why I thought load testing might be the way to go.
Thanks a bunch!

For a very simple benchmark of web servers (if your request is the same every time), you could use ab (ApacheBench). It's a very simple tool, but it gives some interesting statistics nonetheless.
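For example, the following run (against a hypothetical local server) issues 1,000 requests at a concurrency of 50 and reports, among other things, the requests per second the server sustained:

    ab -n 1000 -c 50 http://localhost:8000/

If you're curious what a tool like this is measuring, here is a toy sketch in Python (the URL, request count, and concurrency are placeholders; a real load tester does far more, so don't rely on this for actual numbers):

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"   # hypothetical target
    N, CONCURRENCY = 1000, 50

    def fetch(_):
        # One GET request; the timeout stands in for "timing out" failures.
        with urllib.request.urlopen(URL, timeout=5) as resp:
            resp.read()

    start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        list(pool.map(fetch, range(N)))
    elapsed = time.time() - start
    print("%d requests in %.2fs -> %.1f req/s" % (N, elapsed, N / elapsed))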

If you don't mind going the paid-software route, then LoadRunner is a very good choice, IMO.
I have used LoadRunner in the past for doing this kind of measurement for multiple web and non web applications.

I like The Grinder. The only downside (though I don't know whether other tools handle this either) is that it doesn't replay the hugely long URLs that ASP.NET generates very well.

Do any web servers optimized for single-page applications exist?

With a single-page application, the web server basically does only one thing: it returns some data when the client asks for it (in JSON format, for example). So any server-side language (PHP, RoR) or server (Apache, nginx) can do it.
But is there a language/tool that works better for this sort of single-page application, which generates lots of small requests that need low latency and sometimes a permanent connection (for real-time and push features)?
SocketStream seems like it matches your requirements quite well: "A phenomenally fast real-time web framework for Node.js ... dedicated to creating single-page real time websites."
SocketStream uses WebSockets to get the lowest latency for the real-time portion. There are several examples on the site to build from.
If you want to handle a lot of small requests in real time by pushing data, you should take a look at socket-type connections.
Check out Node.js with Socket.io.
If you really want to optimize for speed, you could try implementing a custom HTTP server that just fits your needs, for example with the help of Netty.
It's blazingly fast and has examples for HTTP and WebSocket servers included.
Also, taking a look at GWAN may be worthwhile (though I have not tried that one yet).
nginx (http://en.wikipedia.org/wiki/Nginx) could be appropriate.

How to most quickly get small, very frequent updates from a server?

I'm working on the design of a web app which will be using AJAX to communicate with a server on an embedded device. But for one feature, the client will need to get very frequent updates (>10 per second), as close to real time as possible, for an extended period of time. Meanwhile typical AJAX requests will need to be handled from time to time.
Some considerations unique to this project:
This data will be very small, probably no more than a single numeric value.
There will only be 1 client connected to the server at a time, so scaling is not an issue.
The client and server will reside on the same local network, so the connection will be fast and reliable.
The app will be designed for Android devices, so we can take advantage of any platform-specific browser features.
The backend will most likely be implemented in Python using WSGI on Apache or lighttpd, but that is still open for discussion.
I'm looking into Comet techniques, including XHR long polling and the hidden iframe, but I'm pretty new to web development and don't know what kind of performance to expect. The server shouldn't have any problem preparing the data; it's just a matter of pushing it out to the client as quickly as possible. Is 10 updates per second an unreasonable expectation for any of the Comet techniques, or even for regular AJAX polling? Or is there another method you would suggest?
I realize this is ultimately going to take some prototyping, but if someone can give me a ball-park estimate or better yet specific technologies (client and server side) that would provide the best performance in this case, that would be a great help.
You may want to consider WebSockets. That way you wouldn't have to poll; you would receive data directly from your server. I'm not sure what server implementations are available at this point, since it's still a pretty new technology, but I found a blog post about a library for WebSockets on Android:
http://anismiles.wordpress.com/2011/02/03/websocket-support-in-android%E2%80%99s-phonegap-apps/
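To make the push model concrete, here is a minimal server-side sketch. It uses Python's third-party websockets package rather than anything from the post above, so treat the names and numbers as illustrative (recent versions of the library pass only the connection object to the handler):

    # pip install websockets  (any WebSocket server library would do)
    import asyncio
    import websockets

    async def push(ws):
        # Send one small numeric reading every 100 ms (~10 updates/sec).
        n = 0
        while True:
            await ws.send('{"value": %d}' % n)
            n += 1
            await asyncio.sleep(0.1)

    async def main():
        async with websockets.serve(push, "0.0.0.0", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())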
For a Python back end, you might want to look into Twisted. I would also recommend the WebSocket approach, but failing that, and since you seem to be focused on a browser client, I would default to HTTP streaming rather than polling or long polls. This jQuery plugin implements an HTTP-streaming Ajax client and specifically claims to support Twisted.
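Since the question mentions a Python/WSGI back end, here is a bare-bones HTTP-streaming sketch using only the standard library. wsgiref handles one request at a time, which happens to fit the single-client constraint above; the streamed values are placeholders:

    import time
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        def readings():
            # Yield a new value every 100 ms; the client keeps the one
            # response open and reads values as they arrive, no polling.
            for n in range(1000):        # bounded so the sketch terminates
                yield ('%d\n' % n).encode()
                time.sleep(0.1)
        return readings()

    make_server('', 8000, app).serve_forever()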
I'm not sure whether this will be helpful at all, but you may want to try Comet-style Ajax:
http://ajaxian.com/archives/comet-a-new-approach-to-ajax-applications

Web page with upload/download speed for a user

I want to make a URL that can be sent to a user. When he follows that URL, he'll get a page displaying the upload/download speed between him and the server the page is hosted on. Basically, I want to host my own speed test. It'll be used for troubleshooting, so a fast-to-implement but dirty solution beats a neat and proper one.
On the server I've got PHP, Perl, Python, Apache, and nginx, and I can use any of them. In what direction should I look?
I've never liked those speed tests, because they don't really represent your bandwidth accurately. I used to live in a university dorm with 100 Mbit, and the pipe would indeed give me in excess of 90 Mbit if properly utilized, but the speed tests never returned more than 15 Mbit. Part of the problem is that web tests (a) don't do multi-connection tests, and (b) run into various bottlenecks, such as their own servers, their ISPs, or their TCP stacks.
I ended up using Speedtest. It lets you embed the speed tracker into your page.
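If you'd rather self-host the quick-and-dirty version, the core of it is just timing the transfer of a known-size payload. A rough sketch in Python (stdlib only; the 10 MB size and port are arbitrary):

    import os
    from wsgiref.simple_server import make_server

    PAYLOAD = os.urandom(10 * 1024 * 1024)   # 10 MB of incompressible data

    def app(environ, start_response):
        start_response('200 OK',
                       [('Content-Type', 'application/octet-stream'),
                        ('Content-Length', str(len(PAYLOAD)))])
        return [PAYLOAD]

    make_server('', 8000, app).serve_forever()

Time the download in the browser or with curl and divide the payload size by the elapsed seconds; an upload test is the same idea in reverse, timing a POST body.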

Is Perl the fastest way to write a high-performance page?

I was inspired by Slashdot; I've heard that it uses a very limited number of servers to support a lot of users with fast responses. There's also a project named Slashcode, though I'm not sure whether Slashdot runs on its source code.
I'm wondering: is Perl the best choice for writing a high-performance web page? I gather that using Apache or IIS carries a lot of overhead?
Any ideas, books, papers, tutorials?
I'm going to assume that by "high performance" you mean both the real time taken to produce a page and how many pages can be served concurrently.
The programming language isn't as important as your servers and algorithms. You may want to look into the C10K problem, a series of new technologies and refined techniques aimed at allowing a single web server to handle more than 10,000 concurrent connections. Things like the nginx and lighttpd web servers and the Varnish cache came out of this work.
Big wins come from using a very light, very fast, very modular web server (Apache and IIS ain't it) with a very light, very fast cache in front of it to avoid processing the same thing twice. For a high-concurrency server, even caching for a few seconds can save you hundreds or thousands of processes. By chopping a page up into a series of AJAX requests, you can cache the more static bits and pieces independently of the bits that change frequently.
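To illustrate the few-seconds-of-caching idea at the application level (the bigger win is a front-end cache like Varnish, but the principle is the same; this sketch assumes a single zero-argument page renderer):

    import time

    def microcache(ttl):
        # Reuse a render function's result for ttl seconds before
        # recomputing it, collapsing bursts of requests into one render.
        def wrap(render):
            state = {'expires': 0.0, 'value': None}
            def cached():
                now = time.time()
                if now >= state['expires']:
                    state['value'] = render()
                    state['expires'] = now + ttl
                return state['value']
            return cached
        return wrap

    @microcache(ttl=2.0)
    def front_page():
        time.sleep(0.5)               # stand-in for expensive rendering
        return '<html>rendered at %f</html>' % time.time()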
Instead of using mod_blah, which embeds your program in the web server, use FastCGI or something similar that puts your programs into their own little application servers. This lets them run independently of the web server, possibly on remote machines and behind a load balancer, so you can easily scale your processing power.
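A concrete sketch of that pattern, in Python for illustration (Perl's FCGI module follows the same shape). This assumes the third-party flup package, with the front-end web server configured to forward requests to the socket below:

    # pip install flup   (third-party WSGI-to-FastCGI bridge)
    from flup.server.fcgi import WSGIServer

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'served by a long-lived application process\n']

    # nginx/lighttpd/Apache proxies FastCGI requests here; the process can
    # live on another machine, and you can run several behind a balancer.
    WSGIServer(app, bindAddress=('127.0.0.1', 9000)).run()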
Eventually you're going to micro-optimize the really important bits of your application code, to the point where the language matters, but this way you can focus on those bits rather than having to build the whole project around raw performance.
Regardless of how fast your code is, at some point the bottleneck will stop being your code, and start being the web server itself.
As long as you're not using the CGI interface[1] to talk to the web server, the language isn't going to have a noticeable impact on performance in 99% of cases. The exceptions are those in which you're doing heavy back-end processing rather than simply grabbing something out of a database, lightly massaging it, and sending it off to the user - and, if you are doing that kind of thing, you're likely better off doing it asynchronously if possible and stuffing the results into a database to be lightly massaged and viewed later.
The reason is, quite simply, that network connection and data transfer times will be so much longer than your program's execution time that it's not even funny. If it's taking 2 seconds to establish a network connection to the server and do the data transmission in each direction, nobody is going to care whether the processing on the server adds 0.1s or 0.2s on top of that 2s of network activity.
[1] Note that I am talking here about the vanilla CGI "start up a new process to service each incoming request" model, not the Perl CGI module (CGI.pm/use CGI). There are ways to use CGI while also making use of a long-lived process which handles multiple requests over its lifetime.
Architecture and system design are more important than language choice for a high-traffic app, so selecting a language is not the first thing you should do unless you are planning to write everything from the ground up.
You should be selecting a toolset.
If you want something as soon as possible, look at existing web applications. What meets your needs? How customizable is it? Does it meet your performance and scalability requirements? If so, the language you use will be the language your app uses.
If you can't find a good match among existing apps, look at the various frameworks: Catalyst, Rails, Squatting, Camping, Jifty, Django. There's a nice list of them on Wikipedia.
You should be able to find a framework that will do the job (many of them will). Pick some contenders and choose one. The language you use will be the language your framework uses.
There's really no such thing as a "high performance page". That's like asking what the fastest car is (and if you watch enough Top Gear, you know that's not a simple answer). You have to think about what you actually want to do (i.e. the particular task), what you have to do to make that happen, and which tools would work best for that.
Are you going to have a lot of people doing a lot of small things, or fewer people doing really big things? Is it all going to happen at once (i.e. spikes), or will demand be constant? Are you sending back small chunks of data or serving up really large files?
Suppose that every portion were as fast as possible. It's a fantasy for sure, but consider it anyway. Now that everything is as fast as possible, rank every part according to how relatively fast it is. What's the slowest part? Is it disk access? Network I/O? Socket availability?
If you aren't at the point where you're already thinking about this, the language probably isn't that important beyond your skill with it.
There are a lot of books on web performance out there. :)
This post on Server Fault suggests that you could write an extension module for nginx to serve dynamic content.
Such modules need to be compiled to native machine code, so they are most likely faster than running Perl.
I don't believe it would be faster than other common choices such as PHP, Python, Ruby, Java, or C#.

Monitoring my web application performance from the client browser

I want to be able to monitor the performance of my web application from the perspective of the user (i.e. the browser client): load time of the entire page, load times of individually downloaded JS/CSS files, the amount of memory used by the browser for the page, etc.
Is there any tool or plugin that can help me monitor all of this?
This list should serve as a good starting point for you. It is a complete list of end-user monitoring tools.
To count as an end-user experience monitoring tool, it must be able to track the response times that real users experience when visiting the site, not those of a robot synthetically pinging it. Specifically, I am referring to tools that enable IT operations to ensure that the real end users of an application or website are experiencing good performance. As I alluded to in a previous post, "speed solves a lot of problems": even if your usability is not perfect, if the site runs fast, people are less likely to notice.
We built a tool to solve this problem; take a look at Bucky.
Try Fiddler or Charles.
Try http://www.yottaa.com (disclaimer: I work here). It runs real browsers in different locations to monitor your site and records asset-level performance data. You can also try Gomez or Keynote Systems.
You can try out Atatus (https://www.atatus.com/), which offers performance monitoring and error tracking in one place for all your apps.
Disclaimer: I work at Atatus.
