I have 5 scenarios in total, with 70 users distributed across them, and the test runs for only around 15 minutes with a 1-loop configuration.
Is this an adequate test duration to obtain realistic performance results?
Or do I need to adjust the test duration?
Any suggestion on this is highly appreciated.
Thanks
It depends on what you're trying to achieve. 70 concurrent users doesn't look like a real "load" to me. Moreover, given that you have only one loop, you may run into a situation where some users have already finished executing their scenarios and been shut down while others are still running or have not even started yet. So I would recommend monitoring the real concurrency, e.g. with the Active Threads Over Time listener, to see how many users were online at any given stage of the test.
Normally the following testing types are conducted:
Load testing - putting the system under the anticipated load and ensuring that the main metrics (e.g. response time and throughput) match the NFRs or SLAs
Soak testing - basically the same as load testing, but over a prolonged duration (several hours, overnight, or over a weekend). This testing type helps you discover both obvious and non-obvious memory leaks
Stress testing - starting with the anticipated number of users and gradually increasing the load until response times exceed an acceptable threshold or errors start occurring (whichever comes first). It will shed some light on the slowest or most fragile component, in other words the first performance bottleneck
Check out the Why ‘Normal’ Load Testing Isn’t Enough article for more information on the aforementioned performance testing types.
No matter which test you're conducting, consider increasing (and decreasing) the load gradually, i.e. come up with proper ramp-up (and ramp-down) strategies. This way you will be able to correlate the increasing load with, for example, increasing response times.
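As a rough illustration of what a linear ramp-up means in practice (the numbers below are just an example, not a recommendation): JMeter's Thread Group spreads thread starts evenly over the ramp-up period, so 70 users over 140 seconds means one new user every 2 seconds. A small Python sketch of that schedule:

    # Hypothetical example: 70 users started linearly over a 140-second ramp-up.
    users = 70
    ramp_up_seconds = 140

    interval = ramp_up_seconds / users      # delay between consecutive user starts
    for i in range(users):
        print(f"user {i + 1:2d} starts at t = {i * interval:6.1f} s")

Plotting response times against a schedule like this makes it easy to see at which level of concurrency the response time starts to degrade.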
Performance tests in Java are a bit tricky: results can vary wildly depending on what other programs are running on the system and what its load is.
In an ideal world you would use a dedicated system; if you can't, at least make sure to quit all the programs you're running (including the IDE). The Java HotSpot compiler kicks in when it sees a ‘hot spot’ in your code, so it is quite common for your code to run faster over time. You should therefore adapt and repeat your testing methods, and investigate memory and CPU usage.
Or, even better, you can use a profiler. There are plenty around, both free profilers and demos / time-locked trials of commercial-strength ones.
What is the use of Synchronizing Timer?
What is the purpose of "Std deviation" in summary report?
What is the difference between running the jmeter script in GUI and Command Prompt?
Synchronizing Timer:
Consider that you are load-testing.
Start 25 threads (with synchronizing timer disabled).
You will note that the start time of the first thread differs by about 800 ms to 1000 ms from that of the last thread.
That is not an ideal condition for load testing.
Now consider the same scenario with the synchronizing timer enabled. You will notice that the start time of all the threads is exactly the same, which is the ideal scenario for load testing.
Std Deviation:
Standard deviation quantifies how much the response time varies around its mean (average). I would suggest not judging system performance on standard deviation alone; in reality it just indicates how much the response times fluctuate. Nevertheless, the deviation should be kept small, e.g. under 5% of the average response time.
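As a quick illustration of that ratio, here is a small Python sketch with made-up response times (the exact formula JMeter uses for its Std. Dev. column may differ slightly):

    import statistics

    response_times_ms = [210, 225, 218, 205, 260, 300, 215, 220]   # made-up sample

    mean = statistics.mean(response_times_ms)
    std_dev = statistics.pstdev(response_times_ms)   # population standard deviation
    ratio = std_dev / mean                           # deviation relative to the mean

    print(f"mean = {mean:.1f} ms, std dev = {std_dev:.1f} ms, std/mean = {ratio:.1%}")

If the std/mean ratio is well above a few percent, the response times are fluctuating a lot and the individual samples deserve a closer look.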
GUI and CMD:
Let's just say that, on one hand, the GUI makes the program more intuitive; on the other hand, it consumes more resources. The JMeter GUI should only be used for test development or debugging. Personally, I do not advise using JMeter in GUI mode if you are running an actual load test.
JMeter official documentation defines Synchronizing Timer very well.
The purpose of the SyncTimer is to block threads until X number of threads have been blocked, and then they are all released at once. A SyncTimer can thus create large instant loads at various points of the test plan.
So we can use the Synchronizing Timer to create the required bursts of load. For example, with the timer's timeout set to 3000 milliseconds, requests will keep accumulating for up to 3 seconds (or until the configured number of threads to group by is reached) and will then be released all at once, creating a greater instantaneous load.
Standard deviation gives you an idea of how much the results vary from the average. In general, a lower standard deviation means more consistent (good) performance, while a higher standard deviation points to issues.
JMeter GUI mode is suitable only for creating or debugging scripts. When performing actual load tests, JMeter should be run from CMD, as it is more efficient and consumes less memory than GUI mode. Check this JMeter blog on how to run JMeter from CMD.
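For reference, a typical non-GUI run looks something like this (the file names are placeholders; -n selects non-GUI mode, -t points to the test plan and -l to the results file):

    jmeter -n -t test_plan.jmx -l results.jtl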
I have some code that I have parallelized successfully, in the sense that it gets an answer, but it is still kind of slow. Using cProfile.run(), I found that 121 seconds (57% of total time) were spent in cPickle.dumps, despite a per-call time of 0.003. I don't use this function anywhere else, so it must be happening because of IPython's parallel machinery.
The way my code works is it does some serial stuff, then runs many simulations in parallel. Then some serial stuff, then a simulation in parallel. It has to repeat this many, many times. Each simulation requires a very large dictionary that I pull in from a module I wrote. I believe this is what is getting pickled many times and slowing the program down.
Is there a way to push a large dictionary to the engines in such a way that it stays there permanently? I think it's getting physically pushed every time I call the parallel function.
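To make the question concrete, here is roughly the pattern I am after, using the classic IPython.parallel API (big_dict, do_work, run_simulation and list_of_params are placeholder names for my own objects):

    from IPython.parallel import Client

    rc = Client()
    dview = rc[:]                            # DirectView over all engines

    # Push the large dictionary (placeholder name) to every engine once,
    # up front, instead of it being pickled into the arguments of every call.
    dview.push({'big_dict': big_dict})

    def run_simulation(params):
        # big_dict is looked up as a global on the engine, so it should not
        # need to be re-serialized for every task.
        return do_work(big_dict, params)     # do_work is a placeholder

    results = dview.map_sync(run_simulation, list_of_params)

Is dview.push the right way to keep the dictionary resident on the engines, or is there a better mechanism?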
Preface
Full page load time, measured using Pingdom, is below expectations on a highly optimized page (PageSpeed score 97/100).
The next trial-and-error step was to test different OS/web server combos. A relatively old Windows 2003 IIS box served the static page almost 4 times faster (0.30 s) than two different Apple boxes running the Mac OS X built-in Apache (1.15 s), while a Rackspace/Akamai CDN using Nginx loads the full page in 0.08 seconds or slower.
Question
Is anyone aware of some published test results or benchmark that compares the full web page load time of different os/web server combos?
Requirements:
using static content,
not measured on localhost,
preferably splitting each hit into start-to-connect, connect-to-first-byte, and first-byte-to-last-byte phases (see the sketch after this list),
where the web servers are running on (almost) identical hardware,
in internet networks that are in comparable distance between testing client and serving host,
without measuring caching or DNS performance,
under no load (most web servers run at low load in practice, not maxed out the way most available benchmarks test them)
I'm looking for a comparison that tries to measure the shortest delivery times for static content in a real-world internet environment, where the OS/web server combo is the variable.
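To be explicit about the phase split I mean, here is a rough Python sketch of the per-hit measurement I have in mind (plain HTTP only, no TLS or redirects; example.com is just a stand-in host):

    import socket
    import time

    def fetch_with_phases(host, path="/", port=80):
        """Fetch one static resource and time the connect / first-byte / download phases."""
        t_start = time.time()
        sock = socket.create_connection((host, port))     # DNS lookup + TCP connect
        t_connect = time.time()
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        sock.recv(1)                                       # block until the first response byte
        t_first_byte = time.time()
        while sock.recv(65536):                            # drain the rest of the response
            pass
        t_last_byte = time.time()
        sock.close()
        return {
            "connect": t_connect - t_start,
            "first_byte": t_first_byte - t_connect,
            "download": t_last_byte - t_first_byte,
        }

    print(fetch_with_phases("example.com"))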
I suggest reading this decent article about benchmark results: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/
It seems G-WAN is the clear winner. I have heard of many major high-traffic sites switching to it already, since in most cases it may serve content twice as fast as Nginx, which basically lets you postpone hardware upgrades.
I'm working on a web application, and it's getting to the point where I've got most of the necessary features and I'm starting to worry about execution speed. So I did some hunting around for information and I found a lot about reducing page load times by minifying CSS/JS, setting cache control headers, using separate domains for static files, compressing the output, and so on (as well as basic server-side techniques like memcached). But let's say I've already optimized the heck out of all that and I'm concerned with how long it actually takes my web app to generate a page, i.e. the pure server-side processing time with no cache hits. Obviously the tricks for bringing that time down will depend on the language and underlying libraries I'm using, but what's a reasonable number to aim for? For comparison, I'd be interested in real-world examples of processing times for apps built with existing frameworks, doing typical things like accessing a database and rendering templates.
I stuck in a little bit of code to measure the processing time (or at least the part of it that happens within the code I wrote) and I'm generally seeing values in the range 50-150ms, which seems pretty high. I'm interested to know how much I should focus on bringing that down, or whether my whole approach to this app is too slow and I should just give it up and try something simpler. (Based on the Net tab of Firebug, the parts of processing that I'm not measuring typically add less than 5ms, given that I'm testing with both client and server on the same computer.)
FYI I'm working in Python, using Werkzeug and SQLAlchemy/Elixir. I know those aren't the most efficient technologies out there but I'm really only concerned with being fast enough, not as fast as possible.
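For context, this kind of measurement can be done with a small piece of WSGI middleware; the sketch below is illustrative rather than my exact code (the names are placeholders):

    import time

    def timing_middleware(app):
        """Wrap a WSGI app and report server-side processing time per request."""
        def wrapped(environ, start_response):
            start = time.time()
            try:
                return app(environ, start_response)
            finally:
                elapsed_ms = (time.time() - start) * 1000
                print(f"{environ.get('PATH_INFO', '?')} took {elapsed_ms:.1f} ms")
        return wrapped

    # application = timing_middleware(application)   # wrap the Werkzeug/WSGI app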
EDIT: Just to clarify, the 50-150ms I quoted above is pure server-side processing time, just for the HTML page itself. The actual time it takes for the page to load, as seen by the user, is at least 200ms higher (so, 250-350ms total) because of the access times for CSS/JS/images (although I know that can be improved with proper use of caching and Expires headers, sprites, etc. which is something I will do in the near future). Network latency will add even more time on top of that, so we're probably talking about 500ms for the total client load time.
Better yet, here's a screenshot from the Net tab of Firebug for a typical example:
It's the 74ms at the top that I'm asking about.
IMHO, 50-150 ms of server-side processing time is fine in most circumstances. When I measure the speed of some very well-known websites, I rarely see anything that fast. Most of the time it is about 250 ms, often higher.
Now, I want to underline three points.
Everything depends on the context. A home page, or a page that will be accessed very frequently, will suck a lot if it takes seconds to load. On the other hand, some rarely used parts of the website can take up to one second if the optimizations are too expensive.
The major concern of the users is to accomplish what they want quickly. It's not about the time taken to access a single page, but rather the time to access information or to accomplish a goal. That means that it's better to have one page taking 250 ms than requiring the user to visit three pages one after another to do the same thing, each one taking 150 ms to load.
Be aware of the perceived load time. For example, there is an interesting trick used on the Stack Overflow website. When doing something AJAX-based, like up- or down-voting, you first see the effect, and then the request is made to the server. For example, try to up-vote your own message. It will show the message as up-voted (the arrow becomes orange); then, 200 ms later, the arrow becomes gray and an error box is displayed. So in the case of an up-vote, the perceived load time (the arrow becoming orange) is 1 ms, while the real time spent doing the request is 100 ms.
EDIT: 200 ms is fine too. 500 ms will probably hurt a little if the page is accessed frequently or if the user expects the page to be fast (for example, AJAX requests are expected to be fast). By the way, I see on the screenshot that you are using several CSS files and ten PNG images. By combining CSS into one file and using CSS sprites, you can probably reduce the perceived load time, especially when dealing with network latency.
Jakob Nielsen, a well-known authority on usability, posted an article [1] on this a few days back. He suggests that under 1 second is ideal and under 100 ms is perfect, as anything longer starts to interrupt the user's flow.
As other users have pointed out, it depends on the context of that page. If someone is uploading a file, they expect a delay. If they're logging in and it takes ten seconds, they can start to get frustrated.
[1] http://www.useit.com/alertbox/response-times.html
I looked at some old JMeter results from when I wrote and ran a suite of performance tests against a web service. I'll attach some of them below; it's not apples-to-apples, of course, but it's at least another data point.
Times are in milliseconds. Location Req and Map Req had inherent delays of 15000 and 3000 milliseconds, respectively. Invite included a quick call to a mobile carrier's LDAP server. The others were pretty standard, mainly database reads/writes.
sampler_label        count  average     min     max
Data Blurp            2750      185      30    2528
UserAuth              2750      255      41    2025
Get User Acc           820      148      29    2627
Update User Acc          4      243      41    2312
List Invitations      9630      345      47    3966
Invite                2750      591     102    4095
ListBuddies           5500      344      52    3901
Block Buddy            403      419      79    1835
Accept invite         2065      517      94    3043
Remove Buddy           296      411      83    1942
Location Req          2749    16963   15369   20517
Map Req               2747     3397    3116    5926
This software ran on a dedicated, decent virtual machine, tuned the same way the production VMs were. The max results were slow; my goal was to find the number of concurrent users we could support, so I was pushing it.
I think your numbers are absolutely OK. With regard to all the other stuff that makes websites seem slow, if you haven't already, take a look at YSlow. It integrates nicely with Firebug and provides great information about how to make pages load faster.
50-150ms for page load time is fine - you do not need to optimize further at this point.
The fact is, so long as your pages are loading within a second, you are OK.
See this article, which discusses the effects of load times on conversion (a 100 ms increase = 1% for Amazon).