How to track & trend end to end performance (the client experience)

I am trying to figure out how best to track & trend end-to-end performance between releases. By end-to-end I mean the experience of a client visiting this app via a browser. This includes download time, DOM rendering, JavaScript execution, etc.
Currently I am running load tests using JMeter, which is great for proving application and database capacity. Unfortunately, JMeter will never show me a full picture of the user experience: JMeter is not a browser and therefore will never simulate the impact of JavaScript and DOM rendering. For example, if time to first byte is 100 ms but the browser then takes 10 seconds to download assets and render the DOM, we have problems.
I need a tool to help me with this. My initial idea is to leverage Selenium. It could run a set of tests (login, view this, create that) and somehow record timings for each. We would need to run the same scenario multiple times, and likely through a set of browsers. This would be done before every release and would allow me to identify changes in the user experience.
For example, this is what I would like to generate:
action      | v1.5 | v1.6 | v1.7
------------|------|------|------
login       | 2.3s | 3.1s | 1.2s
create user | 2.9s | 2.7s | 1.5s
The problems with Selenium are that (1) I am not sure it is designed for this, and (2) it appears that detecting DOM ready or the end of JavaScript rendering is really hard.
Is this the right path? Does anyone have any pointers? Are there tools out there that I could leverage for this?

I think you have good goals, but I would split them:
Measuring DOM rendering, JavaScript execution, etc. is not really part of the "experience from the client visiting this app via a browser", because your clients are usually unaware that you are "rendering the DOM" or "running JavaScript" - and they don't care. But these metrics are something I'd want to check after every committed change, not just release to release, because it can be hard to trace a degradation back to a particular change if such a test is not running all the time. So I would put it in continuous integration at the build level. See a good discussion here
Then you would probably want to know whether server-side performance is the same, has worsened, or has improved. For that, JMeter is ideal. Such testing could be done on a schedule (e.g. nightly or on each release) and can be automated using, for example, the JMeter plug-in for Jenkins. If server-side performance got worse, you don't really need end-to-end testing, since you already know what will happen.
But if the server is doing well, then an "end user experience" test using a real browser has real value, so Selenium actually fits well here. Since it can be integrated with any of the testing frameworks (JUnit, NUnit, etc.), it also fits into an automated process and can generate a report including durations (JUnit, for instance, has a TestWatcher rule which allows you to add consistent duration measurement to every test).
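As a minimal sketch of that idea (assuming JUnit 4 and the Selenium Java bindings; the class name, the scenario methods and the example.com URLs are placeholders, not anything from the original setup), a TestWatcher rule can stamp every Selenium scenario with a duration, so each release run produces one row of the "action | duration" table:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginJourneyTest {

    private WebDriver driver;

    // The rule wraps every @Test method, so the timing logic is applied
    // consistently without repeating stopwatch code inside each scenario.
    @Rule
    public TestWatcher timings = new TestWatcher() {
        private long startNanos;

        @Override
        protected void starting(Description description) {
            driver = new FirefoxDriver();   // browser start-up is excluded from the timing
            startNanos = System.nanoTime();
        }

        @Override
        protected void finished(Description description) {
            double seconds = (System.nanoTime() - startNanos) / 1_000_000_000.0;
            System.out.printf("%s | %.1fs%n", description.getMethodName(), seconds);
            driver.quit();
        }
    };

    @Test
    public void login() {
        driver.get("https://example.com/login");     // placeholder URL
        // ... fill in credentials, submit, wait for the landing page ...
    }

    @Test
    public void createUser() {
        driver.get("https://example.com/users/new"); // placeholder URL
        // ... drive the "create user" scenario ...
    }
}

Writing the figures to a CSV file per release instead of stdout would make it straightforward to build the version-by-version comparison table the question asks for.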
After all this automation, I would also do a "real end user experience" test, while JMeter performance test is running at the same time against the same server: get a real person to experience the app while it's under load. Because people, unlike automation, are unpredictable, which is good for finding bugs.

Regarding "JMeter is not a browser". It is really not a browser, but it may act like a browser given proper configuration, so make sure you:
add HTTP Cookie Manager to your Test Plan to represent browser cookies and deal with cookie-based authentication
add HTTP Header Manager to send the appropriate headers
configure HTTP Request samplers (via HTTP Request Defaults) to retrieve all embedded resources, using a concurrent pool of around 5 threads to do it
Add HTTP Cache Manager to represent browser cache (i.e. embedded resources retrieved only once per virtual user per iteration)
if your application is built on AJAX - you need to mimic the AJAX requests with JMeter as well
Regarding "rendering", for example you detect that your application renders slowly on a certain browser and there is nothing you can do by tuning the application. What's next? You will be developing a patch or raising an issue to browser developers? I would recommend focus on areas you can control, and rendering DOM by a browser is not something you can.
If you still need these client-side metrics for any reason, you can consider using the WebDriver Sampler alongside the main JMeter load test so that real browser metrics can be added to the final report. You can even use the Navigation Timing API to collect exact timings and add them to the load test report.
See Using Selenium with JMeter's WebDriver Sampler to get started.
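For illustration, here is a rough sketch of pulling Navigation Timing figures out of a browser with the plain Selenium Java bindings (not the WebDriver Sampler's own scripting API); the URL is a placeholder:

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class NavigationTimingProbe {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.com/login"); // placeholder URL; get() returns after the load event

            JavascriptExecutor js = (JavascriptExecutor) driver;
            // navigationStart .. loadEventEnd covers network transfer, parsing
            // and the load event, i.e. roughly what the user waits for.
            long loadMillis = (Long) js.executeScript(
                  "var t = window.performance.timing;"
                + "return t.loadEventEnd - t.navigationStart;");

            System.out.println("Full page load: " + loadMillis + " ms");
        } finally {
            driver.quit();
        }
    }
}

The same figure, collected per iteration, is what you would feed into the load test report alongside the protocol-level sampler timings.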
There are multiple options for tracking your application's performance between builds (and JMeter test executions), e.g.:
JChav - JMeter Chart History And Visualisation - a standalone tool
Jenkins Performance Plugin - a Continuous Integration solution

Related

how to perform load test of web application that involves user clicking actions

I need to perform a load test on our web application that involves user interaction with the web pages, so I have written Selenium scripts to handle the click events. For example, we have functionality where some users have to sign up/register on our site, some have to log in, perform click actions and log out, and a set of users has to visit our home page and click on multiple URL links available on the home page.
In JMeter, I have added one thread group for each of the functionalities mentioned above. Each thread group has a "JUnit Request Sampler" which calls the Selenium methods containing the code for the click actions.
I have set up the "Thread Properties" for the thread groups to run 200 threads per minute over a span of 5 to 10 minutes [the target is 15 minutes].
The default browser I am using is Firefox. I have also set the JMeter properties such that I wouldn't run into any memory issues.
I am running the scripts in non-GUI mode and collecting the results into a .jtl file.
The problem I am facing is that while the scripts are executing, multiple browsers open up, and since each page involves click events, some of the clicks do not happen correctly, resulting in an increase in the error count. If I use a very small number of threads with time delays, then no errors are seen.
I have tried distributed-mode testing as well, but there was no major reduction in the error count.
I am looking for a solution, or rather suggestions, that can help me get minimal or zero errors while running JMeter scripts that involve user interactions, and achieve the intended load (and thereafter increase the load).
Regards
Praveena
As per the WebDriver Tutorial, it is not recommended to use WebDriver to drive the load against the web application:
From experience, the number of browsers (threads) that the reader creates should be limited by the following formula:
C = N + 1
where C = Number of Cores of the host running the test
and N = Number of Browsers (threads).
So, for example, a 4-core load generator should drive at most 3 browsers.
Looking at the Firefox browser requirements, I strongly doubt you have a computer with 200 cores and 400 GB of RAM, therefore my expectation is that you should use JMeter's HTTP Request samplers to create the load.
Browsers don't do any magic: they send HTTP requests and render the responses, so well-behaved protocol-based JMeter tests will look just like a real browser to the application under test. So I would recommend converting your Selenium tests into "pure" JMeter tests; you can leave 1-2 browsers to measure end-user experience while the application is under load.

JMeter Load Testing Time Verification

I use JMeter for load testing.
I noted the time with a stopwatch when I checked the load time personally; it was 8.5 seconds.
When I run the same case with JMeter, it gives a load time of 2 seconds.
There is a huge difference between them. How can I verify the actual time?
E.g.: one user takes 9 seconds to load the form, while JMeter reports a load time of 2 seconds.
Client time is a complex item, as you can see from the clip from the Chrome Developer Tools Performance tab mentioned above. There is a lot going on at the client, which does lead to a difference between the time you see with an HTTP-protocol test tool such as JMeter (and most of the other performance test tools on the planet) and the actual client render.
You can address this delta in a number of ways:
Run a single GUI virtual user. Name your timing records such as "Login" and "login_GUI". The delta between the two is your client weight (for example, if the protocol-level "Login" takes 1.5 seconds and "login_GUI" takes 4.0 seconds, the client weight is 2.5 seconds). Make sure to run the GUI virtual user on a dedicated host to avoid resource contention.
Run a test with all browsers. This was state of the art in 1995. Because of the resource cost and the skew imposed on trying to figure out the cost of the server response, the entire industry shifted to protocol-level virtual users. Some are trying to bring back this model as "state of the art". It is not.
Ask the performance question earlier, also known as "shift left". Every developer has these developer tools at their disposal, as does every functional tester. If you find that the client is slow for one user, be curious and use the developer tools to identify why. If you are waiting for multi-user performance testing to answer questions related to client weight, then you have waited too long and often will not have the time or resources to change the page architecture in meaningful ways to reduce the client page cost. This is where understanding earlier has tremendous advantages for making changes.
I picked the graphic deliberately to illustrate the precise challenge you have. Notice that the loading of the components takes less than a tenth of a second. These are the requests that JMeter would be making. But the page takes almost five seconds to "render". JMeter is not broken; it is working as designed. It is your understanding of which tools can be used to pull particular stats for analysis that needs to change.
You can't compare JMeter's load time to the browser's as-is, not least because your browser will load JavaScript files and can call JavaScript functions on page load, while JMeter doesn't execute JavaScript.
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
Just a side note: you can use a plugin to check the exact load time in Chrome.
A well-behaved JMeter test's timing should be equal or similar to the real user timing; if there is a 4x difference, most probably your JMeter configuration is not correct.
Probably the most important: make sure your HTTP Request samplers are configured to retrieve the so-called "embedded resources" (images, scripts, styles) which are referenced in the web page.
If your application uses AJAX, make sure you execute the AJAX-driven requests as well and add their elapsed time to the main sampler using, for example, a Transaction Controller.
Make sure you mimic browser's:
Cookies via HTTP Cookie Manager
Headers via HTTP Header Manager
Cache via HTTP Cache Manager
Assuming all of the above, you should be getting page load times similar to the real user experience. See the How to make JMeter behave more like a real browser article for more detailed information on the above tips.
In addition to the answers provided by James and user7294900, please find these images to help you understand the reason behind the difference between the time given by your stopwatch and by JMeter.
The image below illustrates how JMeter measures the time.
The next image illustrates how you measured the time with your stopwatch.
Notice that there are additional actions performed by the browser when you take the time with your stopwatch. This is the reason behind the huge difference between JMeter's time and your stopwatch's.
In addition to this, ensure that you are using the same test environment conditions for both tests (the same network conditions, the same load generator, etc.).
Hope this helps!

Page Loading Time in JMeter

I would like to measure the loading time of a document in my test web app. I have used JMeter for this, but I am getting different values for each run. I am measuring the average time in the Summary Report.
I am not sure whether the value is right or not. Is this approach correct, or is there a JMeter plugin available for this?
I have used HttpWatch to get the rendering time, but I can't use that tool for more than 1 user (load testing). I am using JMeter 2.13. Could you please help me with this?
With the help of the Aggregate Report or the CSV/XML results you get the required information regarding response times, BUT:
In JMeter, response time = processing time + latency (time taken by the network while transferring data)
In a browser, response time = processing time + latency + rendering time
Hence you will find a difference between HttpWatch response times and JMeter response times. For example, if processing plus latency is about 2 seconds and rendering takes another 6 seconds, JMeter reports 2 seconds while the browser-side tool shows about 8.
If you need to include rendering times in your response times, then use tools like LoadRunner (commercial), Selenium (open source) and so on. Personally, in my opinion, client-side rendering is not a measurable value unless all of the users accessing the application have the same configuration of hardware, software and network access. However, while the JMeter test is running with peak load on the system, manually browse the site using various browsers, and with the help of the developer tools you can find the rendering times.
"I am getting different values for each run" - this will depend upon the test data you are using, server health status, network delays and so on.
I doubt you'll be able to get two fully identical test run results; there will always be some fluctuation caused by the underlying hardware and software. You should be receiving similar results with some statistical noise.
If this is not the case, your JMeter test might be misconfigured. From a "realness" perspective, mind the following configuration:
Make sure you have the "Retrieve All Embedded Resources from HTML Files" box checked and that you use a concurrent pool. The easiest way to configure this for all samplers is via HTTP Request Defaults.
Add an HTTP Cache Manager to your test plan. The previous setting "tells" JMeter to fetch embedded resources like scripts, styles, etc. from the pages. Real browsers do this as well, but only once; on subsequent requests these resources are returned from the browser's cache.
Add HTTP Cookie Manager to your test plan. It represents browser cookies, enables cookie-based authentication and maintains sessions.
Add HTTP Header Manager to represent browser headers like User-Agent, Content-Type, encoding, etc.
When you use a straight HTTP-protocol-layer virtual user, independent of the tool (JMeter, LoadRunner, SOASTA, Grinder, ...), what you will be timing is the request/response information coming from the server, with very little coloration from the local processing on the client for JavaScript and the final "drawing on the screen", which is rendering.
Up until the point where the server is degraded due to the number of requests or network limitations, the only area you can tune is the page architecture; if you are waiting until the last 100 yards before deployment to address it, then you are likely in trouble.
Steve Souders has written quite a bit on the subject of page architecture in his books "High Performance Websites" and related works. In short, the rule of thumb comes down to making fewer requests, smaller responses and serving the data from the closest possible location to the client. These have the effect of minimizing the most expensive finite resource to a web client, the network. For instance, a browser sprite reduces the number of calls for images, minification and compression reduce the size of the transmission and a CDN changes the number of hops to the requested item to a location closer to the end client.
In order to affect changes to page architecture you need to move upstream into your development cycle and your functional testing cycle. You will need to work with development to implement hard gates where code/pages cannot be submitted to the project without first passing performance gates related to design. Your development team and functional testing members will need to respect those gates. As to what the gates should be, I refer you back to the works of Mr Souders as a great source of data for construction of your gate rules.
This gets you to the level of "works for one: Performant for one." Then you can use that as a known good to answer the questions related to server scalability and at which point the service to the client from requests begins to degrade. If you have a CDN in your organization, be sure to take that into account in your test model, for if you do not then you will overload your server vs production.
As far as actually speeding up the "rendering", or drawing on the screen? Barring changes from the browser manufacturer, you need to purchase a faster video card. Speeding up JavaScript? Make sure that all of your JavaScript is as small and as lean as possible. Have your functional test team test on very dirty browsers with lots of add-ins, as well as on lower-powered hardware, for a view of the maximum out-of-spec response. If you need a view of what your clients' standard hardware model looks like (browser/OS/some hardware info), then you can process the data in your HTTP request logs, specifically the User-Agent, for client configuration information.
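As a small, hypothetical sketch of that last point (assuming an access log in the common "combined" format, where the User-Agent is the last quoted field; the file name access.log is a placeholder), you could tally the User-Agent strings like this:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class UserAgentSurvey {

    // In the "combined" log format the User-Agent is the last quoted field.
    private static final Pattern USER_AGENT = Pattern.compile("\"([^\"]*)\"\\s*$");

    public static void main(String[] args) throws IOException {
        Map<String, Long> counts = new HashMap<>();
        try (Stream<String> lines = Files.lines(Paths.get("access.log"))) {
            lines.forEach(line -> {
                Matcher m = USER_AGENT.matcher(line);
                if (m.find()) {
                    counts.merge(m.group(1), 1L, Long::sum);
                }
            });
        }
        // Print the 20 most common client configurations.
        counts.entrySet().stream()
              .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
              .limit(20)
              .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}

Grouping the resulting strings by browser family and operating system gives the "standard hardware/browser model" view described above.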

JMeter fails to simulate browser response time

I'm trying to simulate a connection to a website. The goal of the simulation is to collect statistics on page loading time on browser side.
I configured JMeter, flagging the option Retrieve Embedded Resources, in order to simulate the real time to load the whole page. The issue is that while from a real browser I get a response time of, let's assume, 10 seconds for page A, in JMeter I found a response time 20 times higher.
It seems JMeter takes a much longer time to gather embedded resources (e.g. js, images, ...)
Do you have any suggestion for this issue?
Kind Regards
Update 31/07
I discovered some resources are not completely downloaded. Using Firebug I see some components with 0 bytes downloaded that the browser keeps trying to download (but the user does not perceive this since the page is loaded). Therefore I suspect JMeter keeps trying to download them. Is there any chance to set a timeout to overcome this kind of situation?
Update_1 31/07
I figured out that the issue is related to nested iframes. Setting httpsampler.max_frame_depth=0 I get the correct time. However, I would like to understand the reason for this issue. Do I have to set other parameters?
Disable the browser cache and re-run your test in the browser.
JMeter will not cache anything unless told otherwise.
Hope this helps.
Add an HTTP Cache Manager to your test plan.
Real browsers retrieve images, scripts, styles, etc., but they do it only once. In order to simulate browser behavior you need to configure JMeter appropriately.
See How to make JMeter behave more like a real browser guide for more test elements which can be used for this.

Headless automation of IE-browser, tracking site rendering times

I need to monitor my site's render times for common tasks (login, search, etc.). I need something automated that can mimic a user's actions in IE and be able to time how long a page takes to render.
Example automated execution:
1) open headless IE browser
2) go to http://google.com
3) type "stackoverflow"
4) press submit button
5) start timer
6) wait for results page to fully render
7) stop timer
8) Close IE
9) record results
I need this to run as a scheduled task on the server, without a user being logged in.
I have been searching for something to help me do so. Anyone have any experience with this type of thing or know of anything that can accomplish this?
It depends on what you focus on: functionality or performance.
Functionality
When monitoring functionality, you aim at automatically ensuring that a web application still works correctly. Usually this is more part of the continuous integration process and less part of production monitoring. It can be done well with HtmlUnit, Selenium or WebDriver. HttpUnit is no longer recommended (the API is more low-level, JavaScript is not so well supported, it is less widely adopted, and there are fewer bug fixes and enhancements).
HtmlUnit simulates a browser, so you can never be sure that your application behaves exactly the same in a real browser. This is especially important for sophisticated Ajax applications, and is comparable to all the small incompatibilities between Firefox and Internet Explorer. Pros: headless, easy to understand. Cons: risk of undetected incompatibilities.
Selenium remote controls a real browser. In our setup, we could not use it headlessly, especially with Internet Explorer. But if you embed it into a virtual machine, it runs headlessly. If your application is reachable through public internet, you might even use Selenium Grid and a preconfigured virtual machine from the Amazon Elastic Cloud EC2. Pros of Selenium: Real world compatibility, easy scripting. Cons: Headless only in virtual machine, performance overhead, more complex runtime setup, stress simulation of concurrent users only in the cloud.
Up to version 1.5, Selenium uses a JavaScript part called Selenium Core to control the browser. If your application has security restrictions for JavaScript, Selenium might not work correctly.
WebDriver uses a specific interface for each browser, e.g. an extension for Firefox and Automation Controls for Internet Explorer. Additionally, it uses the operating system, e.g. for simulating keystrokes. This is more powerful, robust and reliable than Selenium Core. As of Selenium version 2.0, WebDriver is integrated into Selenium. But Selenium 2.0 is still beta.
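To make the WebDriver option concrete for the scripted journey in the question, here is a rough sketch using the Selenium Java bindings and InternetExplorerDriver (run it inside a virtual machine or a scheduled virtual desktop session for unattended execution, as discussed above). The Google locators (q, search) and the 30-second wait are assumptions for illustration only:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SearchRenderTimer {

    public static void main(String[] args) {
        WebDriver driver = new InternetExplorerDriver();       // steps 1-2: open IE, go to the page
        try {
            driver.get("http://google.com");
            WebElement box = driver.findElement(By.name("q")); // assumed locator
            box.sendKeys("stackoverflow");                     // step 3: type the query

            long start = System.nanoTime();                    // steps 4-5: start the timer, press search
            box.submit();

            // Step 6: wait until the results container is present (assumed id).
            new WebDriverWait(driver, 30)
                .until(ExpectedConditions.presenceOfElementLocated(By.id("search")));

            long elapsedMs = (System.nanoTime() - start) / 1_000_000;         // step 7: stop timer
            System.out.println("Results rendered in " + elapsedMs + " ms");   // step 9: record results
        } finally {
            driver.quit();                                     // step 8: close IE
        }
    }
}

Scheduling this class from the Windows Task Scheduler (or any CI job) covers the "run on a schedule" requirement, subject to the virtual-desktop caveat above.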
Performance
You mention measuring with a timer and you mention rendering times. When monitoring performance of a web application, you want to be alerted when real-world usage of the application is no longer possible due to overly long response times.
In this scenario, you are normally not interested in exact results in milliseconds. You can still use one of the tools mentioned above. For example, a browser with Selenium Core is slower than a real world browser - but this is of little relevance for continuous monitoring.
If you absolutely need exact measurements, none of the above is suitable. You should differentiate between client-side duration and network plus server-side duration.
Client-side duration is needed for rendering the HTML and for executing the JavaScript. It does not depend on the number of concurrent users. You can measure it once, e.g. with Firebug. You do not need to monitor it permanently.
Network plus server-side duration is needed for transferring the request to the server, handling the request and generating the response and transferring the response to the client. They vary according to network usage and number of concurrent users. You can exactly measure and monitor them for example with JMeter. But in case of sophisticated Ajax functionality, the simulation of the right client requests in JMeter is a complex task. Pros of JMeter: Exact measurement, possibility to stress an application with many concurrent users. Cons: Limited for Ajax, much effort for request building.
Another option might be Selenium Remote Control (or Selenium in general).
One option for headless automation is to use HtmlUnit. Have a look at this link for more information: Using HtmlUnit on .NET for Headless Browser Automation
The following Headless IE port for PhantomJS is currently in Beta (v0.2):
http://triflejs.org/
Here is a quick intro:
The API is the same as PhantomJS, so eventually you'll be able to do the following:
// 1. Create page object and navigate to Google
var page = require("webpage").create();
page.open("http://www.google.com", function(status) {
    if (status === "success") {
        // 2. Inject jQuery for DOM operations
        page.includeJs("http://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js", function() {
            // 3. Start timer
            console.log('Start Timer: ' + (new Date()).getTime());
            // 4. Type the search string and click the search button
            page.evaluate(function() {
                $("input[type=text]").first().val("stackoverflow");
                $("button:contains('Google Search')").click();
            });
            // 5. Wait for loading and end timer
            page.onLoadFinished = function() {
                console.log('Load Finished. End Timer: ' + (new Date()).getTime());
                phantom.exit();
            };
        });
    }
});
