JMeter Best Practices and Reporting

I am new to JMeter, so I want to know about JMeter best practices. Can anyone briefly explain them?
Thanks in advance

The main, and arguably only, "best practice" for performance testing and its subtypes (load testing, stress testing, soak testing) is this: a well-behaved performance test must represent real application usage by real users as closely as possible, otherwise the test doesn't make sense.
So keep this in mind while designing your test plan. For example, if you are testing a web application, remember that the real user is a real person sitting in front of a real computer and using a real browser. So you need to:
Accurately mimic the real user scenario in terms of HTTP requests
Make sure you properly configure JMeter to handle embedded resources (images, scripts, styles, fonts)
Make sure you handle cookies
Make sure to send relevant HTTP headers (User-Agent, Accept-Encoding, etc.)
Make sure JMeter is configured to simulate the browser cache
Real users need some time to "think" between operations, so add reasonable delays using timers (a scripted example follows this list)
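For the think-time delays, JMeter's built-in timers (Constant Timer, Uniform Random Timer) cover most cases; a JSR223 Timer can compute a delay dynamically. A minimal sketch, assuming Groovy as the scripting language and illustrative timing values:

    // JSR223 Timer (Groovy): the script's return value is used as the delay
    // in milliseconds, so this yields a randomized "think time" of 2-5 seconds.
    // The 2000/3000 ms figures are placeholders; base them on real user behaviour.
    return 2000L + new Random().nextInt(3000)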
Once you have the test ready you can run it in command-line non-GUI mode (for example: jmeter -n -t plan.jmx -l results.jtl -e -o report) and analyze the results using the HTML Reporting Dashboard.

Related

Trying to Performance test an application developed in OJET technology. Which tool/protocol should I use for scripting?

I tried the HTTP/Web protocol with JMeter and LoadRunner, but that doesn't capture all the requests and responses at the JavaScript/browser level, so I am facing issues correlating the dynamic values during test design, and the scripts fail during replay. I am currently trying the TruClient Web protocol as an alternative, but I need to know which tool/protocol I should use for scripting.
Looking at OJET, it appears to be a web app generator.
If you choose to start with JMeter, use a post-processor such as the Regular Expression Extractor to capture and save every value that is needed as an argument in the next request; a scripted variant is sketched below.
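For illustration, the same extraction can be scripted in a JSR223 PostProcessor. A minimal Groovy sketch; the attribute name csrfToken and the pattern are invented for the example:

    // JSR223 PostProcessor (Groovy): pull a dynamic value out of the previous
    // response and save it for use in the next request.
    def matcher = prev.responseDataAsString =~ /name="csrfToken"\s+value="([^"]+)"/
    if (matcher.find()) {
        vars.put('csrfToken', matcher.group(1))   // reference later as ${csrfToken}
    } else {
        log.warn('csrfToken not found in response from ' + prev.getUrlAsString())
    }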
Don't be afraid of these dynamic values. Try to follow the articles below to get the idea.
No tool will give you automatic correlation without issues, neither LoadRunner nor JMeter. It is always tricky.
Ask more specific questions when you start facing issues.
JMeter catch correlations
You need to simulate a real user using your application with 100% accuracy in terms of network footprint.
Neither JMeter nor LoadRunner is capable of executing client-side JavaScript, so the options are:
Implement these JavaScript-driven network calls using scripting (in JMeter this would be JSR223 Test Elements; see the sketch after this list)
Use a real browser: LoadRunner's TruClient protocol is basically a headless web browser, and JMeter can be integrated with the Selenium browser automation framework via the WebDriver Sampler
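To give an idea of the first option, here is a minimal JSR223 Sampler sketch in Groovy; the endpoint, header, and payload are invented placeholders, not a real API:

    // JSR223 Sampler (Groovy): hand-rolled replay of a network call that the
    // browser's JavaScript would normally fire.
    def conn = (HttpURLConnection) new URL('https://example.com/api/cart/refresh').openConnection()
    conn.requestMethod = 'POST'
    conn.doOutput = true
    conn.setRequestProperty('Content-Type', 'application/json')
    conn.setRequestProperty('X-Requested-With', 'XMLHttpRequest')
    conn.outputStream.withWriter('UTF-8') { it << '{"sessionId":"' + vars.get('sessionId') + '"}' }

    // Report the outcome back to JMeter via the bound SampleResult.
    def code = conn.responseCode
    SampleResult.setResponseCode(code as String)
    SampleResult.setSuccessful(code == 200)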
With regard to "which protocol/tool" to use:
Implementing JavaScript calls manually will take extra effort; however, your test will consume fewer resources (CPU, RAM, etc.)
Using real browsers will take less effort, but the test will consume much more resources (something like 1 CPU core and 2 GB of RAM per user/browser instance) and you won't have metrics like Connect Time, Latency, etc.
LoadRunner TruClient will handle all of the JavaScript execution and dynamic elements related to session, state, date/time, object identifiers, ... You will still need to appropriately handle user input items.

Does JMeter end up testing the back-end of a web-application with the front-end as an entry point?

I'm told that we need to do some performance testing on one of our web applications, so I'm trying to get some JMeter stuff to work, which as far as I know would simulate the HTTP GETs and POSTs. However, one of my colleagues is telling me that if I use it, it'd only accomplish front-end testing. But if I do this, it's still able to create items in the database and interact with the logic, so I figured it should be sufficient for performance testing of the back-end. Her reasoning is that "if it goes through http pages, we can’t tell which affects the performance".
So am I totally wrong? I'm confused.
The whole idea of load testing is to simulate real-life users' actions and behavior as closely to reality as possible.
In JMeter terms that assumes the presence and appropriate configuration of the following test elements:
HTTP Cookie Manager - to represent browser cookies and deal with cookie-based authentication (a quick scripted check is sketched after this list)
HTTP Header Manager - to represent browser headers like User-Agent, Accept-Encoding, Accept-Language, Content-Type, etc.
HTTP Cache Manager - browsers download embedded resources like scripts, styles, images, etc., but do so only once; on subsequent requests these entities are returned from the cache. To simulate this behavior you need the HTTP Cache Manager
HTTP Request samplers need to be configured to fetch embedded resources from web pages and to use a separate thread pool for this. See the How to make JMeter behave more like a real browser guide for more details on configuring realistic behavior.
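As a quick way to confirm the HTTP Cookie Manager is wired up, a JSR223 PostProcessor can list what it currently holds. A throwaway debugging sketch in Groovy, assuming the parent sampler is an HTTP Request:

    // JSR223 PostProcessor (Groovy): dump the cookies the HTTP Cookie Manager
    // is tracking for the current sampler. Debugging aid only; remove it
    // before running the real load test.
    import org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase

    def cookieManager = ((HTTPSamplerBase) sampler).getCookieManager()
    if (cookieManager != null) {
        (0..<cookieManager.cookieCount).each { i ->
            def c = cookieManager.get(i)
            log.info("Cookie ${c.name}=${c.value} (domain: ${c.domain})")
        }
    }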
So, given the JMeter test is well designed and implemented, it is quite enough to test the backend as well. If during the load test you figure out that the bottleneck is, for example, the database, you may need to load-test the database separately; JMeter is capable of doing this as well. However, I'm a strong believer that load testing should be done against an environment as close to production as possible and should target the whole system rather than individual components.

Include static resources like images, CSS, JS, etc. in tests

I've recently started using JMeter to create load tests for my web applications. I really like the tool, and after watching some videos it was really easy to get started with creating tests.
There's however one thing that I'm not clear about.
Reading the JMeter homepage, there's a "Best practices" section. Among other things, it says:
The most important thing to do is filter out all requests you aren't interested in. For instance, there's no point in recording image requests (JMeter can be instructed to download all images on a page - see HTTP Request). These will just clutter your test plan.
I've seen this on other pages as well, saying that you shouldn't include requests for images or any other static resources in your tests. However, I've still not been able to find a single page which gives a good explanation as to WHY you shouldn't include static resources.
Sure, JMeter isn't a browser, but requests for static resources would no doubt affect the performance of your application? Can someone please give me a good explanation :-)
It all depends on what you are trying to test.
In general, there are two types of performance test I do with JMeter: specific tests, where I look at things that I'm worried about, and "safety net" tests, where I measure the entire application to make sure it does indeed work the way I expect it to.
Specific tests nearly always deal with the dynamic aspects of the web application - the server-side code (.aspx, .php, .jsp etc.). This is where most applications have their bottlenecks - the effort to run a server-side script is many, many times higher than the effort to retrieve a CSS file from disk and serve it up to the browser without any additional processing. If I'm testing the server-side scripts, I don't want to also load the assets - because they clutter up the tests, and consume bandwidth at the test client end. I don't want my tests to fail because my JMeter server is downloading a 5MB video file on each thread and consuming all the bandwidth, when what I'm actually trying to do is see how many logins per second the server can support.
There's very little point in testing your webserver's ability to serve static files - Microsoft, the Apache team, whoever, have already done a brilliant job at that; unless you have a very specific concern, there are better ways to spend your testing budget.
Safety net tests put the whole thing together to prove that it all really does work the way I expect it. Usually, I run these on a production(like) environment, so I have a CDN, production-grade hardware, and the "live" application config. I usually employ a cloud-based testing service for this, so I can see performance from different locations, and generate enough load to stress production-grade kit. You could use JMeter for this (and there are a couple of JMeter Cloud services I've used in the past). It's expensive, it may require downtime, and you should only do it as a safety net.
When you want to do a proper performance test (especially a stress test), where you need to produce your application's response time as a function of the number of users/threads over time, you need to include all static resources, just as the JMeter Proxy saved them when you recorded your test.
To take the browser cache into account you can either use the HTTP Cache Manager or a Once Only Controller, so that each thread only downloads static resources once, with its first request.
The HTTP Cache Manager is the recommended way to go and much easier to set up: simply include it in your test as the first child of a thread group.
The Once Only Controller is more often used when you need to log in users only on their first request.
BTW, parametrization of non-static HTTP requests is recommended: you won't, for example, search for the same product name or buy the same book every time. That's usually the starting point, and it can give you a general idea of the performance efficiency of your app; a minimal sketch follows.
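A minimal Groovy sketch of such parametrization via a JSR223 PreProcessor; the term list is invented, and a CSV Data Set Config is the more common, script-free way to feed such data:

    // JSR223 PreProcessor (Groovy): vary the search term per iteration so
    // threads don't all request the identical product.
    def terms = ['laptop', 'headphones', 'monitor', 'keyboard']
    vars.put('searchTerm', terms[new Random().nextInt(terms.size())])
    // The HTTP Request sampler can then use ${searchTerm} in its path or parameters.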
Hope this helps...
Unless your app is used by casual visitors who only look at one page and then go, there is a good chance that the static resources are being downloaded once, and then served from the browser cache.
Moreover, although static resources affect the bandwidth and the overall response time for the user, they should have a small impact on the server load, and they might not be the kind of thing that you want to measure.
I guess you need to try mimicking what actual, real users would do with the application.

What differentiates virtual users from real users when performing a load test?

Can anyone point out the difference between a virtual user and a real user?
In the context of web load testing, there are a lot of differences. A virtual user is a simulation of human using a browser to perform some actions on a website. One company offers what they call "real browser users", but they, too, are simulations - just at a different layer (browser vs HTTP). I'm going to assume you are using "real users" to refer to humans.
Using humans to conduct a load test has a few advantages, but is fraught with difficulties. The primary advantage is that there are real humans using real browsers - which means that, if they are following the scripts precisely, there is virtually no difference between a simulation and real traffic. The list of difficulties, however, is long: First, it is expensive. The process does not scale well beyond a few dozen users in a limited number of locations. Humans may not follow the script precisely...and you may not be able to tell if they did. The test is likely not perfectly repeatable. It is difficult to collect, integrate and analyze metrics from real browsers. I could go on...
Testing tools which use virtual users to simulate real users do not have any of those disadvantages, as they are engineered for this task. However, depending on the tool, they may not perform a perfect simulation. Most load testing tools work at the HTTP layer - simulating the HTTP messages passed between the browser and server. If the simulation of these messages is perfect, then the server cannot tell the difference between real and simulated users... and thus the test results are more valid. The more complex the application is, particularly in its use of JavaScript/AJAX, the harder it is to make a perfect simulation. The capabilities of tools in this regard vary widely.
There is a small group of testing tools that actually run real browsers and simulate the user by pushing simulated mouse and keyboard events to the browser. These tools are more likely to simulate the HTTP messages perfectly, but they have their own set of problems. Most are limited to working with only a single browser (e.g. Firefox). It can be hard to get good metrics out of real browsers. This approach is far more scalable than using humans, but not nearly as scalable as HTTP-layer simulation. For sites that need to test <10k users, though, the web-based solutions using this approach can provide the required capacity.
There is a difference.
It depends on your JMeter setup: if you are testing from a single box, your I/O is limited. You can't imitate, let's say, 10K users with JMeter on a single box; you can do small tests with one box. If you use multiple JMeter boxes, that's another story.
Also, what about cookies - do you store cookies while load testing your app? That does make a difference.
A virtual user is an automated emulation of a real user's browser and HTTP requests.
Thus a virtual user is designed to simulate a real user. It is also possible to configure virtual users to run through what we think real users would do, but without all the delay between getting a page and submitting a new one.
This allows us to simulate a much higher load on our server.
The real key differences between virtual-user simulations and real users are the network between the server and their device, as well as the actual actions a real user performs on the website.

Performance testing an application for bottlenecks using production data

I have been tasked with looking for a performance testing solution for one of our Java applications running on a Weblogic server. The requirement is to record production requests (both GET and POST including POST data) and then run these requests in a performance test environment with a copy of the production database.
The reasons for using production requests instead of a test script are:
It is a large application with no existing test scripts, so it would be a large amount of work to write scripts to cover the entire application.
Some performance issues only appear when users do a number of actions in a particular order.
To test using actual user interaction with the system, not an estimate of how users may interact with it. We all know that users will do things we have not thought of.
I want to be able to fix performance issues and rerun the requests against the fixed code before releasing to production.
I have looked at using JMeter's Access Log Sampler with server access logs; however, the access logs do not contain POST data, and the Access Log Sampler only looks at the request URL, so it cannot simulate users submitting form data.
I have also looked at using the JMeter HTTP Proxy Server; however, this can record the actions of only one user and requires the user to configure their browser to use the proxy. The same limitation exists with Tsung and The Grinder.
I have looked at using Wireshark and TCReplay, but recording at the packet level is excessive and will not give any useful reports at a request level.
Is there a better way to analyze production performance considering I need to be able to test fixes before releasing to production?
That is going to be a hard ask. I work with Visual Studio Test Edition to load test my applications, and we are only able to "estimate" the users' activity on the site.
It is possible to look at the logs and gather information on the likelihood of certain paths through your app. You can then look at the production database to see the likely values entered in any POST requests. From that you will have to build load tests that approximate the usage patterns of your production site.
With any current tools, I don't think it is possible to record and play back actual user interaction.
It is possible to alter your web app so that it records and logs every request and POST against session and datetime; a rough sketch of such a logging filter follows. This custom logging could then be used to generate load-test requests against a test website. It would be a serious code change to your existing site and would likely have performance impacts.
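A rough sketch of what that logging could look like as a servlet filter, written here in Groovy against the javax.servlet API; it illustrates the idea rather than being production-ready code:

    // Hypothetical request-logging filter: writes timestamp, session id, method
    // and URI for every request so traffic can later be replayed against a test
    // environment. Reading POST bodies would additionally require wrapping the
    // request so the body can be consumed twice; that part is omitted here.
    import javax.servlet.*
    import javax.servlet.http.HttpServletRequest

    class ReplayLogFilter implements Filter {
        void init(FilterConfig config) { }

        void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) {
            def r = (HttpServletRequest) req
            println "${System.currentTimeMillis()}\t${r.session.id}\t${r.method}\t${r.requestURI}"
            chain.doFilter(req, res)
        }

        void destroy() { }
    }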
That said, I have worked with web apps that do this level of logging and the ability to analyse the exact series of page posts/requests that caused an error is quite valuable to a developer.
So in summary: It is possible, but I have not heard of any off the shelf tools that do it.
Please check out this whitepaper by Impetus Technologies: http://www.impetus.com/plabs/sandstorm.html
Honestly, I'm not sure the task you're being asked to do is even possible, let alone a good idea. Depending on how complex the application's backend is, and how perfectly you can recreate its state (i.e. all the way down to external SOA services or the time/clock), it may not be possible to make those GET and POST requests reproduce the same behavior.
That said, performance testing against production data is always great, but it usually requires application-specific knowledge that will stress said data. Simply repeating HTTP GETs and POSTs will almost certainly not yield useful results.
Good luck!
I would suggest the following to get the production requests and simulate an accurate workload:
1) Use Coremetrics: Coremetrics provides solutions with which you can learn the application's usage patterns. This helps in coming up with an accurate workload model, which can then be converted into test scripts and executed against a masked copy of the production database. This will give you accurate results about the application's performance.
2) Another option would be to create a small utility using AOP (aspect-oriented programming) that traces all the requests and the corresponding method calls. This would help in identifying the production usage pattern and, in turn, in simulating the workload accurately. AOP frameworks such as AspectJ can be used; this does not require any changes to the code, since the instrumentation can be done on the fly. Another benefit is that it can be enabled only for a specific time window and then turned off. A rough sketch follows.
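A rough Groovy sketch of such an aspect in the @AspectJ annotation style; the pointcut expression and package names are invented, and the weaving setup (AspectJ or Spring AOP) is omitted:

    // Hypothetical tracing aspect: logs how long each matched controller
    // method takes, without touching the application code itself.
    import org.aspectj.lang.ProceedingJoinPoint
    import org.aspectj.lang.annotation.Around
    import org.aspectj.lang.annotation.Aspect

    @Aspect
    class RequestTraceAspect {

        @Around('execution(* com.example.web..*Controller.*(..))')
        Object trace(ProceedingJoinPoint pjp) throws Throwable {
            long start = System.nanoTime()
            try {
                return pjp.proceed()
            } finally {
                long elapsedMs = (System.nanoTime() - start).intdiv(1000000L)
                println "${pjp.signature.toShortString()} took ${elapsedMs} ms"
            }
        }
    }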
Regards,
batterywalam
