JMeter use in sanity testing on production servers

I'm using JMeter in a development environment and I'm thinking of executing sanity tests on our production servers.
Sanity checks of web site login and other actions.
Is it reasonable to use JMeter against production servers? How can I limit JMeter so it won't impact real users? The only tutorial I found advises against it:
Do not run these tests against your production servers unless you know they can handle the load, or you may negatively impact your server's performance.

From JMeter's point of view it doesn't really matter where you run your tests. Running load tests against the production environment is very useful, as this way you can discover "real" limitations, bottlenecks, integration and interoperability problems, as opposed to load testing in scaled-down environments where you can only guess or calculate the anticipated production metrics.
Ideally you should have some form of "staging" environment which is an exact replica of the production environment in terms of hardware, software and data.
If you cannot afford a "staging" environment to play with, you can run your tests on production; however, you need to keep in mind several important constraints to avoid "surprises":
Run your tests in "dead" time when your application's real-life usage is minimal, i.e. overnight or during weekends.
Make sure the JMeter test leaves the system in the same state it was in before the test, i.e. if you create users, content, data, etc. - make sure you clean it up after the test so your system is not filled with "junk" data used for load testing. Consider using a setUp Thread Group to create all the necessary test data and a tearDown Thread Group to clean up after yourself.
Make sure you monitor your servers' health so you will be notified when (or if) your system is overloaded. You can use the JMeter PerfMon Plugin for this (an example agent command is sketched after this list).
It would also be good to have an AutoStop Listener enabled so the JMeter test stops automatically if response times or error rates exceed the thresholds you define.
Consider adding an SMTP Sampler to your test plan so you will be informed in case of unexpected errors.
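For the PerfMon point above, monitoring assumes the PerfMon Server Agent has been downloaded and started on each host you want to watch. A minimal, hedged sketch (the port is the plugin's usual default and just an example here):

./startAgent.sh --tcp-port 4444

JMeter's PerfMon Metrics Collector listener then polls that host and port for CPU, memory, disk and network metrics during the test.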

As an engineering manager I would say: not in my lifetime ;-)
So what do you want to hear: that it is not a problem?
Only you can tell whether it would be an issue if something behaves differently from what you expect.
My advice would be the same as what you are quoting: don't do it. Unless you know what you are doing, and even then...

Related

JMeter CLI stops tests after some time. Any ideas?

When I run JMeter from the Windows CLI, after some random time the tests stop or get stuck. I can press Ctrl+C (once) just to refresh the run, but some of the requests are lost during the time it was stuck.
Take a look at the jmeter.log file; normally it should be possible to figure out what's wrong from the messages there. If you don't see any suspicious entries, you can increase JMeter's logging verbosity by changing values in the log4j2.xml file or via the -L command-line parameter.
Take a thread dump and see what exactly the threads are doing when they're "stuck".
If you're using HTTP Request samplers, be aware that JMeter will wait for the result forever by default, and if the application fails to respond at all your test will never end, so you need to set reasonable connect and response timeouts.
Make sure to follow JMeter Best Practices
Take a look at resource consumption (CPU, RAM, etc.) - if your machine is overloaded and cannot generate the required load you will need to switch to distributed testing. (Example commands for the logging, thread dump and distributed-testing steps are sketched after this list.)
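A hedged sketch of those command-line steps (test.jmx, results.jtl, the host names and the process id are placeholders, not values from the question):

jmeter -n -t test.jmx -l results.jtl -Lorg.apache.jmeter=DEBUG
jstack <jmeter-java-pid> > threads.txt
jmeter -n -t test.jmx -l results.jtl -R host1,host2

The first line runs the plan in non-GUI mode with increased logging for the org.apache.jmeter category, the second captures a thread dump of the running JMeter JVM with the JDK's jstack tool, and the third pushes the same plan out to two remote load generators for distributed testing.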
There are several approaches to debugging a JMeter test which can be combined into a general, systematic approach that is capable of diagnosing most problems.
The first thing I would suggest is running the test within the JMeter GUI to visualize the test execution. For this you may want to add a View Results Tree listener, which will provide you with real-time results for each request generated.
Another way you can monitor your test execution in real time within the JMeter GUI is with the Log Viewer. If any exceptions are encountered during your test execution you will see detailed output in this window. It can be found under the Options menu.
Beyond this, JMeter writes output files which are often very useful in debugging your load tests. Both the .log file and the .jtl file provide a time-stamped history of every action your test performs. From there you can likely track down the offending request or error if your test unexpectedly hangs.
If you do decide to move your test into the cloud using a service that hosts it, you may be able to get more information through that platform. There are comprehensive guides on debugging JMeter load tests that cover the above approaches as well as more advanced concepts. Using a cloud load-test provider can also give your test additional network and machine resources beyond what your local machine has, if the problem is related to a resource bottleneck.

Why is it recommended to run load tests in non-GUI mode in JMeter?

I'm monitoring the connect time and latency from the JMeter machine while running in GUI mode, and they are within acceptable limits.
Should we strictly stick to non-GUI mode even though I am able to perform the load test in GUI mode?
I'm targeting 250 TPS and am able to achieve that. I have increased my memory, and the load generator's CPU and memory that I am monitoring stay below 60%.
Should I go for non-GUI mode?
The main limitation of GUI mode is that every event in the GUI queue is handled by a single Event Dispatch Thread, which will act as the bottleneck on the JMeter side.
My expectation is that your "250 TPS" actually comes in a ragged saw-tooth pattern of bursts and pauses, while it should look like a smooth, consistent line at the target throughput.
So check what your load pattern actually looks like using e.g. the Transactions per Second listener (installable via the JMeter Plugins Manager).
Also check how your JVM behaves, especially when it comes to garbage collection; this can be done via e.g. JVisualVM. Most probably you will see the same "chainsaw" pattern there.
You don't strictly have to follow JMeter best practices, but non-GUI mode becomes necessary when:
you encounter issues achieving specific goals (such as a target TPS)
your machine can't run a GUI or has limited resources
you execute JMeter from a script or a build tool such as Jenkins
Also, it's better to be familiar with the JMeter CLI (non-GUI) mode and its reporting capabilities (a short example follows at the end of this answer).
JMeter supports dashboard report generation to get graphs and statistics from a test plan.
It will also be needed for distributed testing:
consider running multiple CLI JMeter instances on multiple machines, using distributed mode (or not)
The CLI is also useful for parameterising tests.
The "loops" property can then be defined on the JMeter command-line:
jmeter … -Jloops=12
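As a further illustration (plan.jmx, results.jtl and the report folder name are placeholders), a non-GUI run that also generates the HTML dashboard report looks like:

jmeter -n -t plan.jmx -l results.jtl -e -o report

and a property passed with -Jloops can be read inside the Thread Group's loop count field via the __P function, here falling back to 10 when the property is not set:

${__P(loops,10)}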

LoadRunner 11.03 performance issue?

Recently, I received from my client a PC with LoadRunner 11.03 (perhaps patch 3) installed, and I have been watching our web application's performance with it in a long-running test.
In the multiple-user test it does not seem to generate the expected load: the performance monitors on my web servers never come close to any limit in CPU usage, network bandwidth, disk usage per minute or memory usage. Only the number of waiting threads looked slightly bad, but it was not obvious.
It looks like sequential behaviour rather than parallel access.
(No errors occurred.)
So I think the problem is not with the servers, but that the client machine has something preventing it from generating parallel access for some reason.
I don't have a proper HP Passport ID, so I can't access the LoadRunner patches website.
Please let me know whether LoadRunner patches, especially patch 4 or higher, would change the behaviour described above.
OK, it sounds like you are just running the script in VuGen. I am guessing (based on what you wrote, correct me if I'm wrong) that you are running the script in the Virtual User Generator and not in the Controller. LoadRunner is actually a suite of multiple applications. The Virtual User Generator is the script development application, a development environment like Eclipse. It is single-threaded, and running a script there is meant only to test the script individually.
To run a multi-threaded test you need to use the Controller application: develop a test scenario, assign multiple virtual users (the LoadRunner term for concurrent threads) to each script you want to run, and execute the test from the Controller. You can configure machines to be Load Generators (another application set up to run as a process or service) and push the test out from the Controller to the Generators.

TDD Scenario: Looking for advice

I'm currently in an environment where we are parsing data off of the client's website. I want to use my tests to ensure that when the client changes their site, I know when we are no longer receiving the information.
My first approach was to write pure integration tests where my tests hit the client's site and assert that the data was found. However, halfway through and 500 tests in, the test run has become unbearable and in some cases started timing out. So I cleared out as many tests as I could without losing the core protection they provide, and I'm down to 350 or so. I'm now afraid that adding more tests will only break the whole suite again. I also find myself no longer running the 5+ minute suite (some clients take longer, as this depends on the speed of communication with their site) when I make changes. I consider this a complete failure.
I've been putting a lot of thought into this and asking around the office. My idea for the next attempt is to pull down the client's pages and write tests against these embedded resources in my projects. This would give me higher test coverage and allow me to go back to testing in isolation. However, I would need to be notified when they make changes and then re-pull the pages to test against. I don't think the clients will adhere to this.
A suggestion was made to augment this with a suite of "random" integration tests that serve the same function as my failed tests (hitting the client's site) but in far smaller numbers than before. I really don't like the idea of random testing, where the same code can sometimes produce red lights and sometimes green lights. But so far this sounds like the best idea I've heard for still knowing when the client's site has changed and my code no longer finds the data.
Has anyone found themselves testing an environment like this? Any suggestions from the testing community for me?
When you say the big test has become unbearable, it suggests that you are running this test suite manually. You shouldn't have to. It should just be running constantly in the background, at whatever speed it takes to complete the suite - and then start over again (perhaps after a delay if there are associated costs). Only when something goes wrong should you get an alert.
If there is something about your tests that causes them to get slower as their number grows - find it and fix it. Tests should be independent of one another, so simply having more of them shouldn't cause individual tests to time out.
My recommendation would be to isolate, as much as possible, the part of the code that deals with the uncertainty. This part should be an API that works as a service used by all the other code. This way you would be protecting most of your code against changes.
The stable parts of the code should be unit-tested. With that part being independent from the connection to the client's site, running the tests would be much quicker, and it would also make those tests more reliable.
The part that has to deal with changes on the client's websites is then reduced. This way you are not solving the problem, but at least you are minimising it and centralising it in a single module of your code.
Suggesting to the clients to expose the data as a web service would be the best for you. But I guess that doesn't depend on you :P.
You should look at dividing your tests up, maybe into separate assemblies that can be run independently. I typically have a unit tests assembly and a slower running integration tests assembly.
My unit tests assembly is very fast (because the code is tested in isolation using mocks) and gets run very frequently as I develop. The integration tests are slower and I only run them when I finish a feature / check in or if I have a bad feeling about breaking something.
Maybe you could do something similar or even take the idea further and have 3 test suites with the third containing even slower client UI polling tests.
If you don't have a continuous integration server / process, you should look at setting one up. This would continuously build your software and execute the tests. It could be set up to monitor check-ins and work in the background, sending out a notification if anything fails. With this in place you wouldn't care how long your client UI polling tests take, because you wouldn't ever have to run them yourself.
Definitely split the tests out - separate unit tests from integration tests as a minimum.
As Martyn said, get a Continuous Integration system in place. I use TeamCity, which is excellent, easy to use, free for the first 20 builds, and you can happily run it on your own machine if you don't have a server at your disposal - http://www.jetbrains.com/teamcity/
Set up one build to run on every check in, and make that build run your unit tests, or fast-running tests if you will.
Set up a second build to run at midnight every night (or some other convenient time), and include in this the longer running client-calling integration tests. With this in place, it won't matter how long the tests take, and you'll get a big red flag first thing in the morning if your client has broken your stuff. You can also run these manually on demand, if you suspect there might be a problem.

Good way to capture/replay sessions from Apache Log?

For performance testing, I would like to capture some traffic from a production server and use that as a basis to replay the request to a test server in order to simulate a realistic load in our development environment. These are all stateless queries, so no issues regarding cookies, sessions, etc.
The Apache log timestamps everything down to a 1 second resolution, but that's not fine enough granularity for our peak times. What's the best way to capture more fine-grained timestamps for replay? And is there some ab-like load generating program that can use this data to replicate load?
Use JMeter.
https://serverfault.com/questions/84041/how-can-i-replay-apache-access-logs-back-at-my-servers-to-do-real-world-load-test
http://jmeter.apache.org/usermanual/component_reference.html#Access_Log_Sampler
As far as timestamp granularity goes, you're not going to get better than that from the access log. However, you can randomize the time slots within JMeter. Even if your production traffic logs show hits every second, you can tell JMeter to speed that up drastically.
You could capture the network data of a production run, parse it, and then use that as a replay mechanism comparing the results of the production run and the test run (where desired). Oren Eini (Ayende Rahien) talks about something quite similar on his blog.
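For the capture step, one possible tool (my suggestion, not something mentioned above) is tcpdump; a minimal sketch where the interface, port and file name are placeholders:

tcpdump -i eth0 -w prod-capture.pcap 'tcp port 80'

The resulting pcap file can then be parsed to extract request URLs and sub-second timings for replay against the test server.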
I know that there is (or was) a tool that allowed you to do load/performance testing based on recorded sessions, but I can't find it right now :(.
You can also use BadBoy to capture sessions to replay w/ JMeter:
http://www.badboysoftware.biz/docs/jmeter.htm
