I have a requirement to load test a web application using LoadRunner (Community Edition 12.53). Currently I have my test scripts recorded using LoadRunner's default script recorder. I'm assuming that the operations I chose to perform in the SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario?
When I ran my test scripts, I couldn't see any values updated in the application DB.
My test scripts are written in C, and manual correlation is applied using the web_reg_save_param function.
What could go wrong in such a scenario? Any help would be deeply appreciated.
The operations I chose to perform in the SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario? - Yes, this is the correct behaviour.
When I ran my test scripts, I couldn't see any values updated in the application DB. - You are probably missing something in the correlations. This generally happens when some dynamic value is not correlated properly or gets missed, or when something like a timestamp that you might think is irrelevant actually needs to be taken care of.
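To illustrate what a missed correlation costs you, here is the general idea sketched in plain Java rather than LoadRunner's C syntax (the URLs and the csrf_token field name are made up for the sketch): capture the dynamic value the server sends back and submit that captured value, instead of replaying whatever was recorded. If such a value is missed, the server typically rejects the request quietly and nothing ever reaches the DB, even though the script appears to pass.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CorrelationSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: request the page; the server embeds a per-session token in the HTML.
        HttpResponse<String> page = client.send(
                HttpRequest.newBuilder(URI.create("https://sut.example.com/order/new")).build(),
                HttpResponse.BodyHandlers.ofString());

        // Step 2: "correlate" - capture the dynamic value instead of replaying the recorded one.
        Matcher m = Pattern.compile("name=\"csrf_token\" value=\"([^\"]+)\"").matcher(page.body());
        if (!m.find()) {
            throw new IllegalStateException("Token not found - the submit below would be rejected");
        }

        // Step 3: send the captured token back; if this value were missed, the POST would be
        // quietly rejected and nothing would reach the database, even though the script "passes".
        HttpResponse<String> submit = client.send(
                HttpRequest.newBuilder(URI.create("https://sut.example.com/order/create"))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString("item=42&csrf_token=" + m.group(1)))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println("Submit returned HTTP " + submit.statusCode());
    }
}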
I'm using JMeter for integration and non-regression testing.
The tests are automated and reports are working.
But since this is scenario testing rather than performance testing, the report doesn't provide real business value for that kind of test.
My question: is there any way to have scenario-based (Transaction Controller based) reporting?
For the moment, to get more meaningful results, Transaction Controllers and Dummy Samplers are used.
What we would like to have is the number of successful/failed scenarios in the last test run, and also a history of successes/failures per test run (one per day).
Thank you for your advice.
The easiest way of getting this done is putting your JMeter test under Jenkins orchestration so it is executed automatically based on a VCS hook or on a schedule.
Once done, you will be able to utilize the Jenkins Performance Plugin, which adds test result trend charts and the ability to mark a build as unstable/failed depending on various criteria.
If I'm not wrong, you want to create a suite based on particular test cases, e.g. where a single case includes the execution of more than one request in a single run.
If this is the case, you can simply create a Test Fragment through the JMeter GUI and copy all the samplers into a single fragment.
Now, to control their execution you can use any controller of your choice; I would suggest using a Module Controller for HTTP samplers.
I'm using JMeter in a development environment and I'm thinking of executing sanity tests on production servers.
Sanity checks of website login and other actions.
Is it reasonable to use JMeter on production servers? How can I limit JMeter so it won't impact real users? I only found a tutorial which advises against it:
Do not run these tests against your production servers unless you know they can handle the load, or you may negatively impact your server's performance.
From JMeter's point of view it doesn't really matter where you run your tests. Running load tests against the production environment is very useful, as this way you can discover "real" limitations, bottlenecks, integration and interoperability problems, as opposed to load testing in scaled-down environments where you can only guess or calculate the anticipated production metrics.
Ideally you should have some form of "staging environment" which is an exact replica of the production environment in terms of hardware, software and data.
If you cannot afford a "staging" environment to play with, you can run your tests against production; however, you need to keep in mind several important constraints to avoid "surprises":
Run your tests during "dead" time when your application's real-life usage is minimal, e.g. overnight or during weekends.
Make sure the JMeter test leaves the system in the same state it was in before the test, i.e. if you create users, content, data, etc., make sure you clean them up afterwards so your system is not filled with "junk" data used for load testing. Consider using a setUp Thread Group to set up all the necessary test data and a tearDown Thread Group to clean up after yourself.
Make sure you monitor your servers' health so you will be notified when (if) your system becomes overloaded. You can use the JMeter PerfMon Plugin for this.
It would also be good to have the AutoStop Listener enabled so the JMeter test stops automatically if things go wrong.
Consider adding an SMTP Sampler to your test plan so you are informed in case of unexpected errors.
As an engineering manager I would say: not in my lifetime ;-)
So what do you want to hear: that it is not a problem?
Only you can tell whether it would be an issue if something behaves different from what you expect.
My advice would be the same as what you are quoting: don't do it. Unless you know what you are doing, and even then...
I'm currently in an environment where we are parsing data off the client's website. I want to use my tests to ensure that when the client changes their site, I know that we are no longer receiving the information.
My first approach was to do pure integration tests where my tests hit the client's site and assert that the data was found. However, halfway through and 500 tests in, the test run became unbearable and in some cases started timing out. So I cleared out as many tests as I could without losing the core protection they provide, and I'm down to 350 or so. I'm now left with a fear that adding more tests will only break the whole suite. I also find myself no longer running the 5+ minute suite (for some clients it takes longer, since this depends on the speed of communication with their site) when I make changes. I consider this a complete failure.
I've been putting a lot of thought into this and asking around the office. My idea for the next attempt is to pull down the client's pages and write tests against them as embedded resources in my projects. This would give me higher test coverage and allow me to go back to testing in isolation. However, I would need to be notified when they make changes and then re-pull the pages to test against. I don't think the clients will adhere to this.
A suggestion was made to augment this with a suite of "random" integration tests that serve the same function as my failed tests (hitting the client's site) but in far smaller numbers than before. I really don't like the idea of random testing, where the same code can sometimes produce red lights and sometimes green lights. But so far this sounds like the best idea I've heard for staying aware of when a client's site has changed and my code no longer finds the data.
Has anyone found themselves testing an environment like this? Any suggestions from the testing community for me?
When you say the big test has become unbearable, it suggests that you are running this test suite manually. You shouldn't have to. It should just be running constantly in the background, at whatever speed it takes to complete the suite - and then start over again (perhaps after a delay if there are associated costs). Only when something goes wrong should you get an alert.
If there is something about your tests that causes them to get slower as their number grows - find it and fix it. Tests should be independent of one another, so simply having more of them shouldn't cause individual tests to time out.
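For instance, here is a rough JUnit 5 sketch of what per-test independence can look like (PriceParser and the saved page are made-up examples, not from the original post): each test builds its own fixture from a local copy of the page, so adding more tests adds parse time but never network time or shared state.

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceParserTest {
    private String savedPage;   // a local copy of the client's page, not a live request

    @BeforeEach
    void loadFixture() {
        // Each test gets its own fresh fixture; nothing is shared or left over between tests.
        savedPage = "<span class=\"price\">$19.99</span>";
    }

    @Test
    void findsThePrice() {
        assertEquals("19.99", new PriceParser().extractPrice(savedPage));
    }
}

// Hypothetical class under test, shown only to keep the sketch self-contained.
class PriceParser {
    String extractPrice(String html) {
        java.util.regex.Matcher m = java.util.regex.Pattern.compile("\\$([0-9.]+)").matcher(html);
        return m.find() ? m.group(1) : null;
    }
}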
My recommendation would be to isolate, as much as possible, the part of the code that deals with the uncertainty. This part should be an API that works as a service used by all the other code. This way you would be protecting most of your code against changes.
The stable parts of the code should be unit-tested. With that part independent of the connection to the client's site, running the tests should be much quicker, and it would also make those tests more reliable.
The part that has to deal with changes on the clients' websites can be reduced. This way you are not solving the problem, but at least you're minimising it and centralising it in only one module of your code.
Suggesting that the clients expose the data as a web service would be best for you, but I guess that doesn't depend on you :P
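A sketch of the isolation idea above, with hypothetical names: everything that knows about the client's HTML lives behind one small interface, so when the site changes only the implementation of that interface has to change, and the rest of the code keeps its fast, reliable unit tests.

import java.math.BigDecimal;
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

interface ClientDataSource {
    BigDecimal fetchOrderTotal(String orderId);   // the rest of the code only ever sees this
}

class ScrapingClientDataSource implements ClientDataSource {
    // Injectable page loader: live HTTP in production, a saved copy of the page in tests.
    private final Function<String, String> pageLoader;

    ScrapingClientDataSource(Function<String, String> pageLoader) {
        this.pageLoader = pageLoader;
    }

    @Override
    public BigDecimal fetchOrderTotal(String orderId) {
        String html = pageLoader.apply(orderId);
        Matcher m = Pattern.compile("class=\"order-total\">\\s*\\$([0-9.]+)").matcher(html);
        if (!m.find()) {
            throw new IllegalStateException("Order total not found - the client's markup may have changed");
        }
        return new BigDecimal(m.group(1));
    }
}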
You should look at dividing your tests up, maybe into separate assemblies that can be run independently. I typically have a unit test assembly and a slower-running integration test assembly.
My unit test assembly is very fast (because the code is tested in isolation using mocks) and gets run very frequently as I develop. The integration tests are slower, and I only run them when I finish a feature, check in, or have a bad feeling that I might have broken something.
Maybe you could do something similar or even take the idea further and have 3 test suites with the third containing even slower client UI polling tests.
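If it helps to see the split concretely, here is a minimal JUnit 5 sketch using tags (the names are made up; on .NET the usual equivalent is separate assemblies or test categories): the fast suite runs on every build, while the slow client-polling suite runs only in the nightly job.

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class ParserUnitTest {
    @Test
    @Tag("fast")              // run on every check-in
    void extractsDataFromSavedPage() {
        assertTrue(true);     // parse an embedded copy of the client's page here
    }
}

class ClientSiteIntegrationTest {
    @Test
    @Tag("slow")              // run nightly, e.g. with Maven Surefire: mvn test -Dgroups=slow
    void livePageStillContainsData() {
        assertTrue(true);     // hit the real client site here
    }
}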
If you don't have a continuous integration server/process, you should look at setting one up. This would continuously build your software and execute the tests. It could be set up to monitor check-ins and work in the background, sending out a notification if anything fails. With this in place you wouldn't care how long your client UI polling tests take, because you wouldn't ever have to run them yourself.
Definitely split the tests out - separate unit tests from integration tests as a minimum.
As Martyn said, get a Continuous Integration system in place. I use TeamCity, which is excellent, easy to use, free for the first 20 builds, and you can happily run it on your own machine if you don't have a server at your disposal - http://www.jetbrains.com/teamcity/
Set up one build to run on every check-in, and make that build run your unit tests, or fast-running tests if you will.
Set up a second build to run at midnight every night (or some other convenient time), and include in this the longer running client-calling integration tests. With this in place, it won't matter how long the tests take, and you'll get a big red flag first thing in the morning if your client has broken your stuff. You can also run these manually on demand, if you suspect there might be a problem.
I'm using TestComplete keyword tests for a lot of UI test cases. Quite a lot of them have the same steps.
Is there any macro functionality which can add multiple preset actions/checkpoints easily?
Sure, you can call another keyword test using the Run Keyword Test operation or a script function using the Run Script Routine operation. Both operations allow specifying parameters for a test. Also, you can use the Run Test operation to run any item that can be treated as a separate test (keyword or script test, network suite job or task, a load test).
Moreover, I think you will find the Data-Driven Testing functionality of TestComplete useful; it allows running a test for every record in a specified data source. You can find more information on this feature in the Data-Driven Testing help topic. Videos demonstrating the data-driven approach can be found here and here.
I am running some unit tests that persist documents into a MongoDB database. For these unit tests to succeed, the MongoDB server must be started. I do this by using Process.Start("mongod.exe").
It works, but sometimes the server takes time to start, and before it is up the unit tests already try to run and FAIL, complaining that the MongoDB server is not running.
What to do in such situation?
If you use an external resource (DB, web server, FTP, backup device, server cluster) in a test, then it is an integration test rather than a unit test. It is neither convenient nor practical to start all of those external resources from the test itself. Just ensure that your test will be running in a predictable environment. There are several ways to do that:
Run the test suite from a script (BAT, NAnt, WSC) which starts MongoDB before running the tests.
Start MongoDB on a server and never shut it down.
Do not add any loops with delays in your tests to wait while the external resource starts - it makes tests slow, erratic and very complex.
Can't you run a quick test query in a loop with a delay after launching and verify the DB is up before continuing?
I guess I'd (and by that I mean, this is what I've done, but there's every chance someone has a better idea) write some kind of MongoTestHelper that can do a number of things during the various stages of your tests.
Before the test run, it checks that a test mongod instance is running and, if not, boots one up on your favourite test-mongo port. I find it's not actually that costly to just try to boot up a new mongod instance and let it fail because the port is already in use. However, this is very different on Windows, so you might want to check that the port is open or something.
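Here is a rough sketch of that "check, then boot" step (the helper name, port and dbpath are assumptions, not part of the original answer): if nothing is listening on the test port, start mongod and wait briefly until it accepts connections.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MongoTestHelper {
    private static final int TEST_PORT = 27018;        // dedicated test port, not the default 27017
    private static final String MONGOD = "mongod";     // or the full path to mongod.exe on Windows

    /** Starts a test mongod unless something is already listening on the test port. */
    public static void ensureMongodRunning() throws IOException, InterruptedException {
        if (isPortOpen(TEST_PORT)) {
            return;                                     // an instance is already up - just reuse it
        }
        Files.createDirectories(Paths.get("target/mongo-test-data"));
        new ProcessBuilder(MONGOD, "--port", String.valueOf(TEST_PORT),
                "--dbpath", "target/mongo-test-data")
                .redirectErrorStream(true)
                .start();
        // Poll until the server accepts connections (or give up after ~5 seconds).
        for (int i = 0; i < 50 && !isPortOpen(TEST_PORT); i++) {
            Thread.sleep(100);
        }
    }

    private static boolean isPortOpen(int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("localhost", port), 200);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}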
Before each individual test, you can remove all the items from all the tested collections, if this is the kind of thing you need. In fact, I just drop all the DBs, as the lovely MongoDB will recreate them for you:
// mongo here is a legacy com.mongodb.Mongo/MongoClient instance; dropping every database
// gives each test a clean slate, and MongoDB recreates the databases on demand.
for (String name : mongo.getDatabaseNames()) {
    mongo.dropDatabase(name);
}
After the tests have run you could always shut it down if you've chosen to boot up on a random port, but that seems a bit silly. Life's too short.
The TDD purists would say that if you start the external resource, then it's not a unit test. Instead, mock out the database interface, and test your classes against that. In practice this would mean changing your code to be mockable, which is arguably a good thing.
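For example, a minimal sketch of that approach using Mockito (DocumentRepository and DocumentService are hypothetical names, not from the question): the class under test talks to an interface, and the test hands it a mock, so no MongoDB process is needed at all.

import org.junit.jupiter.api.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

interface DocumentRepository {
    void save(String id, String body);
}

class DocumentService {
    private final DocumentRepository repository;

    DocumentService(DocumentRepository repository) {
        this.repository = repository;
    }

    void store(String id, String body) {
        if (body == null || body.isEmpty()) {
            throw new IllegalArgumentException("empty document");
        }
        repository.save(id, body);
    }
}

class DocumentServiceTest {
    @Test
    void persistsNonEmptyDocuments() {
        DocumentRepository repository = mock(DocumentRepository.class);
        new DocumentService(repository).store("42", "hello");
        verify(repository).save("42", "hello");    // no real MongoDB involved
    }
}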
OTOH, to write integration or acceptance tests, you should use an in-memory transient database with just your test data in it, as others have mentioned.