How do I make a Selenium WebDriver run for many hours without it crashing or running into OutOfMemory problems? - performance

I am using selenium-2.30.0 to run a single test (on Windows) which runs for many hours (~8 hrs). I was using the FF driver, but it runs out of memory after just 45 minutes or less, and the test execution just hangs. I was unable to get HTMLUnitDriver (I thought a pure Java solution was the answer) to run the same way as the FF driver, as it needs to wait for page loads and I definitely didn't want to put random thread sleeps in my code or implement any new function by extending the HTMLUnitDriver.
I cannot break the test case into multiple smaller units.
I cannot reload the driver whenever I see heavy memory utilization.
Is there any way to get this working?
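For the page-load waits mentioned in the question, explicit waits are the usual alternative to fixed thread sleeps. A minimal sketch against the Selenium 2.x Java API (the locator and timeout here are illustrative assumptions):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {
    // Blocks until the element appears (polling), instead of sleeping a fixed time.
    static WebElement waitForResults(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, 30); // timeout in seconds
        return wait.until(ExpectedConditions.presenceOfElementLocated(By.id("results")));
    }
}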

I found this link: creating-firefox-profile-for-your-selenium-rc-tests, and it was quite helpful. I created a new Firefox profile with absolutely minimal settings, and the test has been running without issues for the last 4 hours. Thanks a lot for the help, guys!
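For reference, this is roughly how a pre-created minimal profile can be loaded with the Selenium 2.x Java bindings. The profile name and the extra preferences below are assumptions, not part of the original setup:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;
import org.openqa.selenium.firefox.internal.ProfilesIni;

public class LongRunningTest {
    public static void main(String[] args) {
        // Load a profile created beforehand with "firefox -P" (hypothetical name).
        FirefoxProfile profile = new ProfilesIni().getProfile("selenium-minimal");
        // Optionally trim memory-hungry features further.
        profile.setPreference("browser.sessionhistory.max_entries", 5);
        profile.setPreference("browser.cache.memory.enable", false);

        WebDriver driver = new FirefoxDriver(profile);
        try {
            driver.get("http://example.com");
            // ... long-running test steps ...
        } finally {
            driver.quit();
        }
    }
}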

What sort of testing are you doing? Selenium is used primarily for Acceptance tests. It sounds like what you're trying to do is more like a soak test on your system.
If that's the case, take a look at JMeter; it's much more suited to this type of work. However, a rather significant difference between the two tools is that JMeter works at the protocol (HTTP request) level, as opposed to Selenium's use of the rendered HTML.

What crashes: your Java test code or Firefox itself? If it's the Java test code, are you sure you're not leaking memory? Or maybe the memory leak is on the server side?

Related

JMeter with Selenium WebDriver Sampler actually overloads the PC

I run a test in JMeter with the Selenium WebDriver Sampler
on Linux x86 and Java SDK 11.
The test runs with 50 users.
I run it from the command line in non-GUI mode and with Chrome in headless mode.
But after 5 minutes the CPU goes up to 100% and the memory is almost full (8 GB).
What can I do to improve this? I need to run the test with 200 users and up.
Thanks,
Izik
You're expecting too much from your machine.
Although there are no specific RAM requirements defined on the Chrome System Requirements page, on my machine a single Chrome instance with a single tab in incognito mode consumes almost 1 GB of RAM. And that is with the http://example.com page open, not a modern web application with tons of JavaScript.
I bet that if you run the following command on your machine, you will get at least 3 GB:
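# Sums the working-set memory (in bytes) of every running chrome.exe process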
(Get-Process chrome | Measure-Object WorkingSet -sum).sum
As per WebDriver Sampler tutorial
Note: It is NOT the intention of this project to replace the HTTP Samplers included in JMeter. Rather it is meant to compliment them by measuring the end user load time.
So my expectation is that you should conduct the load using HTTP Request samplers and forget about using real browsers (or use one instance to collect client-side performance metrics). Just follow the recommendations from the How to make JMeter behave more like a real browser article to ensure that the protocol-based JMeter test has the same network footprint as a real browser produces.
If you have to use real browsers for performance testing, you won't be able to launch 200 browser instances on a machine with 8 GB of RAM; you will have to find another 30-40 machines of this specification and go for Distributed Testing.
This is expected. Since you are using Selenium, it will use the JVM and a browser, which consume a lot of memory. I would suggest you distribute the test across multiple machines if you are going the Selenium route for load testing. This way you will be able to load test with a larger number of users.
The best would be to stick to the HTTP sampler as suggested above. You could also record the scenario and make the necessary changes.

How can I execute the same scenario or feature 10 times concurrently in multiple browsers to check the performance of a website

I have a requirement to test the performance of a webpage.
Example: I have a login page and need to run 10 concurrent executions with different users to test the performance of that page.
I have gone through the ruby-jmeter gem, but it opened only one browser, even though the JMeter log shows more than 10 sessions.
Can anyone help with this one?
Thank you
To run multiple sessions simultaneously you can use the parallel tests gem: https://github.com/grosser/parallel_tests/
Standard disclaimer: there are a lot of variables in evaluating performance and it is extremely difficult to control those variables sufficiently to get useful information on performance using Selenium or Watir.

LoadRunner 11.03 performance issue?

Recently, I received from my client a PC with LoadRunner 11.03 (perhaps patch 3) installed, and I used it to watch a web application's performance in a long-running test.
In the multiple-user test, it did not seem to generate the proper load: my web server's performance monitors never reached any limit in CPU usage, network bandwidth, disk usage per minute, or memory usage. Only the number of waiting threads looked a little bad, but it was not obvious.
It looks like sequential behavior rather than parallel access.
(No error occurred.)
So I think it is not a problem with the servers; rather, the client machine has some problem that prevents it from generating parallel access for some reason.
I don't have a proper HP Passport ID, so I can't access the LoadRunner patches website.
Please let me know whether LoadRunner patches, especially patch 4 or higher, would change this behavior.
OK, it sounds like you are just running a script in VuGen. If that is the case, I am guessing (based on what you wrote; correct me if I'm wrong) that you are running the script in the Virtual User Generator and not in the Controller. LoadRunner is actually a suite of multiple applications. The Virtual User Generator is the script development application, a development environment like Eclipse. It is single-threaded, and running a script there is meant only to test the script individually.
To run a multi-threaded test you need to use the Controller app and develop a test scenario, assign multiple virtual users (the LR term for concurrent threads) to each script you want to run and execute the test from the Controller. You can configure machines to be the Load Generators (another app set up to run as a process or service) and push out the test from the Controller to the Generators.

IIS7 Performance Issues for Web-services

We are experiencing slow processing of requests under heavy load. When looking at the currently running requests during these bursts I can see many requests to our web-service code.
The number of requests is not that large, but they appear to be stuck in a preprocessing state.
We are running an IIS7 app pool in classic mode due to the need to support some legacy code.
Other requests continue to be processed but these stuck requests gradually seem to fill up the available threads leading to slow processing of other pages.
Does anyone have any idea where these requests are getting stuck?
There appears to be no resource issue with the DB, and the request states suggest this is all preprocessing.
We have run load tests on the code involved on local machines and can not replicate the issue.
Another possible factor is we are making use of MVC and UrlRouting.
Many thanks for any help.
Some issues only happen on production servers, unfortunately, as a load test can never fully simulate real-world users.
You can try to capture hang dumps when performance is bad, and then analyze them (on your own or open a support case via http://support.microsoft.com to work with Microsoft support).
You might have hit the famous thread pool bottleneck: http://support.microsoft.com/kb/821268. Dump analysis can easily identify the culprit and help locate a solution.
Why not move them into their own AppPool to separate them from the Classic ASP app? You'll then have more options to tune.

TDD Scenario: Looking for advice

I'm currently in an environment where we are parsing data off of the client's website. I want to use my tests to ensure that when the client changes their site, I know when we are no longer receiving the information.
My first approach was to do pure integration tests where my tests hit the client's site and assert that the data was found. However, halfway through and 500 tests in, the test run had become unbearable and in some cases started timing out. So I cleared out as many tests as I could without losing the core protection they provide, and I'm down to 350 or so. I'm now afraid that adding more tests will only break the whole suite. I also find myself no longer running the 5+ minute suite (for some clients it takes longer, since this depends on the speed of communication with their site) when I make changes. I consider this a complete failure.
I've been putting a lot of thought into this and asking around the office. My plan for my next attempt is to pull down the client's pages and write tests against these embedded resources in my projects. This will give me higher test coverage and allow me to go back to testing in isolation. However, I would need to be notified when they make changes so I can re-pull the pages to test against. I don't think the clients will adhere to this.
A suggestion was made to augment this with a suite of 'random' integration tests that serve the same function as my failed tests (hitting the client's site), but in far smaller numbers than before. I really don't like the idea of random testing, where the same code can sometimes give red lights and sometimes green lights. But so far this sounds like the best idea I've heard for still knowing when the client's site has changed and my code no longer finds the data.
Has anyone found themselves testing an environment like this? Any suggestions from the testing community for me?
When you say the big test has become unbearable, it suggests that you are running this test suite manually. You shouldn't have to. It should just be running constantly in the background, at whatever speed it takes to complete the suite - and then start over again (perhaps after a delay if there are associated costs). Only when something goes wrong should you get an alert.
If there is something about your tests that causes them to get slower as their number grows - find it and fix it. Tests should be independent of one another, so simply having more of them shouldn't cause individual tests to time out.
My recommendation would be to isolate, as much as possible, the part of the code that deals with the uncertainty. This part should be an API that works as a service used by all the other code. This way you would be protecting most of your code against changes.
The stable parts of the code should be unit-tested. With that part independent of the connection to the client's site, running the tests should be much quicker, and it would also make those tests more reliable.
The part that has to deal with changes on the client's websites is thereby reduced. This way you are not solving the problem, but at least you're minimising it and centralising it in only one module of your code.
Suggesting that the clients expose the data as a web service would be best for you. But I guess that doesn't depend on you :P.
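A minimal Java sketch of the isolation idea above (all names here are hypothetical): hide the client-site access behind a small gateway interface so the parsing logic can be unit-tested against saved HTML, and only a few slow integration tests need a real HTTP-backed implementation.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

interface ClientSiteGateway {
    // The only place that touches the network; returns null if the page cannot be fetched.
    String fetchPage(String clientId);
}

class PriceExtractor {
    private static final Pattern PRICE = Pattern.compile("<span id=\"price\">(.*?)</span>");

    private final ClientSiteGateway gateway;

    PriceExtractor(ClientSiteGateway gateway) {
        this.gateway = gateway;
    }

    // Pure parsing logic: unit-testable with a stub gateway returning canned HTML.
    String extractPrice(String clientId) {
        String html = gateway.fetchPage(clientId);
        if (html == null) {
            return null;
        }
        Matcher m = PRICE.matcher(html);
        return m.find() ? m.group(1) : null;
    }
}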
You should look at dividing your tests up, maybe into separate assemblies that can be run independently. I typically have a unit tests assembly and a slower running integration tests assembly.
My unit tests assembly is very fast (because the code is tested in isolation using mocks) and gets run very frequently as I develop. The integration tests are slower and I only run them when I finish a feature / check in or if I have a bad feeling about breaking something.
Maybe you could do something similar or even take the idea further and have 3 test suites with the third containing even slower client UI polling tests.
If you don't have a continuous integration server / process, you should look at setting one up. This would continuously build your software and execute the tests. It could be set up to monitor check-ins and work in the background, sending out a notification if anything fails. With this in place you wouldn't care how long your client UI polling tests take, because you wouldn't ever have to run them yourself.
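The question sounds like a .NET codebase, but the same fast/slow split works in any framework. As a sketch of the idea in JUnit 4 terms (class and category names are made up), one build runs the fast suite on every check-in and a nightly build runs the slow one:

// (Each class below would live in its own file; shown together here for brevity.)
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public interface SlowIntegrationTest {}  // marker interface used as a JUnit category

public class ParserUnitTest {
    @Test
    public void parsesSavedHtml() {
        // fast, isolated, runs against embedded/canned pages
    }
}

public class ClientSitePollingTest {
    @Category(SlowIntegrationTest.class)
    @Test
    public void findsDataOnLiveSite() {
        // slow, hits the client's real site
    }
}

// Fast suite: run on every check-in.
@RunWith(Categories.class)
@Categories.ExcludeCategory(SlowIntegrationTest.class)
@Suite.SuiteClasses({ ParserUnitTest.class, ClientSitePollingTest.class })
public class FastSuite {}

// Slow suite: run from the nightly CI build only.
@RunWith(Categories.class)
@Categories.IncludeCategory(SlowIntegrationTest.class)
@Suite.SuiteClasses({ ParserUnitTest.class, ClientSitePollingTest.class })
public class NightlySuite {}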
Definitely split the tests out - separate unit tests from integration tests as a minimum.
As Martyn said, get a Continuous Integration system in place. I use TeamCity, which is excellent, easy to use, and free for the first 20 build configurations, and you can happily run it on your own machine if you don't have a server at your disposal - http://www.jetbrains.com/teamcity/
Set up one build to run on every check in, and make that build run your unit tests, or fast-running tests if you will.
Set up a second build to run at midnight every night (or some other convenient time), and include in this the longer running client-calling integration tests. With this in place, it won't matter how long the tests take, and you'll get a big red flag first thing in the morning if your client has broken your stuff. You can also run these manually on demand, if you suspect there might be a problem.
