Selenium run drastically slows down application performance

I run Selenium tests against a web application we are developing. The test run eats up all the memory on the server and server performance degrades drastically. Most of the time the application goes down completely or becomes extremely slow.
Two hours of automated script execution is enough to bring the server down for good (so that other profiles hosted on the server stop working).
What could be the reason behind this, and how can I overcome it?
I use Selenium RC for test execution.

I think you need to speak to the IT department, and ask them to analyse the logs to figure out what's different on this particular server. It's not really possible for anybody here to answer your question without a lot more data.
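That said, one Selenium-side cause is worth ruling out (a guess, since we can't see your setup): Selenium RC launches a new browser for each session, and sessions that are never stopped leave those browser processes running and holding memory. If the RC server and its browsers share hardware with the application, a two-hour run can starve the box. A minimal JUnit sketch of proper teardown, with host, port, browser, and URL as placeholders:

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class HomePageTest {
        private Selenium selenium;

        @Before
        public void setUp() {
            // Placeholder host, port, browser and base URL for your RC setup.
            selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                    "http://your-app-under-test/");
            selenium.start();
        }

        @Test
        public void canOpenHomePage() {
            selenium.open("/");
        }

        @After
        public void tearDown() {
            // Always close the browser, even when the test fails;
            // leaked sessions accumulate over a long run.
            selenium.stop();
        }
    }

If the browsers run on a separate client machine and the application server still dies, then the load itself is the problem, and the server logs are the place to look, as above.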

Related

How to run Selenium tests using LoadRunner

I have a scenario where I need to performance-test a web application: when multiple users log in and use the app, do the UIs render in time (fast response)? I don't want to use record & replay. Is there a way I can run my existing Selenium UI tests from LoadRunner with multiple users?
Thanks in advance
Amit
It may be possible.
The first hurdle is language. LoadRunner supports a few languages; the two I know of that also have Selenium bindings are Java and C#. If your Selenium scripts are in either of those, or can be packaged and invoked from the JVM or CLR (e.g. Python), it could be possible.
The second hurdle is hardware to support the user load. Running the browsers will take a lot of resources; I think you could feasibly run three browsers (Chrome, Firefox and IE) on a single box, which limits you to three users per agent. If you have the hardware to run enough agents to meet your user load, it could be possible. UI rendering times will be meaningless.
The LoadRunner script will be a C# or Java project. Add whatever references you need to run your Selenium tests and invoke them from there, along the lines of the sketch below.
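As a rough, hedged sketch of that structure (the URL and transaction name are placeholders, and you would add the Selenium jars to the script's classpath), a LoadRunner Java Vuser driving Selenium WebDriver might look like this:

    import lrapi.lr;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class Actions {
        private WebDriver driver;

        public int init() throws Throwable {
            driver = new FirefoxDriver(); // one real browser per virtual user
            return 0;
        }

        public int action() throws Throwable {
            lr.start_transaction("open_home_page"); // illustrative transaction name
            driver.get("http://your-app-under-test/"); // placeholder URL
            lr.end_transaction("open_home_page", lr.AUTO);
            return 0;
        }

        public int end() throws Throwable {
            driver.quit(); // release the browser between iterations
            return 0;
        }
    }

The transaction timings you get back measure the page fetch and script execution, not rendering, which is part of why I'd treat the numbers with suspicion.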
However, my gut reaction is this may be rickety. Error handling will add complexity. These alternatives may give a better outcome:
Ditch the idea of using Loadrunner and use a service like Neustar to run multiple Selenium tests. If user load is low, Selenium Grid may also be an option.
Ditch the idea of using Selenium and code the test in the LoadRunner web API instead.
There is an Eclipse add-in for LoadRunner.
You can install this add-in, import the LoadRunner references, and add transactions to your Selenium scripts.
Here is a link with a detailed description of the above procedure:
http://www.joecolantonio.com/2014/05/08/how-to-run-a-selenium-script-in-loadrunner/
But yes, as said earlier, you have to take memory usage into consideration.
No. If you wish to run GUI virtual users in a LoadRunner context then your model is through QuickTest Professional.
The act of rendering (the actual drawing of pixels on the screen after an item is downloaded and rasterized) will remain the same for a given video card regardless of load on the system, because the draw to the screen is a local, client-side activity.
I suspect you mean something different than rendering (drawing on the screen).

Is there any impact of the installed JRE on a Dojo-based application, in terms of performance?

I have an application based on Dojo, with performance issues on some workstations, but not all.
We are trying to find the reason, but since the behavior is not consistent it is really difficult to pinpoint the cause of the performance issues.
There are machines on which the code works fine most of the time, but there are also machines where the application gets stuck.
Note: we do not have access to the client machines that are reporting issues.
So one of the things we are looking at is the installed JRE on those machines. Please let me know whether it could have any impact.
Also, if you have any suggestions for where else I could look, please share.
Thanks
Nick
Unless you are running Dojo under Node.js, you need some kind of browser or other JavaScript container to run a Dojo application. If we assume this is a browser-based application, then Dojo is running inside the browser's JavaScript engine.
So it seems very unlikely that the JRE would be affecting the browser's performance, unless you are also running Java applets in your application.
You might want to look at what could be slowing down these workstations in general.

Selenium and web automation tests: how to run large-volume stress tests

I am using Selenium with the Firefox WebDriver to test my website, and it works well. The only problem is that, due to computing resource restrictions, I can only run 10 browsers simultaneously on one physical machine, which is not enough for our test suite.
The big resource bottleneck is on the Firefox side: it consumes a lot of RAM and CPU while running. I am wondering whether there is any technique to reduce the RAM and CPU usage so that I can run 100 Firefox browsers at the same time on one machine. That would boost my efficiency a lot.
Any ideas?
Selenium is not designed for performance testing, at all.
http://selenium-grid.seleniumhq.org/faq.html#would_you_recommend_using_selenium_grid_for_performanceload_testing
Selenium Grid can go some way towards helping you by running the tests in parallel, but this is not what Selenium was created for, and the bottleneck of browser performance and RAM usage will remain a problem with Selenium.
A better solution would be to use an application devoted to performance testing. I've used Redgate's solution as well as the performance testing solution integrated into Visual Studio 2010:
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
Assuming you want to test server load and are not relying on AJAX, you can use Apache JMeter to bombard the server with random requests according to parameters you specify.
Because it is a headless HTTP client that just requests some content and then throws it away, it can easily scale to 100 concurrent users on a standard desktop.
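To illustrate why a headless HTTP client scales where a full browser cannot, here is a minimal Java sketch that fires concurrent GET requests and discards the responses; the target URL and user count are placeholders, and in practice you would let JMeter manage the threads, ramp-up, and reporting:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MiniLoadTest {
        private static final String TARGET = "http://your-app-under-test/"; // placeholder
        private static final int USERS = 100;

        public static void main(String[] args) throws InterruptedException {
            Thread[] users = new Thread[USERS];
            for (int i = 0; i < USERS; i++) {
                users[i] = new Thread(() -> {
                    try {
                        HttpURLConnection conn =
                                (HttpURLConnection) new URL(TARGET).openConnection();
                        long start = System.currentTimeMillis();
                        int status = conn.getResponseCode(); // fetch, discard the body
                        long elapsed = System.currentTimeMillis() - start;
                        System.out.println("status=" + status + " time=" + elapsed + "ms");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
                users[i].start();
            }
            for (Thread user : users) {
                user.join(); // wait for all simulated users to finish
            }
        }
    }

Each simulated user here costs a thread and a socket rather than a whole browser process, which is the difference between 10 and 100 concurrent users on one box.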

Testing a wide variety of computers with a small company

I work for a small dotcom which will soon be launching a reasonably complicated Windows program. As the program has been passed around to the various non-technical types here, a number of "WTF?"-type scenarios have turned up that we've been unable to replicate.
One of the biggest problems we're facing is testing: there are a total of three programmers (only one, me, working on this particular project), no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running more-or-less fresh installs of Windows XP and Vista on my personal computer. The non-technical types try to be helpful when problems arise; we have trained them on how to report problems most effectively, and the software itself sports a wide array of diagnostic features. But since they aren't computer nerds like us, their reports are only so useful, and arranging remote-control sessions to dig into the guts of their computers is time-consuming.
I am looking for resources that would let us amplify our testing abilities without putting together an actual lab and hiring beta testers. My boss mentioned rental VPS services and asked me to look into them, but they are still largely self-service, and I was wondering whether there were better ways. How have you, or other companies in a similar situation, handled this sort of thing?
EDIT: According to the lingo, our goal here is to expand our systems testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much as we are consistently hitting walls at the systems testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2?
Tom
The first question I would ask is whether you have any automated testing being done.
By this I mainly mean unit and integration testing. If not, then I think you need to look into unit testing immediately, first as part of your build process, and second via automated runs on servers. Even with a UI-based application, it should be possible to find software that can automate the actions of a user and tell you when a test has failed.
Apart from the tests you as developers can think of, every time a user finds a bug you should create a test for that bug, reproduce the bug with the test, fix it, and then add the test to your automated suite, as in the sketch below. This way, if that bug is ever re-introduced, your automated tests will find it before the users do. Plus you gain the confidence that your application has been tested for every known issue before the user sees it, without someone having to sit there for days or weeks trying to do it manually.
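As a hedged illustration (the Invoice class and the bug are hypothetical stand-ins for your own code), such a regression test might look like this in JUnit:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class InvoiceRegressionTest {

        // Minimal stand-in for the real class under test (hypothetical).
        static class Invoice {
            private double total;
            void addLine(String description, double price) { total += price; }
            double total() { return total; }
        }

        // Regression test for a hypothetical bug report: totals came out
        // wrong when a discounted line item cost exactly zero.
        @Test
        public void zeroPricedLineItemDoesNotCorruptTotal() {
            Invoice invoice = new Invoice();
            invoice.addLine("widget", 10.00);
            invoice.addLine("free sample", 0.00); // the input that triggered the report
            assertEquals(10.00, invoice.total(), 0.001);
        }
    }

Once this runs in every build, the bug can only come back noisily.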
I believe logging application activity and error/exception details is the most useful strategy for communicating technical details about problems on the customer side. You can add a feature that automatically mails you the logs, or let the customer send them manually.
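As a sketch of that idea, assuming a Java application (the logger name and file pattern are illustrative), the standard java.util.logging API is enough to capture errors with stack traces in a file the customer can mail you:

    import java.io.IOException;
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    public class AppLog {
        private static final Logger LOG = Logger.getLogger("myapp"); // illustrative name

        static {
            try {
                // Rotate across three 1 MB files next to the application.
                FileHandler handler = new FileHandler("myapp-%g.log", 1000000, 3, true);
                handler.setFormatter(new SimpleFormatter());
                LOG.addHandler(handler);
            } catch (IOException e) {
                e.printStackTrace(); // logging must never crash the app
            }
        }

        public static void error(String message, Throwable t) {
            LOG.log(Level.SEVERE, message, t); // the stack trace goes into the log
        }
    }

The same pattern exists in whatever language your program is written in; the point is that every exception the user sees should leave a trace you can read later.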
The question is: what exactly do you mean to test? Are you only interested in error-free operation, or are you also concerned with how the software is received on the customer side (usability)?
For technical errors, write a log and manually test different scenarios on different OS installations. Adding unit tests could also help, but I suppose the issue is that it works on your machine and doesn't work somewhere else.
You could also debug remotely by using IDE features like "Attach to remote process". I'm not sure how to do that if you're not in the same office; you would likely need to set up a VPN.
If it's about usability, organize workshops. Have new people work with your application while you record them on video and audio, then analyze the problems they encountered in team "after-flight" sessions. Talk to users, ask what they didn't like, and act on it.
Theoretically, you could also build this activity logging into the application. You'll need a clear idea, though, of what to log and how to interpret the data.

Is there any time-tracking web application?

Is there any time-tracking web application? I want to use it as a tool to monitor my productivity as a programmer (I mean, how many hours I spend on a project).
Edit: I once noticed one (something like a web Twitter with time tracking), but I forgot its name.
If personal time tracking is what you're after, check out Kimai.
On the other hand if your question is about measuring the response time of your web application, there are other proxy recorders than Firebug: SST Trace Plus and Fiddler come to mind. Fiddler is open source and you can write plug-ins for it.
If you are serious about measuring response time, you want to do it at very high loads though, to find the breaking points of your system/application. There are no good free tools, and most people use HP's LoadRunner (for large functional tests) or Spirent's Avalanche (for very high loads).
Do you mean a tool to monitor your time as a developer, or a tool to monitor how quickly your program responds?
If you want to track your own time, try Basecamp (http://www.basecamphq.com/).
If you want to monitor the performance of an app, then @RichieHindle's suggestion (Firebug; answer now deleted) is a good one.
http://www.rescuetime.com/
RescueTime Solo?
