I am using Selenium and the Firefox WebDriver to test my website, and it works well. The only problem is computing resource restrictions: I can only run 10 browsers simultaneously on one physical machine, which is not enough for our testing suite.
The big resource bottleneck is on the Firefox side: it consumes a lot of RAM and CPU while running. I am wondering if there is any technique to reduce the RAM and CPU usage so that I can run 100 Firefox browsers on one machine at the same time. That would boost my efficiency a lot.
Any ideas?
Selenium is not designed for performance testing, at all.
http://selenium-grid.seleniumhq.org/faq.html#would_you_recommend_using_selenium_grid_for_performanceload_testing
Selenium Grid can only go so far in helping you, by ensuring the tests run in parallel, but this is not what Selenium was created for, and the bottleneck of browser performance and RAM usage will remain a problem with Selenium.
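For completeness, here is a minimal Java sketch of what pointing a test at a Grid hub looks like, so sessions get spread across whatever machines are registered with the hub. The hub URL and target page are placeholders, not anything from the question:

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSmokeTest {
    public static void main(String[] args) throws Exception {
        // Placeholder -- point this at your own Grid hub.
        URL hub = new URL("http://localhost:4444/wd/hub");

        // Each run asks the hub for a Firefox session; the hub hands it
        // to any registered node with a free Firefox slot.
        WebDriver driver = new RemoteWebDriver(hub, new FirefoxOptions());
        try {
            driver.get("http://example.com/");
            System.out.println("Title: " + driver.getTitle());
        } finally {
            driver.quit(); // release the node's slot back to the grid
        }
    }
}
```

This parallelizes test execution, but note that each session still costs a full browser's worth of RAM and CPU on some node, which is exactly the bottleneck described above.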
A better solution would be to use an application devoted to performance testing. I've used Redgate's solution as well as the performance testing solution integrated into Visual Studio 2010:
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
Assuming you want to test server load and are not relying on AJAX, you can use Apache JMeter to bombard the server with random requests according to parameters you specify.
Because it is a headless client that just requests some HTTP content and then throws it away, it can easily scale to 100 instances on a standard desktop.
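To make the contrast with a real browser concrete, here is a rough Java sketch of the same headless idea: a pool of worker threads fetching a page and discarding the body. It illustrates why a headless client scales; it is not JMeter's own API, and the target URL and thread count are made up:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HeadlessLoad {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/")) // placeholder target
                .build();

        // 100 "virtual users": trivial for a headless client, but far
        // beyond what 100 full Firefox instances would allow on one box.
        ExecutorService pool = Executors.newFixedThreadPool(100);
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> {
                // Fetch the page and throw the body away -- no parsing,
                // no rendering, no JavaScript execution.
                HttpResponse<Void> resp = client.send(
                        request, HttpResponse.BodyHandlers.discarding());
                System.out.println("status=" + resp.statusCode());
                return null;
            });
        }
        pool.shutdown();
    }
}
```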
I have a scenario where I need to do a performance test on a web application: when multiple users log in to the app and use it, are the UIs rendered in time (fast response)? I don't want to use record & replay. Is there a way I can run my existing Selenium UI tests from LoadRunner with multiple users?
Thanks in advance
Amit
It may be possible.
The first hurdle is language. LoadRunner supports a few languages. The two I know of that also have Selenium bindings are Java and C#. If your Selenium scripts are in either one, or can be packaged and invoked from the JVM or CLR (e.g. Python), it could be possible.
The second hurdle is hardware to support the user load. Running the browsers will take a lot of resources. I think you could feasibly run three browsers (Chrome, Firefox and IE) on a single box. That limits you to three users per agent. If you have the hardware to run the number of agents needed to meet your user load, it could be possible. UI rendering times will be meaningless, though.
The LoadRunner script will be a C# or Java project. Add whatever references you need to run your Selenium tests and invoke them from there.
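As a rough sketch of what that project might look like, assuming the Selenium tests are in Java and using LoadRunner's Java Vuser template (the transaction name, URL, and locators here are illustrative, not from the question):

```java
import lrapi.lr;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Actions {
    private WebDriver driver;

    public int init() throws Throwable {
        driver = new FirefoxDriver(); // one full browser per virtual user
        return 0;
    }

    public int action() throws Throwable {
        // Wrap the Selenium steps in a LoadRunner transaction so the
        // end-to-end time shows up in the LoadRunner results.
        lr.start_transaction("login");
        driver.get("http://myapp.example.com/login"); // placeholder URL
        driver.findElement(By.id("username")).sendKeys("user1");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("submit")).click();
        lr.end_transaction("login", lr.PASS);
        return 0;
    }

    public int end() throws Throwable {
        driver.quit();
        return 0;
    }
}
```

The "one browser per virtual user" line in init() is where the hardware limit above bites: each Vuser costs a full Firefox instance on the agent machine.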
However, my gut reaction is that this may be rickety. Error handling will add complexity. These alternatives may give a better outcome:
Ditch the idea of using Loadrunner and use a service like Neustar to run multiple Selenium tests. If user load is low, Selenium Grid may also be an option.
Ditch the idea of using Selenium and code against the LoadRunner web API.
There is an Eclipse add-in for LoadRunner.
You can install this add-in, import the LoadRunner references, and add transactions to your Selenium scripts.
Here is a link that describes the above procedure in detail:
http://www.joecolantonio.com/2014/05/08/how-to-run-a-selenium-script-in-loadrunner/
But yes, as said earlier, you have to take memory usage into consideration.
No. If you wish to run GUI virtual users in a LoadRunner context, then your model is through QuickTest Professional.
The act of rendering (the actual drawing of pixels on the screen after an item is downloaded or rasterized) will remain the same for a given video card speed, independent of the load on the system, because the actual draw to the screen is a local, client-side activity.
I suspect you mean something different than rendering (drawing on the screen).
Hi, I've been using JMeter PerfMon to get CPU and memory statistics. However, in a previous test I decided to get help from the admins and use the Windows Performance Monitor instead. I grabbed a sample of memory performance, and it seems that the PerfMon plugin tends to show higher usage than the Windows monitor:
The Windows Monitor shows a steady 50% average in the graph (and in the CSV):
But the PerfMon plugin for Jmeter shows a 70% average:
The problem is that I don't know which results to report. I even double-checked that I was targeting the right server, time range and measurement to avoid mistakes, but I don't know why each tool shows slightly different results. The pattern is the same for other servers and indicators: a higher average in JMeter.
I'd rather use JMeter, since it's easier to set up on all of the servers and collect the results, but I don't know if those results are more reliable than the ones from the Windows Performance Monitor. In other examples around the web I saw JMeter reports that go over 100%, so JMeter is probably overstating the results. Does anyone know if JMeter PerfMon is accurate enough to report instead of the Windows Performance Monitor? This could affect us if we set a baseline that gets breached in a JMeter report but not in the Windows Monitor. Maybe someone out there has compared the results of different tools.
I have created a website that allows users to search a database. It is a Perl script that searches Oracle using Perl DBI and then writes out HTML and JavaScript.
I have found many websites that will quantitatively test the initial loading of the website, but I can't help thinking the figures I have are misleading, because the test is not actually performing a search and loading any data.
Are there any tools for testing the speed and performance of the interactive operations of a site beyond its initial load?
You can look at wiring up load testing with something like WebDriver and JMeter. Lots of folks use these or similar tools for just these sorts of scenarios. They're great tools, but they require a pretty significant investment of time to get up and running.
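As a starting point, a WebDriver test can time an interactive operation directly. Here is a minimal Java sketch, assuming Selenium 4's Duration-based WebDriverWait; the URL, locators, and the "results" element ID are placeholders for your own search page:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SearchTiming {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://myapp.example.com/search"); // placeholder

            // Time the search round trip: submit the query, then wait
            // until the results element actually appears in the DOM.
            long start = System.nanoTime();
            driver.findElement(By.name("q")).sendKeys("test query");
            driver.findElement(By.name("q")).submit();
            new WebDriverWait(driver, Duration.ofSeconds(30))
                    .until(ExpectedConditions.presenceOfElementLocated(
                            By.id("results")));
            long millis = (System.nanoTime() - start) / 1_000_000;

            System.out.println("Search rendered in " + millis + " ms");
        } finally {
            driver.quit();
        }
    }
}
```

Run single instances of something like this to measure interactive response times, and use JMeter to generate the background load while you measure.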
You can also use Telerik's Test Studio which makes it easier to quickly get good performance and load tests up and running. Please note I said "easier" and not "easy." Load and performance testing of websites takes anywhere from a moderate amount of work to "OMG! This is nuts!"
Disclaimer: I'm the director of engineering for Test Studio, so I'm a bit biased about it. :)
For load testing you have to use a load-testing tool like JMeter or LoadRunner.
JMeter is an open-source tool and LoadRunner is a paid tool, but both are used to find the load a website can handle. There are other tools available on the market for finding the load of a website as well, including one that is free for one month.
Either way, you have to use a dedicated tool to find the load of a website.
I run Selenium tests for a web application we are developing. They eat up all the memory on the server, and server performance degrades sharply. Most of the time the application goes completely down or becomes very, very slow.
A two-hour automation script run is enough to permanently halt the server (so that other profiles hosted on the server stop working).
What could be the possible reason behind this, and how do I overcome it?
I use Selenium RC for test execution.
I think you need to speak to the IT department, and ask them to analyse the logs to figure out what's different on this particular server. It's not really possible for anybody here to answer your question without a lot more data.
I've been looking at the ways people test their apps in order to decide where to add caching or apply some extra engineering effort, and so far httperf and a simple sesslog have been quite helpful.
What tools and tricks did you apply on your projects?
I use httperf for a high level view of performance.
Rails has a performance script built in, that uses the ruby-prof gem to analyse calls deep within the Rails stack. There is an awesome Railscast on Request Profiling using this technique.
NewRelic have some seriously cool analysis tools that give near real-time data.
They just made a "Lite" version available for free.
I use jmeter for session-based testing - it allows very fine-grained control over the pages you want to hit, the parameters to inject, the loops to go through, etc. It's great for simulating how many real users your site can handle, rather than just performance testing a set of static URLs. You can distribute tests over multiple machines quite easily by loading up the jmeter-server on computers with publicly accessible IPs. I have found some limitations in the number of users/threads any one machine can throw at a server at once (it depends on the test), but jmeter has helped my team improve our app's capacity for users by 6x.
It doesn't have any fancy graphing - I actually use my own in-house graphing with gruff, which can do performance analysis on request times for certain pages and actions.
I'm evaluating a new open-source web page instrumentation and measurement suite called Jiffy. It's not specifically for Ruby; it works for all kinds of web apps.
There's also a Jiffy Firebug Extension for rendering the metrics inside the browser.
I also suggest you look at Browser Mob for load testing.
A colleague of mine has also posted some interesting thoughts on this.