For now I run my tests on three browsers (IE9, Firefox, Chrome) and I did some research into how long they take.
My conclusion is that, roughly, a test needs ~5 minutes in Firefox, ~4 in Chrome and ~12 in IE.
Some tests need more and some less, but 32-bit IE always needs more than double the time of the other browsers.
I know it's normal for IE to be the slowest, but do you think such a big difference is normal?
I use TestNG + Selenium Grid on a remote 64-bit Windows 7 machine.
There isn't a clear question here :( but...
In general you may be able to improve your speed in IE by avoiding XPath locators: use lookups by id or CSS selector in their place. IE has no native XPath engine, so Selenium has to evaluate XPath expressions in JavaScript, which is slow.
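As a sketch of that substitution (the element names are hypothetical, and the locator strings follow the Selenium RC convention the questioner is using):

```java
public class Locators {
    // XPath locator: slow in IE
    static final String BY_XPATH = "xpath=//input[@name='q']";
    // Equivalent id / CSS locators use the browser's native lookup paths
    // and are typically much faster in IE
    static final String BY_ID  = "id=searchBox";
    static final String BY_CSS = "css=input[name='q']";

    public static void main(String[] args) {
        // With a live Selenium RC client you would write, e.g.:
        //   selenium.click(BY_ID);   // instead of selenium.click(BY_XPATH)
        System.out.println(BY_ID);
    }
}
```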
I am working on a proof of concept and I need to measure the rendering time of a simple website (just a HTML document and one CSS file) 1000 times in a browser. Is there a simple and straightforward tool for this?
I know there are some highly capable tools with an enormous learning curve, but I don't have a whole week to tinker with one. I don't need anything else, just the rendering time, exactly as Chrome's Performance tool displays it in milliseconds, so I can then calculate an average.
If someone could tell me how to find the total rendering time of the page in the (quite enormous) JSON output of the Performance tool, I'd be happy with that. I can have a macro recorder clicking the Refresh button all night. Though I guess there's a way to get it done from the command prompt - any advice is appreciated on that too!
The 'Audits' tab in Chrome's dev tools allows you to run a lighthouse performance audit, which will provide you some key metrics as defined by Google (such as time to interactive): https://developers.google.com/web/tools/lighthouse/.
You can run it from the command line too, which should make it somewhat straightforward to repeat it as needed in your scenario and perhaps even integrate it as a test: https://developers.google.com/web/tools/lighthouse/#cli
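Whatever tool takes the measurement, the 1000-run loop and the averaging are the easy part. A sketch in Java, where `measureOnceMs()` is a hypothetical stand-in for one run (e.g. invoking the Lighthouse CLI and pulling one metric out of its JSON report):

```java
import java.util.ArrayList;
import java.util.List;

public class RenderTimings {
    // Hypothetical stand-in for a single measurement run; a fixed value
    // is returned here so the sketch stays self-contained.
    static double measureOnceMs() {
        return 800.0;
    }

    // Mean of the collected per-run timings, in milliseconds.
    static double averageMs(List<Double> timesMs) {
        double sum = 0;
        for (double t : timesMs) sum += t;
        return sum / timesMs.size();
    }

    public static void main(String[] args) {
        List<Double> runs = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            runs.add(measureOnceMs());
        }
        System.out.println("average: " + averageMs(runs) + " ms");
    }
}
```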
I'm writing a Chrome extension and I want to measure how it affects performance, specifically currently I'm interested in how it affects page load times.
I picked a certain page I want to test, recorded it with Fiddler and I use this recording as the AutoResponder in Fiddler. This allows me to measure load times without networking traffic delays.
Using this technique I found out that my extension adds ~1200ms to the load time. Now I'm trying to figure out what causes the delay and I'm having trouble understanding the DevTools Performance results.
First of all, it seems there's a discrepancy in the reported load time:
On one hand, the summary shows a range of ~13s, but on the other hand, the load event arrived after ~10s (which I also corroborated using performance.timing.loadEventEnd - performance.timing.navigationStart):
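The ~10s figure is plain subtraction of two Navigation Timing timestamps. Spelled out with made-up example values (the real ones come from `performance.timing` in the page):

```java
public class NavigationTiming {
    // Total page load time as the Navigation Timing API defines it:
    // loadEventEnd minus navigationStart.
    static long loadTimeMs(long navigationStart, long loadEventEnd) {
        return loadEventEnd - navigationStart;
    }

    public static void main(String[] args) {
        // Hypothetical epoch-millisecond values, as performance.timing reports them
        long navigationStart = 1_600_000_000_000L; // navigation began
        long loadEventEnd    = 1_600_000_010_000L; // load event finished
        System.out.println(loadTimeMs(navigationStart, loadEventEnd) + " ms");
    }
}
```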
The second thing I don't quite understand is how the numbers add up (or rather, don't add up). For example, here's a grouping of different categories during load:
Neither of these columns sums to 10s, nor to 13s.
When I group by domain I can get different rows for the extension and for the rest of the stuff:
But it seems that the extension only adds 250ms which is much lower than the exhibited difference in load times.
I assume that these numbers represent just CPU time, and do not include any wait time. Is this correct? If so, it's OK that the numbers don't add up and it's possible that the extension doesn't spend all its time doing CPU bound work.
Then there's also the mysterious [Chrome extensions overhead], which doesn't explain the difference in load times either. Judging by the fact that it's a separate line from my extension, I thought they are mutually exclusive, but if I dive deeper into the specifics, I find my extension's functions under the [Chrome extensions overhead] subdomain:
So to summarize, this is what I want to be able to do:
Calculate the total CPU time my extension uses - it seems it's not enough to look under the extension's name, and its functions might also appear in other groups.
Understand whether the delay in load time is caused by CPU processing or by synchronous waiting. If it's the latter, find where my extension is doing a synchronous wait, because I'm pretty sure that I didn't call any blocking APIs.
Update
Eventually I found out that the reason for the slowdown was that we also activated Chrome accessibility whenever our extension was running and that's what caused the drastic slowdown. Without accessibility the extension had a very minor effect. I still wonder though, how I could see in the profiler that my problem was the accessibility. It could have saved me a ton of time... I will try to look at it again later.
I am using Selenium RC for automated test scripts. I use
selenium.waitForPageToLoad(DEFAULT_TIMEOUT);
but it is not stable and 50% of the time my tests fail because the next element after the wait is not found. For example:
selenium.open("some_url");
selenium.waitForPageToLoad(DEFAULT_TIMEOUT);
selenium.click("id=first");
DEFAULT_TIMEOUT is set to 50000.
Could someone explain how waitForPageToLoad works? What alternative could I use to increase test stability?
Thanks
Usually you hit this kind of problem with dynamic content (Ajax calls, updates, etc.): the page itself has loaded, but some part of it is still on its way from the server.
The best way (what I always do) is to check for the element's presence first:

if (selenium.isElementPresent("id=first"))
    selenium.click("id=first");
This approach should help.
Or you may use waitForElementPresent(). If it is not available in your client, roll your own as:

while (!selenium.isElementPresent("id=first"))
    Thread.sleep(1000);

(In practice you should also bound this loop with a timeout so it cannot spin forever.)
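That hand-rolled loop generalizes into a reusable polling helper; with a real Selenium client the condition you pass in would be something like `() -> selenium.isElementPresent("id=first")` (locator hypothetical). A sketch:

```java
import java.util.function.Supplier;

public class Wait {
    // Poll `condition` every `intervalMs` until it becomes true or
    // `timeoutMs` elapses. Returns true if the condition was met,
    // false on timeout or interruption.
    static boolean until(Supplier<Boolean> condition, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.get()) {
            if (System.currentTimeMillis() > deadline) {
                return false; // timed out; let the caller decide how to fail
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Simulate an element that "appears" on the third poll
        int[] calls = {0};
        boolean found = until(() -> ++calls[0] >= 3, 1000, 10);
        System.out.println(found);
    }
}
```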
Right now a large application I'm working on downloads all small images separately and usually on demand. About 1000 images ranging from 20 bytes to 40kbytes. I'm trying to figure out if there will be any client performance improvements by using a ClientBundle for the smaller most used ones.
I'm putting the 'many connections, high latency' issue aside for now and just concentrating on JavaScript/CSS/browser performance.
Some of the images are used directly within CSS. Are there any performance improvements by "spriting" them vs using as usual?
Some images are created as new Image(url). Is it better to leave them this way, move them into CSS and apply styles dynamically, or load them from a ClientBundle?
Some actions result in a setUrl() on an image. I've seen that the same code can be done with a ClientBundle, which will probably set a data URI for that image. Will doing so improve performance, or is it faster as it is?
I'm specifically talking about runtime more than startup time, since this is an application which sees long usage times and all images will probably be cached in the first 10 minutes, so round-trip is not an issue (for now).
The short answer is: not really (for FF, Chrome, Safari, Opera), BUT sometimes for IE (<9)!
Let's look at what ClientBundle does.
ClientBundle packages every image into one ...bundle... so that you need only one HTTP connection to get all of them, and it requires only one freshness lookup the next time you load your application (rather than n lookups, n being the number of your tiny images, which is really wasteful).
So it's clear that ClientBundle greatly improves your app's load time.
Runtime Performance
There may be times when one particular image fails to download or gets lost on the way. If you make 1000 connections, the probability of something going wrong increases (however little). FF, Chrome, Safari and Opera simply show the image-not-found icon and move on. IE <9, however, will keep trying to fetch those particular images, tying up one of the two connections it is allowed. That really hurts performance in IE.
Other than that, there will be some performance improvement if you keep loading new widgets asynchronously and they end up downloading images at a later stage.
Jai
I have around 300 WatiN tests and I run them in IE using the Gallio test runner. These tests take around three and a half hours to run completely. I was wondering if others here see the same kind of performance with WatiN, or whether I'm doing something terribly wrong. In this regard I would like to know if:
You are using any specific browser/test runner that makes WatiN tests run faster
You are following any specific design pattern that enables running WatiN tests in parallel
You are following any design pattern that allows you to run multiple tests in the same browser instance, so that you do not have to close and reopen the browser after every test
I don't know about running them in parallel, but you can certainly re-use the same browser instance, you just need a static reference to it. I'm using MSpec so the code is a bit different, but if you just have a static class containing the browser reference or similar, that should sort it.
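The shared-instance idea can be sketched as a lazily initialized static holder; `Browser` here is a hypothetical stand-in for the real browser type (WatiN's IE, a Selenium session, etc.):

```java
public class SharedBrowser {
    // Hypothetical stand-in for the real browser automation object
    static class Browser {
        void open(String url) { /* navigate to url */ }
    }

    private static Browser instance;

    // Create one browser on first use and hand the same instance
    // to every test afterwards, instead of opening a fresh browser per test.
    static synchronized Browser get() {
        if (instance == null) {
            instance = new Browser();
        }
        return instance;
    }

    public static void main(String[] args) {
        // Every test fixture asks for the shared instance rather than `new Browser()`
        System.out.println(SharedBrowser.get() == SharedBrowser.get());
    }
}
```

You would still want a single teardown at the end of the whole run to close the browser, rather than per-test cleanup.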
The author of WatiN also wrote a blog post about this, but his method is much more complex than anything I've had to do:
http://watinandmore.blogspot.com/2009/03/reusing-ie-instance-in-vs-test.html
Another thing to check is that you're not 'typing' text unless you need to. For example, this:
browser.TextField(Find.ByName("q")).TypeText("WatiN");
Takes much longer than this:
browser.TextField(Find.ByName("q")).Value = "WatiN";
because the first line types each character individually, firing the corresponding key events. You may need that to exercise your JavaScript handlers, but often you don't.