QUnit asynchronicity and browser unresponsiveness

I'm wondering how it is possible that a QUnit test suite with lots of test cases can ever make the browser window become unresponsive. Shouldn't the asynchronous execution of the tests ensure that the browser is frequently given room to breathe?
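For illustration (a sketch, not from the original thread, assuming the standard QUnit.test/assert API): QUnit can yield to the browser between tests, but each test body runs synchronously on the UI thread, so a single expensive test freezes the tab until it returns.

QUnit.test("cheap test", function (assert) {
  // Returns immediately; QUnit gets a chance to yield before the next test.
  assert.ok(true, "fast tests leave the browser responsive");
});

QUnit.test("expensive test", function (assert) {
  // A long synchronous loop: control never returns to the event loop,
  // so the tab is frozen for the entire duration of this test.
  var sum = 0;
  for (var i = 0; i < 1e9; i++) {
    sum += i;
  }
  assert.ok(sum > 0, "this assertion only runs after the UI was blocked");
});

In other words, responsiveness depends on how often control returns to the event loop, not on how many tests the suite contains.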

Related

Long-running tests

We get a software package (involving web pages) from a standard provider. I'm not sure I can mention the name, but it is a big one and it keeps track of tickets. Anyway, we do some customization on what we receive. By "we" I mean the company, not me or my group. My group can only QA the pages with Chrome, Java, Eclipse (with its debugger) and Selenium.
Anyway, these are big tests (originally written by others who are no longer with the company). They may run 1.5 or 2 hours. Every so often we get a big software update with a lot of changes (xpaths change, //a changes to //button, IDs change, or lots of other things).
So I can be running a test, it may run for 70 minutes, be about halfway done, and then choke on a changed xpath. To debug, I have to put a breakpoint there, run the test from the beginning, and wait 70 minutes. Then if I find a fix, I have to make the fix, terminate the program, run it again for 70 minutes, hope it works, and then wait for the next error, which will take even longer to recreate.
There must be others in this situation who have suggestions on how to debug long-running tests that take a long time before they fail.
I do see that the Eclipse debugger can break on exceptions, caught or uncaught. But since this is TestNG, aren't all exceptions eventually caught (even by TestNG)? There are a lot of caught exceptions: sometimes it is OK if an element does not go stale, so the exception is caught and ignored; sometimes it is OK if an xpath is not visible. So I don't want to break on caught exceptions.
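To make the pattern concrete, here is a sketch of such a deliberately swallowed exception (not from the original post, and shown with the JavaScript WebDriver bindings rather than the asker's Java/TestNG stack): "break on uncaught exceptions" never fires on it, while "break on caught exceptions" fires on every harmless occurrence.

const { error } = require('selenium-webdriver');

// Click an element, tolerating staleness. The catch below is exactly the
// kind of intentional, harmless catch that makes "break on caught
// exceptions" too noisy to be useful.
async function clickTolerantly(driver, locator) {
  try {
    await driver.findElement(locator).click();
  } catch (e) {
    if (!(e instanceof error.StaleElementReferenceError)) {
      throw e; // anything unexpected should still fail the test
    }
    // a stale element here is OK by design: swallow it and move on
  }
}

One middle ground in Eclipse is a Java exception breakpoint scoped to a specific exception class (set to suspend on caught or uncaught occurrences as needed), so the routinely swallowed exception types don't trigger it.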
Has anyone else been in this situation and can offer some suggestions?

Firefox page sometimes loads forever, how to determine cause?

Pages on my website sometimes load indefinitely in Firefox (showing the circular blue loading animation in the tab, and "Waiting for example.com..." in the status area at the bottom). Usually they finish loading very quickly.
Since I can't predict when it will have one of these episodes, and Firefox requires you to refresh the page in order to use the Network tool, how can I determine the cause of this on the rare occasions I see it?
I'm not sure if it is because JavaScript is running or a request for another file hasn't been answered.
This may happen in other browsers, but I generally only use Firefox unless the QA person I work with tells me there is a browser specific issue.
Generally: use the HttpFox add-on, the Firefox developer tools, and Firebug. Understand your server software and make sure errors are logged. Read the log.
If you can repro the error, then you should be able to attach a debugger to your server and find the section of code that is processing forever. Maybe there's an infinite loop. Maybe a lock has blocked execution. Maybe a ridiculous query was made to the database.
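If the backend allows it, a cheap way to catch the episode when it happens is to log any request that is still pending after some threshold. The question never names the server stack, so the Node/Express middleware below is purely an assumed example of the idea:

const express = require('express');
const app = express();

// Log any request that has not finished within 10 seconds, so the rare
// "loads forever" episode leaves evidence in the server log.
app.use(function (req, res, next) {
  const timer = setTimeout(function () {
    console.error('still pending after 10s:', req.method, req.originalUrl);
  }, 10000);
  res.on('finish', function () { clearTimeout(timer); });
  next();
});

app.get('/', function (req, res) { res.send('ok'); });
app.listen(3000);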
Otherwise, unless this unpredictable error is causing you to lose money or endanger people, ignore it.

AutoIt testing of Eclipse RCP

I'm struggling with AutoIt regression tests. AutoIt was chosen because the requirement is 100% black-box GUI testing (no intervention in the tested project). Nevertheless, this solution has some issues I need help with.
Changing tabs in the application - Because the GUI class isn't SysTabControl32 but SWT_Window, the example code from the GUITab.au3 UDF doesn't work. The current solution is to focus the component and then navigate through the tabs with the arrow keys. This has a bad effect on test performance (and I don't even want to imagine the future possibility of a tab being disabled).
Timeouts - When tabs are changing, tests have to wait before they can proceed. The shorter the delay, the higher the probability of test failure (the app wasn't ready for the test to continue). This leads to long delays before actions.
Instance numbers - Identifying instances of controls is a problem. When I wrote a test, the OK button had instance number 9. When some buttons were later added to the form, I had to rewrite the test because the OK button's instance number changed.
These three are most important.
Changing the testing technology would be hard because of the large number of tests already written, but I would like to write new tests in a better way. Sikuli has problems reading text from the screen, and SWTBot introduces dependencies into the tested projects.
Our tests run for 20 hours, and when the GUI layout changes I need to edit almost every test (the instance-number problem). Can anybody suggest a solution or workaround for ultra-reliable black-box testing?

VS2010 Coded UI Test vs Web Performance Tests

I know this question looks a lot like this one, but I don't have enough rep points to comment seeking further clarification: VS2010 Coded UI Tests vs. Web Performance test (Whats the difference??)
Tom E. gives a great explanation, but I'm still a bit vague on one point. I see why Coded UI tests cannot be replaced by Web Performance tests (the extra resources needed for a browser interface) but why can Web Performance tests not replace Coded UI tests?
If you record a webperf test and generate the code from that, couldn't you add validation and extraction rules (inspecting the DOM) to achieve the same result as a Coded UI test without the overhead of the browser?
I realize that this wouldn't be exactly the same as testing in different browsers, but is there a reason this wouldn't at least test whether you're receiving the appropriate response from the server?
Thanks!
Dave, good point. I think you would see the difference fairly quickly if you tried to build an automated functional test suite (think 500 tests or more) with VS web performance tests, having to parse the DOM to query and interact with the application. You would essentially be writing your own Coded UI test playback mechanism. You could do it without the Coded UI test functionality, but it would be quite painful. The level of pain would depend on how many test cases you need to automate, how many screens there are in your app, and how complex the interactions are.

Best practices in testing a website

In our QA team, we run a suite of automated tests on every commit the developers make. Since there are quite a few such commits daily and no developer wants to wait more than a few minutes for feedback, we're limited to 5 minutes of testing. In those 5 minutes we want to run as many tests as possible.
We've found that Selenium tests are the best fit for our needs, mostly because they're reliable: if a Selenium test reports a JS error, you're 95% sure it's a real error. (This is an extremely important property, as we've learned from our experience with HtmlUnit.) However, running Selenium tests is slow and heavy. (We maintain a small CPU farm so we can run many Selenium servers and many scripts in parallel.)
Recently we've proposed a new approach: use Selenium only for the places where you REALLY need it (popups, Ajax, JS in general, ...). Elsewhere, use a "textual browser". For example, if you want to check that the following link "works":
<a href='/somewhere'> link </a>
You don't really need Selenium for that. You can perform a GET request for the page and then check it with a regex, or parse it and query it with XPath. Bottom line: you don't need a JS engine. Clearly this is a much lighter and faster test.
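As a concrete sketch of that textual check (an illustration, not from the original post; it assumes Node.js with the cheerio HTML parser, though any parser or a regex would do): fetch the page, locate the link, and verify that its target answers with HTTP 200 - no JS engine involved.

const http = require('http');
const cheerio = require('cheerio');

// Fetch a URL and collect the status code and body.
function get(url) {
  return new Promise(function (resolve, reject) {
    http.get(url, function (res) {
      let body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () { resolve({ status: res.statusCode, body: body }); });
    }).on('error', reject);
  });
}

// Verify the link "works": it exists in the page and its target responds.
async function checkLink(pageUrl) {
  const page = await get(pageUrl);
  const href = cheerio.load(page.body)("a[href='/somewhere']").attr('href');
  if (!href) throw new Error('link not found in the page');
  const target = await get(new URL(href, pageUrl).href);
  if (target.status !== 200) {
    throw new Error('link ' + href + ' answered with ' + target.status);
  }
}

// Hypothetical URL, for illustration only.
checkLink('http://localhost:8080/').catch(function (e) { console.error(e.message); });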
We've had much success with this approach. Then we ran into the following links:
<a href='/somewhere-1' onclick="foo()" > link 1 </a>
<a href='/somewhere-2' onclick="foo()" > link 2 </a>
... many more such links ...
So in this case, you don't really have to run a Selenium script that presses each and every link. Just click one of them using Selenium (so you test the functionality of the JS function foo()) and then use the textual browser to verify the hrefs of the other links.
My question is: where do you think we should draw the line? I'd also be happy to hear your opinions - are there "textual browser" tools out there? (We haven't worked with WebDriver.)
It sounds to me like the line between what you are doing and what I would expect your developers to be doing is a little blurred.
In my mind, the developers should be writing unit tests for their foo() function. TDD in JavaScript is a bit low on tool support, but it is something that should happen.
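For instance, a minimal QUnit sketch of such a unit test. foo()'s body never appears in the thread, so the observable effect asserted here (a window.fooRan flag) is purely hypothetical:

QUnit.test('foo() does its job without a real click', function (assert) {
  // Hypothetical: assume foo() records that it ran by setting a flag.
  // Any observable effect of the real foo() would serve the same purpose.
  window.fooRan = false;
  foo(); // exercise the handler directly - no browser automation needed
  assert.ok(window.fooRan, 'foo() performed its side effect');
});

Once foo() is covered at this level, the single Selenium click described above only has to prove the wiring, not the logic.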
If the functions are being unit tested, then Selenium becomes the place to test against the user requirements rather than at the unit-of-code level.
That said, interactions between QA and dev teams are socially complicated, and it may be hard to get agreement to follow this approach.
