How to increase test stability with Selenium RC - selenium-rc

I am using Selenium RC for automated test scripts. I use
selenium.waitForPageToLoad(DEFAULT_TIMEOUT);
but it is not stable and 50% of the time my tests fail because the next element after the wait is not found. For example:
selenium.open("some_url");
selenium.waitForPageToLoad(DEFAULT_TIMEOUT);
selenium.click("id=first");
DEFAULT_TIMEOUT is set to 50000.
Could someone explain how waitForPageToLoad works? What alternative could I use to increase test stability?
Thanks

This problem usually occurs with dynamic content (Ajax calls, updates, etc.): the page itself has loaded, but some part of it has yet to be received from the server.
The best way (and the one I always use) is to check for the element's presence before interacting with it:
if (selenium.isElementPresent("id=first"))
    selenium.click("id=first");
This approach should help.
Or you may use waitForElementPresent(). If it is not available in your client driver, develop your own, for example:
while (!selenium.isElementPresent("id=first"))
    Thread.sleep(1000);
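A slightly more robust variant (a sketch assuming the Java client driver; WaitHelper and the 500 ms poll interval are my own choices, not part of the Selenium RC API) polls for the element but gives up after a timeout instead of looping forever:

import com.thoughtworks.selenium.Selenium;

// Sketch of a polling wait with a timeout; waitForElementPresent here is our
// own helper, not a built-in Selenium RC command.
public class WaitHelper {
    public static void waitForElementPresent(Selenium selenium, String locator,
                                             long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!selenium.isElementPresent(locator)) {
            if (System.currentTimeMillis() > deadline) {
                throw new RuntimeException("Timed out waiting for element: " + locator);
            }
            Thread.sleep(500); // poll every half second
        }
    }
}

// Usage after the page load:
//   selenium.open("some_url");
//   selenium.waitForPageToLoad(DEFAULT_TIMEOUT);
//   WaitHelper.waitForElementPresent(selenium, "id=first", 50000);
//   selenium.click("id=first");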

Related

webdriver test execution speed

For now I run my tests on three browsers (IE9, Firefox, Chrome) and just did some research on the time needed to run them.
My conclusion is that, more or less, a test run needs about 5 minutes in Firefox, about 4 in Chrome, and about 12 in IE.
Some tests need more and some less, but 32-bit IE always needs more than double the time of the other browsers.
I know it's normal for IE to be the slowest one, but do you think such a big difference is normal?
I use TestNG + Selenium Grid on a remote Windows 7 64-bit machine.
There isn't a clear question here :( but...
In general you may be able to improve your speed in IE by avoiding XPath locators - use lookups by id or CSS selectors in their place, as in the sketch below.
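For illustration (a sketch using the WebDriver Java API; the URL and element id are made up), the same element can be located by XPath, by id, or by CSS selector. The id and CSS lookups are usually far cheaper in IE, which has no native XPath engine and falls back to a JavaScript implementation:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class LocatorSpeedExample {
    public static void main(String[] args) {
        WebDriver driver = new InternetExplorerDriver();
        driver.get("http://example.com/login"); // made-up URL

        // Slow in IE: the XPath query is evaluated in injected JavaScript.
        driver.findElement(By.xpath("//input[@id='username']"));

        // Usually much faster: native lookup by id or CSS selector.
        driver.findElement(By.id("username"));
        driver.findElement(By.cssSelector("input#username"));

        driver.quit();
    }
}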

What are your top 3 XPages performance tips for new XPages developers?

What 3 things would you tell developers new to XPages to do to help maximize the performance of their XPages apps?
Tim Tripcony has given a bunch of suggestions here:
http://www-10.lotus.com/ldd/xpagesforum.nsf/topicThread.xsp?action=openDocument&documentId=365493C31B0352E3852578FD001435D2#AEFBCF8B111E149B852578FD001E617B
Not sure if this tip is for beginners, but use any of the LifeCyclePhaseListeners from the OpenNTF Snippets to see what is going on in your datasources during a complete or partial refresh (http://openntf.org/XSnippets.nsf/snippet.xsp?id=a-simple-lifecyclelistener-).
Use the Extension Library. Report bugs (or what you consider a bug) at OpenNTF.
Use the SampleDb from the ExtLib. You can easily modify the samples to your own needs. It is also good for testing whether an issue you encounter is reproducible in this DB.
Use Firebug (or a similar tool that comes with the browser of your choice). If you see an error in the error tab, go and fix it.
Since you're asking for only 3, here are the tips I feel make the biggest difference:
Determine what your users / customers mean by "performance", and set the page persistence option accordingly. If they mean scalability (max concurrent users), keep pages on disk. If they mean speed, keep pages in memory. If they want an ideal mixture of speed and scalability, keep the current page in memory. This latter option really should be the server default (set in the server's xsp.properties file), overridden only as needed per application.
Set value bindings to compute on page load (denoted by a $ in the source XML) wherever possible instead of compute dynamically (denoted by a #). $ bindings are only evaluated once, # bindings are recalculated over and over again, so changing computations that only need to be loaded once per page to $ bindings speed up both initial page load and any events fired against the page once loaded.
Minimize the use of SSJS. Wherever possible, use standard EL instead (e.g. ${database.title} instead of ${javascript:return database.getTitle();}). Every SSJS expression must be parsed into an abstract syntax tree to be evaluated, which is incrementally slower than the standard EL resolver.
There are many other ways to maximize performance, of course, but in my opinion these are the easiest ways to gain noticeable improvement.
1. Use a script library instead of writing the bulk of your code directly in the XPage.
2. Use a theme or a separate CSS class for each kind of element instead of repeating inline styles.
3. Keep your SSJS code under control, because unnecessary server-side requests only reduce performance.
4. As a sub-point of 3, use the direct functions SSJS offers; don't use while and for loops for things like document collections, counts, and similar operations.
The basics like
Use the immediate flag (or one of the other flags) on server-side events if possible.
Check the option (I forget its name) that generates the CSS and JS as one big file at runtime, thereby minimizing the number of requests.
Choose your scopes wisely. Don't put everything in sessionScope; define when, where, and how you are using the data, and use the correct scope based on that. This can lead to better memory usage.
And of course the most important one: read the Mastering XPages book.
Other tips I would add:
When retrieving data, use ViewEntryCollections or the ViewNavigator (see the sketch after this list).
Upgrade to 8.5.3
Use plain HTML tags where possible. If you don't need the functionality of an xp:div or xp:panel, use a <div> instead so you don't generate an extra UIComponent on the component tree.
Define which page persistence mode you need.
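As a rough sketch of the first tip above (assuming the Domino Java API; the view name "ByStatus" is made up), iterating a ViewEntryCollection and reading column values avoids opening each backend document:

import lotus.domino.Database;
import lotus.domino.NotesException;
import lotus.domino.View;
import lotus.domino.ViewEntry;
import lotus.domino.ViewEntryCollection;

public class ViewReader {
    // "ByStatus" is a made-up view name; pass in the current database.
    public static void printStatuses(Database database) throws NotesException {
        View view = database.getView("ByStatus");
        ViewEntryCollection entries = view.getAllEntries();
        ViewEntry entry = entries.getFirstEntry();
        while (entry != null) {
            // Column values come from the view index, so the backend document
            // is never opened - that is what makes this faster than a
            // getFirstDocument()/getNextDocument() loop.
            System.out.println(entry.getColumnValues());
            ViewEntry next = entries.getNextEntry(entry);
            entry.recycle(); // free the backend handle as you go
            entry = next;
        }
    }
}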
It depends a lot on what you mean by performance. For performance of the app:
Use compute on page load wherever feasible. It significantly improves performance.
In larger XPages particularly, combine code into single controls where possible. E.g. Use a single Computed Field control combining literal strings, EL and SSJS rather than one control for each language. On that point, EL performs better than SSJS, and SSJS on the XPage performs better than SSJS in a Script Library.
Use dataContexts for properties that are calculated more than once on an XPage.
Partial Execution mode is a very strong recommendation, but probably beyond new XPages developers at this point. Java will also perform better than SSJS in a Script Library, but again beyond new developers. XPages controls you've created with the Extensibility Framework should perform better, because they should run fewer lines of Java than multiple controls, but I haven't tested that.
If you mean performance of the developer:
Get the Extension Library.
Use themes to set default properties, e.g. a standard style for all your pagers.
Use Firebug. If you're developing for the Notes Client or IE, still use Firebug. You'll spend longer suffering through the Client/IE than you will fixing the few quirks that remain.

"Watin + Gallio + IE" performs badly?

I have around 300 WatiN tests and I run them in IE using the Gallio test runner. These tests take around three and a half hours to run completely. I was wondering if everyone here sees the same kind of performance with WatiN or if I'm doing something terribly wrong. In this regard I would like to know if:
You are using any specific browser/test runner that makes WatiN tests run faster
You are following any specific design pattern that enables running WatiN tests in parallel
You are following any design pattern that allows me to run multiple tests in the same browser instance so that I do not have to close and open the browser after every test
I don't know about running them in parallel, but you can certainly re-use the same browser instance; you just need a static reference to it. I'm using MSpec so the code is a bit different, but if you just have a static class containing the browser reference or similar, that should sort it.
The author also wrote a blog about it, but this method is much more complex than anything I've had to do:
http://watinandmore.blogspot.com/2009/03/reusing-ie-instance-in-vs-test.html
Another thing to check is that you're not 'typing' text unless you need to. For example, this:
browser.TextField(Find.ByName("q")).TypeText("WatiN");
Takes much longer than this:
browser.TextField(Find.ByName("q")).Value = "WatiN";
This is because the first line types each character individually. You may need that to exercise your JavaScript, but often you don't.

In test driven development, do you write every possible test first, then the code?

In doing test driven development I have been in the habit of writing the first unit test for a new piece of functionality first, then writing the code for that functionality. If I have additional tests to write to cover all scenarios, I usually write them after the code is written. Is this considered bad form? Should I try and write every conceivable test for a piece of functionality first, before ever writing that code?
In order to do TDD properly, you always write the test first, and then the functionality second.
To add to that, I would take one scenario at a time; don't write 20 tests and then write the code for those 20 tests. Write one test, red/green flag it, then move on to your next test. This also keeps you following one of the core tenets of TDD, which is to write the simplest implementation possible that meets all of your requirements/scenarios.
Actually, no; I often discover functionality on the go. Let me explain the "no" a bit further:
I usually start out writing a test case for a high-level feature, defining its interface. After that, I usually set this test to ignore and continue writing tests for each piece of the interface's functionality. My cycle goes like this:
Integration Test for Story A (high level API)
Write Unit Test for method xyz called in Integration Test
Implement method (red/green/refactor)
Repeat 2+3 till Integration Test passes.
While doing so, I often realize I have forgotten some small functionality in my main test. I then usually take time to look back at my customer's requirements. If it's a fit, I go back and add a test for it, set to ignored, as I first want to finish what I started.
Sometimes I see the chance to do a refactoring. I usually finish an implementation until I reach a commit point and do the refactoring then; however, sometimes I stash my changes, go back, and do the refactoring (including new tests if necessary) first. This workflow is powered by Mercurial MQ.
For most people, TDD and incremental/agile development go together. This looks something like:
Write a test for some feature
Write just enough code to make the test pass, refactoring as necessary
Repeat.
If you happen to have a detailed specification ahead of time, you could write all of the tests first, but you'd have to live with having some tests not passing for a while.
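As a minimal illustration of that cycle (a sketch using JUnit 4; PriceCalculator and its behaviour are made up): write one failing test, then just enough code to make it pass, then refactor and pick the next test.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Step 1 (red): this test is written first and fails because
// PriceCalculator does not exist yet.
public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscount() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discountedPrice(100.0, 0.10), 0.001);
    }
}

// Step 2 (green): just enough code to make the test pass.
class PriceCalculator {
    double discountedPrice(double price, double discount) {
        return price * (1 - discount);
    }
}

// Step 3 (refactor): remove duplication, then write the next test,
// e.g. what should happen for a negative discount.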
The sooner you write the tests, the better. I usually find writing tests a harder task than actually implementing the functionality, because you have to be aware of all the possible outcomes. So I tend to write more tests when I'm "in the zone", and when I realize during coding that I might have missed a test case, I just note it down on the to-do list.
So in my opinion it's up to you, but I would write the tests in multiple batches.
The way I see it, test driven development isn't necessarily tests first development. Your tests drive your development and you are really writing your tests as you develop your application. You start by writing a simple test that fails because you haven't written the functionality yet. Then you write your code to implement that so that the tests pass.
Then you go back to your test, make modifications that will force you to add more functionality or refactor your code to follow better practices or reduce duplicate code, go fix your code to make the test pass...repeat, repeat, repeat.
A couple of videos that demonstrate this are below, although you can probably find a lot more by googling "TDD video":
http://agilesoftwaredevelopment.com/videos/test-driven-development-basic-tutorial
(oops, only one video, new users can't insert more than one link)
I try to write a test at some level before each bit of functionality. Sometimes, I have to write a little more code to get through the compiler, but I try to minimise that. Writing the test first means that I've thought about what the code is supposed to achieve before writing it.
One technique I find useful is to keep an index card or notepad handy, and make a note of all the cases that I think of along the way. That allows me to focus on the current task without losing track of all the other things I'm supposed to think about. Afterwards, I can work through the list and either complete the extra cases or drop them as not necessary.
You could do that, but you wouldn't be doing TDD. The problem (well, one of them, anyway) with writing all of your tests up front is that in any case where the requirements are non-trivial, your tests will be building in a lot of assumptions about the structure of the code you're test-driving. Big steps lead to missteps.
One of the keys of successful TDD involves taking small steps. Small steps mean fewer changes to back out when something goes wrong. Small steps mean you can more often get your head around the effects of the changes you're making. And because small steps are easier to take with confidence, they have the paradoxical effect of increasing your velocity.
The TDD cycle starts with requirements. Start by choosing a requirement you know how to define through tests immediately, in a few short steps. If you look at a requirement and you're not sure how to test it, or you think, "Yeah, but to do that, I'd need to [insert ill-defined steps] first", then you should either skip to another requirement that you know how to do, or you should break this requirement into smaller requirements that you know how to do.
Once you have that, you work in a short red-green-refactor cycle: Write a test that quantifies some part of the requirement ("red", because it fails, because it has no implementation to test yet), write any code that will pass the test ("green"), then rework the code to remove duplication, magic numbers, and other code smells ("refactor"). During the refactoring phase, you should continue working in small steps, frequently re-running the test to make sure you haven't broken anything. Continue this cycle until you can look your boss/client in the eye and call the requirement met.
Now that you have one simple piece of your system defined, you've opened up the list of requirements available to implement - requirements that are adjacent to or dependent on the one you just implemented can now be tested and implemented in smaller steps building on what you've already done.
So the upshot of all that is: Don't try to do all your tests at once. One (small) thing at a time.
The point of TDD is that you have to observe the test failing while the feature is not yet implemented. So you have to write the test before the code.
When you get into the TDD rhythm, you write one test at a time and make it work. Very short red-green-refactor cycles really establish that rhythm. That being said, there is nothing wrong with other approaches (and they may even make more sense for some types of problems), but typically the only thing you need to do about the other tests you are thinking of is write them down (or have your pair write them down, if you are pair programming) so you don't forget them. You have to do that anyway, because you could forget about a test in the middle of writing a different test.
Write just enough tests to cover one unit of code at a time, then write the actual code until it passes the tests; rinse, wash, repeat until done.
If you find yourself needing to write many tests for one unit of code (a method, a function, etc.), it might be a sign that you are trying to do too much in that unit, which in turn makes the unit difficult to test and to refactor at a later time.

TDD. When you can move on?

When doing TDD, how to tell "that's enough tests for this class / feature"?
I.e. when could you tell that you completed testing all edge cases?
With Test Driven Development, you write a test before you write the code it tests. Once you've written the code and the test passes, then it's time to write another test. If you follow TDD correctly, you've written enough tests once your code does all that is required.
As for edge cases, let's take an example such as validating a parameter in a method. Before you add the parameter to your code, you create tests which verify the code will handle each case correctly. Then you can add the parameter and associated logic, and ensure the tests pass. If you think up more edge cases, then more tests can be added (see the sketch below).
By taking it one step at a time, you won't have to worry about edge cases when you've finished writing your code, because you'll have already written the tests for them all. Of course, there's always human error, and you may miss something... When that situation occurs, it's time to add another test and then fix the code.
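A rough sketch of that parameter-validation example (using JUnit 4; the Account class and its rules are made up): the edge-case tests are written first, then just enough validation logic is added to make them pass.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AccountTest {

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeAmount() {
        new Account(100.0).withdraw(-5.0);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsAmountLargerThanBalance() {
        new Account(100.0).withdraw(150.0);
    }

    @Test
    public void reducesBalanceForValidAmount() {
        Account account = new Account(100.0);
        account.withdraw(40.0);
        assertEquals(60.0, account.getBalance(), 0.001);
    }
}

// Just enough implementation to make the tests above pass.
class Account {
    private double balance;

    Account(double balance) {
        this.balance = balance;
    }

    double getBalance() {
        return balance;
    }

    void withdraw(double amount) {
        if (amount < 0 || amount > balance) {
            throw new IllegalArgumentException("Invalid amount: " + amount);
        }
        balance -= amount;
    }
}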
Kent Beck's advice is to write tests until fear turns into boredom. That is, until you're no longer afraid that anything will break, assuming you start with an appropriate level of fear.
On some level, it's a gut feeling of "Am I confident that the tests will catch all the problems I can think of now?"
On another level, you've already got a set of user or system requirements that must be met, so you could stop there.
While I do use code coverage to tell me if I didn't follow my TDD process and to find code that can be removed, I would not count code coverage as a useful way to know when to stop. Your code coverage could be 100%, but if you forgot to include a requirement, well, then you're not really done, are you?
Perhaps a misconception about TDD is that you have to know everything up front to test. This is misguided, because the tests that result from the TDD process are like a breadcrumb trail. They tell you what has been tested in the past and can guide you to an extent, but they won't tell you what to do next.
I think TDD can be thought of as an evolutionary process. That is, you start with your initial design and its set of tests. As your code gets battered in production, you add more tests, and code that makes those tests pass. Each time you add a test here and a test there, you're also doing TDD, and it doesn't cost all that much. You didn't know those cases existed when you wrote your first set of tests, but you have gained the knowledge now, and can check for those problems at the touch of a button. This is the great power of TDD, and one reason why I advocate for it so much.
Well, when you can't think of any more cases where the code doesn't work as intended.
Part of TDD is to keep a list of things you want to implement and of problems with your current implementation, so when that list runs out, you are essentially done.
And remember, you can always go back and add tests when you discover bugs or new issues with the implementation.
That's common sense; there is no perfect answer. The goal of TDD is to remove fear: if you feel confident you have tested it well enough, move on.
Just don't forget that if you find a bug later on, write a test first to reproduce the bug, then correct it, so you will prevent a future change from breaking it again!
Some people complain when they don't have X percent coverage. Some tests are useless, and 100% coverage does not mean you have tested everything that can make your code break, only that it won't break for the ways you exercised it.
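As a small sketch of the reproduce-the-bug-first advice (JUnit 4; the StringUtil class and the reported bug are made up): the failing regression test goes in before the fix.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Regression test written first to reproduce a (made-up) bug report:
// truncate("abc", 10) used to throw StringIndexOutOfBoundsException.
public class TruncateRegressionTest {
    @Test
    public void leavesShortStringsUntouched() {
        assertEquals("abc", StringUtil.truncate("abc", 10));
    }
}

// The fix that makes the regression test pass and keeps it passing.
class StringUtil {
    static String truncate(String s, int maxLength) {
        return s.length() <= maxLength ? s : s.substring(0, maxLength);
    }
}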
A test is a way of precisely describing something you want. Adding a test expands the scope of what you want, or adds details of what you want.
If you can't think of anything more that you want, or any refinements to what you want, then move on to something else. You can always come back later.
Tests in TDD are about covering the specification; in fact, they can be a substitute for a specification. In TDD, tests are not about covering the code. They ensure the code covers the specification, because the code will fail a test if it doesn't cover the specification. Any extra code you have doesn't matter.
So you have enough tests when the tests look like they describe all the expectations that you or the stakeholders have.
Maybe I missed something somewhere in the Agile/XP world, but my understanding of the process was that the developer and the customer specify the tests as part of the feature. This allows the test cases to substitute for more formal requirements documentation, helps identify the use cases for the feature, etc. So you're done testing and coding when all of these tests pass, plus any more edge cases that you think of along the way.
Alberto Savoia says that "if all your tests pass, chances are that your tests are not good enough". I think that is a good way to think about tests: ask whether you are covering edge cases, passing unexpected parameters, and so on. A good way to improve the quality of your tests is to work with a pair, especially a tester, and get help with more test cases. Pairing with testers is good because they have a different point of view.
Of course, you could use a tool to do mutation testing and get more confidence in your tests. I have used Jester and it improved both my tests and the way I wrote them. Consider using something like it.
Kind Regards
Theoretically you should cover all possible input combinations and test that the output is correct but sometimes it's just not worth it.
Many of the other comments have hit the nail on the head. Do you feel confident about the code you have written given your test coverage? As your code evolves do your tests still adequately cover it? Do your tests capture the intended behaviour and functionality for the component under test?
There must be a happy medium. As you add more and more test cases, your tests may become brittle, as what is considered an edge case continuously changes. Following many of the earlier suggestions, it can be very helpful to get everything you can think of up front and then add new tests as the software grows. This kind of organic growth can help your tests grow without all the effort up front.
I am not going to lie: I often get lazy about going back to write additional tests. I might miss the property that contains no code or the default constructor that I do not care about. Sometimes not being completely obsessive about the process can save you time in areas that are less than critical (the 100% code coverage myth).
You have to remember that the end goal is to get a top-notch product out the door, not to kill yourself testing. If you have that gut feeling that you are missing something, then chances are you have, and you need to add more tests.
Good luck and happy coding.
You could always use a test coverage tool like EMMA (http://emma.sourceforge.net/) or its Eclipse plugin EclEmma (http://www.eclemma.org/) or the like. Some developers believe that 100% test coverage is a worthy goal; others disagree.
Just try to come up with every way within reason that you could cause something to fail. Null values, values out of range, etc. Once you can't easily come up with anything, just continue on to something else.
If down the road you ever find a new bug or come up with a new way to break the code, add a test for it.
It is not about code coverage. That is a dangerous metric, because code is "covered" long before it is "tested well".
