How to be successful in web user interface testing?

We are setting up a Selenium test campaign on a big web application.
The first thing we did was build a framework that initializes SQL data in the database before each test, launches the test, archives the results and then clears the data.
We've integrated that into a Maven 2 process, run every day by TeamCity against a dedicated database.
We've set up several Selenium tests now, but they're not used as much as planned.
The reason is that tests sometimes break for reasons other than regressions (data may have changed, a stored procedure may have been recompiled, and so on).
I would like to know whether there have been big successes in user interface testing and, if so, what made them successful. Common mistakes would also interest me.

Testability helps a lot. The biggest win for testability in web apps is if all of the HTML elements you need to interact with on the page have unique and consistent attributes. If the attributes you are using to identify the HTML elements (Selenium can locate them by id, name, CSS or XPath) are not consistent and reliable from build to build, or session to session, your test scripts will fail. The attributes must also be unique, so that the automation tool (in this case Selenium) can reliably find the element on the web page.
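For example, a script keyed on stable ids keeps working across markup changes that would break a brittle, position-based XPath. A minimal sketch using the Selenium WebDriver Java API (the URL and element ids are hypothetical):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8080/myapp/login");   // hypothetical URL

                // Stable: keyed on ids the developers agreed to keep constant.
                driver.findElement(By.id("username")).sendKeys("demo");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("login-button")).click();

                // Brittle: breaks as soon as the page layout shifts by one element.
                // driver.findElement(By.xpath("/html/body/div[2]/form/table/tbody/tr[3]/td[2]/input"));
            } finally {
                driver.quit();
            }
        }
    }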

If you want reliable unit tests, you need the same input every time, and the starting state of the database is part of that input. So you need to start from the same database each time. Of course, if you wish to test with different input, you need to create another unit test (as the results will obviously not be the same).
When I do stuff like this, I always use the same database as a starting point. Of course, some tests might fail without modifying the database in the expected way, so some subsequent tests might fail as well even though they otherwise wouldn't. If your unit-test tool allows it, you should define dependencies between tests to make sure that dependent tests are not run at all when the 'parent' one fails.
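TestNG, for example, supports this directly via dependsOnMethods: if the 'parent' test fails, its dependants are skipped rather than reported as extra failures. A minimal sketch (the test names and scenario are hypothetical):

    import org.testng.annotations.Test;

    public class CheckoutFlowTest {

        @Test
        public void createOrder() {
            // Seeds the order used by the tests below; if this fails,
            // the dependent tests are skipped instead of failing noisily.
        }

        @Test(dependsOnMethods = "createOrder")
        public void payForOrder() {
            // Only runs when createOrder passed.
        }

        @Test(dependsOnMethods = "payForOrder")
        public void shipOrder() {
            // Only runs when the whole chain above passed.
        }
    }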

I use HttpUnit, which has the added benefit of working before any styling has been added to the page.
http://httpunit.sourceforge.net/
You can attach the tests to run in Maven 2's integration-test phase.
From the site:
Written in Java, HttpUnit emulates the relevant portions of browser behavior, including form submission, JavaScript, basic HTTP authentication, cookies and automatic page redirection, and allows Java test code to examine returned pages either as text, an XML DOM, or containers of forms, tables, and links.
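A minimal sketch of an HttpUnit test (the URL, form fields and expected text are hypothetical):

    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebForm;
    import com.meterware.httpunit.WebResponse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class LoginPageTest {

        @Test
        public void loginFormSubmits() throws Exception {
            WebConversation conversation = new WebConversation();

            // Fetch the page; no browser or styling involved, just the HTTP response.
            WebResponse page = conversation.getResponse("http://localhost:8080/myapp/login"); // hypothetical URL

            // Fill in and submit the first form on the page.
            WebForm form = page.getForms()[0];
            form.setParameter("username", "demo");      // hypothetical field names
            form.setParameter("password", "secret");
            WebResponse result = form.submit();

            // Examine the returned page as plain text.
            assertTrue(result.getText().contains("Welcome"));
        }
    }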

Related

How to create a performance testing framework in JMeter?

For functional automation we usually create a framework that is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information about it.
You can consider JMeter itself as a "framework" which already comes with test elements to build requests over different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts at the protocol level, so the requests will differ from application to application.
JMeter does have a mechanism for re-using pieces of a test plan as modules so you won't have to duplicate your code; check out Test Fragments and the Module Controller. However, it is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests to a continuous integration process: you get a build step that executes the performance tests and publishes the reports, so you can re-use the same test setup and reporting routine, and the only thing that changes from application to application is the .jmx test script(s). See the JMeter Maven Plugin and/or the JMeter Ant Task for more details.
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API, where the exposed external interface is static but the code handling it on the back end is changing, then you have a good shot at building something with a long life.
But if you are like the majority of web sites in the universe, you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider your scripts perishable, with a limited life, and accept that you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there techniques you can use to insulate yourself? Yes. The rule of thumb is that the higher up the stack you build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that the more intelligence sits under the covers of your test interface, the higher the resource cost per virtual user, which means more hosts for test execution and more skew from client-side code that can distort your view of what is coming from the server. For example, run a Selenium script instead of a bare JMeter script: a browser is invoked, you get the benefit of all the local JavaScript processing to handle the dynamic changes, and your script has a longer life.

Why is caching usually disabled in test environments?

On our applications we have a lot of functional tests using Selenium.
We understand that it is good practice to have the server where the tests run be as similar as possible to the production servers, and we try to follow that as much as possible.
But that is very hard to achieve 100%, so we have a different settings file on our server for the changes we want in the staging environment (for example, we turn e-mail sending off because of the additional architecture it requires).
In fact, lots of server frameworks recommend having an isolated front controller (environment) for testing, to make these small changes easy.
By default, most frameworks, such as ours, recommend that the testing environment have its cache turned off. WHY?
If we want to emulate production as closely as possible, what's the possible advantage of having the server's cache turned off when performing functional tests? There can be bugs that are only found with the cache on, and having it on might also have the benefit of speeding up our test execution!
Don't we just need to make sure that the cache is cleared before starting a new batch of functional tests, the same way we clear the cache when deploying a new version to production?
A colleague of mine suggests that the reason for this could be that the cache can generate false positives: errors that are caused not by badly implemented features (the main target of those tests) but by the cache system itself. But even if those really happen (I suppose it depends on how the cache is used), why would they be false positives?
To best answer this question I will clarify some points.
(be aware that this is based on my experience)
Integration tests using the browser are typically "black-box tests", which means that they are written without knowledge of the code - that is, without knowing whether the cache is being used or not.
These tests are usually designed around tasks that are performed during normal use of the system, and those tasks are chosen for automation depending on certain conditions of use (mainly reusability and criticality/importance, but also the cost of implementation). So most of the time, we will not need or want to test caching behaviour.
By convention, a test (of any kind) must be created with a single purpose and with as few dependencies as possible. Why?
When the test fails, we can quickly find the source of the failure.
Smaller tests are easier to extend, fix, remove...
We do not spend too much time first debugging the test code and then debugging the system code.
Integration testing should follow this convention.
Answering the question:
If we want to check a particular task, we must isolate it as much as possible.
For example, if we want to verify that the user logs in correctly, we have to delete the cookies to be sure that they do not influence the result (because they may). If, on the other hand, we want to test the cookies, we have to use an environment where they are not deleted.
So, in short:
If there is a need to test the caching behaviour, then we need to create an "isolated" environment where this is possible.
The usual purpose of integration tests is to test functionality, so the frameworks' default is to have the cache disabled.
This does not mean that we shouldn't create our own environment to test the caching behaviour.
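With Selenium, for instance, that isolation typically means explicitly resetting client-side state before each functional test rather than relying on whatever the previous test left behind. A minimal WebDriver sketch (the staging URL is a placeholder):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class LoginIsolationTest {
        private WebDriver driver;

        @Before
        public void startClean() {
            driver = new FirefoxDriver();
            driver.get("http://staging.example.com/");   // placeholder staging URL
            driver.manage().deleteAllCookies();          // no leftover session state can influence this test
        }

        @Test
        public void userCanLogIn() {
            // ... drive the login form and assert on the result ...
        }

        @After
        public void stop() {
            driver.quit();
        }
    }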

Testing a lot of external APIs in Rails

I'm developing a Rails app with a lot of dependencies on external APIs, for example Delicious.
All APIs share two workflows:
On the first call they are going to load all data since the beginning of time.
All following calls will load data filtered by the last execution time (if supported).
Testing them for real means I must create a test account for each API, or at least use my private one - even with VCR, because they would each be called at least once. And my biggest problem: I would have to mess around a lot with Dates and Times to emulate the two different workflows mentioned above. Though Timecop makes that really easy, it still feels like a pain in the ass.
Another approach is to fake the API calls and their corresponding responses completely, but this means no real tests, and furthermore I would never notice changes to or problems with the APIs.
Any suggestions? Maybe a good combination of both ways?
Do both.
Start by mocking/stubbing everything in your regular, frequently-run test suite. Do this for all the fine-grained model/controller testing.
Then add end-to-end tests (e.g. integration tests) that cover the usual workflow scenarios and hit the real (test) servers.
Alternatively, use a different test suite for the end-to-end testing - e.g. Cucumber instead of Test::Unit, or Selenium/Watir, whatever - as long as it's separate from your usual test suite.

How would you stress test a dynamic site, when you don't know what the URLs will be ahead of time?

This isn't a question of what stress testing tools are out there. I'm afraid it's a lot harder than that. (At least for me)
Consider a restful architecture for a forum or blog that generates random IDs for each post.
Simulating creating those topics/articles would be simple, because you'd just be posting form data to an endpoint like: /article, or /topic
But how do you then stress test commenting on those articles/topics? This is different, because the comments need to belong to an article/topic, which means that you need the ids of those items. However, if all you can do is issue posts, and you have no way of pulling those ids, you'd be unable to create them.
I'm creating a site that is similar in this regard, and I have no idea how to stress test the creation of the comments.
I have two ideas, and they're both pretty awful:
Generate a massive system ahead of time with some kind of factory, and then freeze it. From there, I figure I'd have to use some kind of browser automation to create my 'comments' on all of this. The automation would I suppose go through a recording proxy, like what JMeter offers. Then, to run the test, I reload the database, and replay the massive log file.
Use browser automation for the whole thing, taking advantage of the dynamic links delivered in the HTML page. The only option here would be Selenium, and really, we're talking a massive selenium grid that would be extremely expensive. Probably very difficult to maintain also.
Option 2 is completely infeasible as near as I can tell, and option 1 sounds excruciating. I'm really hoping someone can suggest something more clever.
Option 1.
I mean, implementation notes aside, you're basically just asking for a testing environment. So, the answer is to make one. In whatever fashion:
Generate it
Make it once and reload it
Randomise it
Whatever. It's the approach to go with.
How you do your testing is kind of a side issue (unit testing/browser/whatever, up to you).
But you've reached a point where you need to test with real data. So make it happen.
This is a common problem. We handle it by extracting the dynamic parts of the URLs from the server responses. I presume this system uses a web browser client, which implies that those URLs are being sent in the server responses. If they are in the responses, then you CAN get them. However, since you said "if all you can do is issue posts, and you have no way of pulling those ids", perhaps this is not the case? If so, can you clarify?
We've recently been doing a lot of testing of Drupal systems for our customers, which have exactly the problem you've described. We either solve it by extracting the IDs dynamically from the page as the user browses to the page they want to comment on, or we use option 1, or a combination of both. Note that if you have a load testing tool handy, then generating content is not too difficult - use the tool to do it, i.e. run a "content generation" load test. Besides yielding useful data of its own accord, that will give you a test database you can back up and restore as needed to maintain your test infrastructure. Then you can run the test against a more realistic environment - one that already has lots of content in it (assuming, of course, that this is realistic for your purposes).
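Stripped of any particular tool, the approach is simply: scrape the dynamic ids out of one response and feed them into the next request (in JMeter this is the job of a Regular Expression Extractor). A rough Java sketch, with hypothetical endpoints, URL pattern and form field:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class CommentLoadStep {
        public static void main(String[] args) throws Exception {
            // 1. Fetch the article listing the same way a browser would.
            HttpURLConnection list = (HttpURLConnection) new URL("http://localhost:8080/articles").openConnection(); // hypothetical endpoint
            String html = new Scanner(list.getInputStream(), "UTF-8").useDelimiter("\\A").next();

            // 2. Pull the dynamic article ids out of the response body.
            Matcher m = Pattern.compile("/article/([0-9a-f-]+)").matcher(html);  // hypothetical URL pattern
            while (m.find()) {
                String articleId = m.group(1);

                // 3. Post a comment against the id we just extracted.
                HttpURLConnection post = (HttpURLConnection) new URL("http://localhost:8080/article/" + articleId + "/comments").openConnection();
                post.setRequestMethod("POST");
                post.setDoOutput(true);
                post.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
                try (OutputStream out = post.getOutputStream()) {
                    out.write("body=load+test+comment".getBytes(StandardCharsets.UTF_8));
                }
                System.out.println(articleId + " -> HTTP " + post.getResponseCode());
            }
        }
    }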
If you are interested, I'd be happy to demo how we solve the problem using our software (Web Performance Load Tester).
I have used Visual Studio to solve this kind of problem. Visual Studio allows C# coded web tests that can programmatically parse the HTML returned and take action based on it.
I was load testing a SharePoint website and required information to be populated ahead of time. I did create a load test that was specifically for creating "random" pages of content ahead of time. I populated a test harness database with the urls ahead of time, allowing some control over the pages that were loaded.
With a list of "articles" available and a list of potential comments, it is possible to code a pseudo-random number generator (inside a stored procedure because of the asynchronous nature of the test harness) to get a repeatable load test. That meant that the site would be populated in the same way each time the load test was run.
It does take some effort to create a decent way of populating the site off the bat, but the return in the relevance of the load test is quite good.
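The repeatability comes from using a fixed seed: the same seed produces the same sequence of "random" picks on every run. A small illustration of the idea in plain Java (their version lives in a stored procedure; the article list here is hypothetical):

    import java.util.Arrays;
    import java.util.List;
    import java.util.Random;

    public class RepeatableSelection {
        public static void main(String[] args) {
            List<String> articleIds = Arrays.asList("a1", "a2", "a3", "a4", "a5"); // hypothetical pre-generated content
            Random random = new Random(42L);   // fixed seed: identical sequence of choices every run

            for (int i = 0; i < 10; i++) {
                String target = articleIds.get(random.nextInt(articleIds.size()));
                System.out.println("post comment on article " + target);
            }
        }
    }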

Database integration tests

When you are doing integration tests with either just your data access layer or the majority of the application stack, what is the best way to prevent multiple tests from clashing with each other if they are run against the same database?
Transactions.
What the Ruby on Rails unit test framework does is this:
Load all fixture data.
For each test:
BEGIN TRANSACTION
# Yield control to user code
ROLLBACK TRANSACTION
End for each
This means that
Any changes your test makes to the database won't affect other threads while it's in-progress
The next test's data isn't polluted by prior tests
This is about a zillion times faster than manually reloading data for each test.
I for one think this is pretty cool
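The same rollback-per-test pattern, sketched with JUnit and plain JDBC outside Rails (the connection URL is a placeholder, and the code under test is assumed to be handed this connection):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class AccountDaoTest {
        private Connection connection;

        @Before
        public void beginTransaction() throws Exception {
            connection = DriverManager.getConnection("jdbc:postgresql://localhost/testdb", "test", "test"); // placeholder URL
            connection.setAutoCommit(false);   // BEGIN TRANSACTION
        }

        @Test
        public void insertsAreInvisibleToOtherTests() throws Exception {
            // ... run the code under test against 'connection'; any rows it
            // writes exist only inside this uncommitted transaction ...
        }

        @After
        public void rollback() throws Exception {
            connection.rollback();             // ROLLBACK TRANSACTION: the fixture data is untouched
            connection.close();
        }
    }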
For simple database applications I find using SQLite invaluable. It allows you to have a unique and standalone database for each test.
However, it only works if you're using simple, generic SQL functionality, or can easily hide the slight differences between SQLite and your production database system behind a class; I've always found that to be fairly easy in the SQL applications I've developed.
Just to add to Free Wildebeest's answer, I have also used HSQLDB to do a similar type of testing, where each test gets a clean instance of the DB.
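For instance, each test class can spin up its own throwaway in-memory HSQLDB instance, so there is nothing shared for tests to clash over. A minimal sketch (the schema is hypothetical, and the HSQLDB jar is assumed to be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import static org.junit.Assert.assertEquals;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class InMemoryDbTest {
        private Connection connection;

        @Before
        public void createFreshDatabase() throws Exception {
            // A uniquely named in-memory database: created empty, discarded after the test.
            connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb" + System.nanoTime(), "sa", "");
            try (Statement s = connection.createStatement()) {
                s.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))"); // hypothetical schema
                s.execute("INSERT INTO customer VALUES (1, 'Alice')");
            }
        }

        @Test
        public void startsFromAKnownState() throws Exception {
            try (Statement s = connection.createStatement()) {
                ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM customer");
                rs.next();
                assertEquals(1, rs.getInt(1));
            }
        }

        @After
        public void shutdown() throws Exception {
            connection.createStatement().execute("SHUTDOWN");  // discards the in-memory database entirely
        }
    }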
I wanted to accept both Free Wildebeest's and Orion Edwards' answers but it would not let me. The reason I wanted to do this is that I'd come to the conclusion that these were the two main ways to do it, but which one to chose depends on the individual case (mostly the size of the database).
Also run the tests at different times, so that they do not impact the performance or validity of each other.
While not as clever as the Rails unit test framework in one of the other answers here, creating distinct data per test or group of tests is another way of doing it. The level of tediousness with this solution depends on the number of test cases you have and how dependent they are on one another; the tediousness holds true whether you have one database per test or per group of dependent tests.
When running the test suite, you load the data at the start, run the test suite, unload/compare results making sure the actual result meets the expected result. If not, do the cycle again. Load, run suite, unload/compare.
