How complex should smoke tests be?

So we've been running a daily build on our current project for many months now. The smoke test that goes along with that daily build isn't very complex, though - we run a few NUnit tests on our main class library (which, admittedly, doesn't offer great code coverage), and we make sure that things compile and build. The application in question is an ASP.NET site which consumes some business objects (which include LINQ-to-SQL).
Are there more complex smoke tests that we should be running, particularly on the ASP.NET sites? How would we develop a smoke test for an ASP.NET site, for that matter?

As well as unit testing, it can be good to deploy the site to a staging server with some example data, as close to live-like as possible. Then use an HTTP traffic-generating script to simulate user traffic and sessions. You can monitor debug logging, exceptions and other testing code on the back-end. You can also take performance measurements here.
Much like a more intense, iterative version of playing with it in the browser yourself.
You can do this by defining (or discovering through inspection) your public resources and their inputs. The scripts can then try to cause validation problems, odd permutations of site flow and other things that test the entire context of the site in a live setting.
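As a rough illustration, such a script can be as simple as a loop that replays a scripted session against the staging host. Below is a minimal sketch in Python (the site itself is ASP.NET, but any language can drive HTTP traffic); the host name, paths, and form fields are hypothetical placeholders for your own public resources.

    # Minimal traffic-generator sketch (Python shown for brevity; the site
    # under test is ASP.NET). Host, paths, and form fields are hypothetical.
    import random
    import requests

    BASE_URL = "http://staging.example.com"
    PAGES = ["/", "/products", "/products/42", "/search?q=widget"]

    def simulate_session():
        session = requests.Session()
        # Log in once per simulated user (field names are made up).
        session.post(BASE_URL + "/login",
                     data={"user": "testuser", "password": "secret"})
        # Walk the public pages in a random order and log timings.
        for path in random.sample(PAGES, k=len(PAGES)):
            r = session.get(BASE_URL + path, timeout=10)
            print(f"{path}: {r.status_code} in {r.elapsed.total_seconds():.2f}s")
        # Deliberately send dubious input to exercise validation paths.
        session.post(BASE_URL + "/search", data={"q": "<script>'&%"})

    if __name__ == "__main__":
        for _ in range(20):  # 20 simulated user sessions
            simulate_session()
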
If testing is not complete - from unit testing through to "does it play nice with real data and traffic" - then you are ultimately going to be running around like a headless chicken fixing bugs later.

Smoke tests, by nature, should be superficial: Does it compile? Deploy? Does the welcome page load? Maybe load a test page which does a query against the database to see that this connection works, too. That's it.
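To make that concrete, here is a minimal sketch of such a check in Python (the host and the database-probe URL are hypothetical); it assumes the compile and deploy steps have already succeeded.

    # Minimal smoke-test sketch: does the welcome page load, and does a
    # trivial page that queries the database respond? URLs are hypothetical.
    import sys
    import requests

    BASE_URL = "http://staging.example.com"

    def smoke_test():
        home = requests.get(BASE_URL + "/", timeout=10)
        assert home.status_code == 200, f"home page returned {home.status_code}"

        db_probe = requests.get(BASE_URL + "/health/db", timeout=10)
        assert db_probe.status_code == 200, "database probe page failed"

    if __name__ == "__main__":
        try:
            smoke_test()
        except AssertionError as exc:
            print(f"SMOKE TEST FAILED: {exc}")
            sys.exit(1)
        print("smoke test passed")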

You should not be doing smoke tests. Are you aware of the etymology of that term? A "smoke test", in electronics, is when you turn on the power and see if any smoke comes out.
You should be doing more comprehensive unit tests; enough to give you good code coverage. This is what you should do on every build. You should also try to do a deployment, and run some "installation verification tests".

Related

Order of Operations for System Testing?

I was taking an exam yesterday, and I noticed they asked in which order the following occur (and I'll put the order I deemed it to be here):
Unit Testing (Always write your unit tests first!)
Integration Testing (After you have some code and it works with other code / systems)
Validation Testing (Keep your data in a consistent state and make sure no bad data is input)
User / Acceptance Testing (It's all about the users otherwise why are we building a system in the first place?)
Is this about right?
Personally I think load-testing or database tuning oughta be in there at the end, but it wasn't on the test.
This question doesn't make a whole lot of sense.
For one thing, different people have different definitions of pretty much every kind of testing you have mentioned. For example, in Extreme Programming (XP) Acceptance Tests (while being derived from User Stories) have nothing to do with User Testing, or User Acceptance Testing (UAT). Using the XP definition, Acceptance Testing refers to automated tests that run on a build agent before code makes it anywhere near a user. User Acceptance Testing (UAT) on the other hand, is typically a manual process that happens after a proposed final version has been created and deployed to a UAT environment.
As pointed out in the comments already, Validation Testing is not a common concept with a widely accepted definition. Integration testing also means different things to different people. To some, it is testing that different processes/applications work together (in a UAT environment, for example). For others, it is simply automated tests that involve more than one class, i.e. not Unit Tests.
Also, what do you mean by "order"? Do you mean the order in which the tests are written, or the order in which they are run before releasing code to the wild and/or production environment?
In any case, the question is largely irrelevant in the real world because different processes work for different teams. For example, I myself would always write an Acceptance Test before any Unit Tests. Following a test first approach, you always write a Unit Test before modifying a class, yes? So why wouldn't you write an Acceptance Test before modifying the whole system?
If "Acceptance Testing" means anything close to the XP definition of acceptance testing, then I don't think it makes sense for this to come last.
This sounds like the kind of "exam question" that only makes sense in the context of the course that you took before the exam. Without all that information (particularly the definitions of each kind of testing) it is very difficult to provide a useful answer to this question.
Instead of "validation testing", "system testing" is the correct term. Database testing is part of integration and system testing, and load testing is typically performed during the system and user acceptance testing phases.

Debugging and Testing a Web Application Efficiently

I have written a web application which I am trying to test, but I am finding that some of the things that I am doing are really repetitive and inefficient. For example, I might want to test just the reporting component of the application, but in order to access the reporting section you are required to log in. I find myself logging in all the time just to test a completely unrelated component. What are some strategies that I can use to bypass these kinds of constraints?
Maybe you should unit test these functionalities. That way you can automate the repetitive tasks.
It also helps to improve your code quality ;)
A link to get you started : http://msdn.microsoft.com/en-us/magazine/dd942838.aspx
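For the specific login annoyance, one common approach is to perform the login once in a shared fixture and reuse the authenticated session in every test. Here is a hedged sketch using pytest and requests (the URL, credentials, and report path are all hypothetical):

    # Log in once for the whole test run and reuse the session, so tests of
    # unrelated components can skip the login step. Names are hypothetical.
    import pytest
    import requests

    BASE_URL = "http://localhost:8080"

    @pytest.fixture(scope="session")
    def logged_in():
        s = requests.Session()
        s.post(BASE_URL + "/login", data={"user": "tester", "password": "secret"})
        return s

    def test_report_loads(logged_in):
        # Goes straight to the reporting component without logging in again.
        resp = logged_in.get(BASE_URL + "/reports/monthly")
        assert resp.status_code == 200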

MBUnit test matrix optimization - performance problems in automated UI tests

We're currently using MBUnit for both unit testing and UI testing. For UI testing, the setup costs for the test matrix axes are pretty high (login, browser instance, navigating to the page, etc.). In order to avoid setting these up for each test case, we are partly relying on AssemblyFixture to manage some of them.
However, because it's not possible to filter out cases that aren't applicable to a particular combination, we can't really use that optimization. So currently we are doing some of the setup per test case, which is horribly inefficient.
We could put if statements inside the test code to check for valid combinations, but we don't want that either - it pollutes the test code.
How do you handle such optimizations, or test matrix management in general? Is there a better practice, perhaps in another testing framework?
Until recently, I've always thought of UI Automation as black box testing where my UI tests drive against a fully stand alone web site or application. As a result, the tests run under the constraint of normal execution and are subject to a host of environment overhead issues.
I've recently adopted the notion of "shallow" and "deep" UI tests where each set of tests run under an optimized configuration to ease environmental differences and speed things up. For example, the login controller is swapped out with a mechanism that avoids OAuth login overhead and is hard coded with fixed usernames. The product catalog skips database lookup and is hard coded with a few fixed items. The ecommerce backend is swapped out to perform speedy operations that accept/reject transactions based on credit card and amount.
Under a "shallow" configuration I can perform "deep" testing against the UI logic. When I switch to a "deep" configuration, it resembles production and I can perform "shallow" testing of fully integrated components such as login, product catalog, search, etc.
A mix of testing strategies is required.
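As an illustration of the configuration switch described above, here is a hedged sketch (all class and variable names are made up, and Python is used only for brevity) of wiring in stubbed collaborators under a "shallow" configuration:

    # Pick real or stubbed collaborators based on a configuration flag so UI
    # tests can skip OAuth round-trips, database lookups, etc. All names here
    # are hypothetical.
    import os

    class RealOAuthLogin:
        def authenticate(self, username, password):
            raise NotImplementedError("talks to the real OAuth provider")

    class StubLogin:
        """Hard-coded users; avoids OAuth overhead in 'shallow' runs."""
        def authenticate(self, username, password):
            return username in ("alice", "bob")

    class RealCatalog:
        def items(self):
            raise NotImplementedError("queries the real product database")

    class StubCatalog:
        def items(self):
            return ["widget", "gadget", "gizmo"]  # fixed test items

    def build_app(config):
        if config == "shallow":
            return {"login": StubLogin(), "catalog": StubCatalog()}
        return {"login": RealOAuthLogin(), "catalog": RealCatalog()}

    app = build_app(os.environ.get("TEST_CONFIG", "deep"))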
Maybe the ui-test-automation-best-practices article is helpful for you. It has some examples of how to improve the performance of automated UI testing by minimizing logins and context changes.

Running tests on Production Code/Server

I'm relatively inexperienced when it comes to Unit Testing/Automated Testing, so pardon the question if it doesn't make any sense.
The current code base I'm working on is so tightly coupled that I'll need to refactor most of the code before ever being able to run unit tests on it, so I read some posts and discovered Selenium, which I think is a really cool program.
My client would like specific automated tests to run every ten minutes on our production server to ensure that our site is operational, and that certain features/aspects are running normally.
I've never really thought to run tests against a production server because you're adding additional stress to the site. I always thought you would run all tests against a staging server, and if those work, you can just assume the production site is operational as long as the hosting provider doesn't experience an issue.
Any thoughts on your end on testing production code on the actual production server?
Thanks a lot guys!
Maybe it would help if you thought of the selenium scripts as "monitoring" instead of "testing"? I would hope that every major web site out there has some kind of monitoring going on, even if it's just a periodic PING, or loading the homepage every so often. While it is possible to take this way too far, don't be afraid of the concept in general. So what might some of the benefits of this monitoring/testing to you and your client?
Somehow not all the best testing in the world can predict the odd things users will do, either intentionally or by sheer force of numbers (if 1 million monkeys on typewriters can write Hamlet, imagine what a few hundred click-happy users can do?). Pinging a site can tell you if it's up, but not if a table is corrupted and a report is now failing, all because a user typed in a value with an umlaut in it.
While your site might perform great on the staging servers, maybe it will start to degrade over time. If you are monitoring the performance of those Selenium tests, you can stay ahead of slowness complaints. Of course, as you mentioned, be sure your monitoring isn't causing problems either! You may have to convince your client that certain tests are appropriate to run every X minutes, and others should only be run once a day, at 3am.
If you end up making an emergency change to the live site, you'll be more confident knowing that tests are running to make sure everything is ok.
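Treated as monitoring, the Selenium script can stay deliberately small: one scripted check run every few minutes rather than the full suite. A minimal sketch (the URL, page title, and element id are hypothetical):

    # A single lightweight Selenium check, intended to be scheduled every few
    # minutes as monitoring, not as a full test run. URL, title, and element
    # id are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def check_site():
        driver = webdriver.Chrome()
        try:
            driver.get("https://www.example.com/")
            assert "Example" in driver.title, "unexpected page title"
            # One quick end-to-end interaction to prove the page is alive.
            driver.find_element(By.ID, "search-box").send_keys("widget")
        finally:
            driver.quit()

    if __name__ == "__main__":
        check_site()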
I have worked on similar production servers for a long time. From my experience, I can say that it is always better to test your changes/patches in the staging environment and then simply deploy them to the production servers. This is because the staging and production environments are alike, except for the volume of data.
If really required, it is OK to run a few tests on the production servers once the code/patch is installed, but always running the tests on the production server is not recommended.
My suggestion would be to shadow the production database down to a staging/test environment and run your unit tests there nightly. The approach suggested by the client would be good for making sure that new data introduced into the system didn't cause exceptions within the system, but I do not agree with doing this in production.
Running it in a staging environment would give you the ability to evaluate features as new data flows into the system without using the production environment as a test bed.
[edit] To make sure the site is up, you could write a simple program which pings it every 10 minutes rather than running your whole test suite against it.
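Something along these lines would do (the URL is a placeholder); schedule it with cron or Windows Task Scheduler and alert on a non-zero exit code:

    # Simple "is it up?" check, instead of running the full test suite
    # against production. The URL is hypothetical.
    import sys
    import requests

    def ping(url="https://www.example.com/"):
        try:
            return requests.get(url, timeout=15).status_code == 200
        except requests.RequestException:
            return False

    if __name__ == "__main__":
        sys.exit(0 if ping() else 1)  # non-zero exit can trigger an alert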
What will change in production environment that you would need to run automated tests? I understand that you may need monitoring and alerts to make sure the servers are up and running.
Whatever the choice, whether it be a monitoring or testing type solution, the thing that you should be doing first and foremost for your client is warning them. As you have alluded to, testing in production is almost never a good idea. Once they are aware of the dangers and if there are no other logical choices, carefully construct very minimal tests. Apply them in layers and monitor them religiously to make sure that they aren't causing any problems to the app.
I agree with Peter that this sounds more like monitoring than testing. A small distinction but an important one I think. If the client's requirements relate to Service Level Agreements then their requests do not sound too outlandish.
Also, it may not be safe to assume that if the service provider is not experiencing any issues that the site is functioning properly. What if the site becomes swamped with requests? Or perhaps SQL that runs fine in test starts causing issues (timeouts, blocking etc.) with a larger production database?

How to automate integration testing?

I'd like to know something. I know that to make your tests easier you should use mocks during unit testing, so that you only test the component you want, without external dependencies. But at some point you have to bite the bullet and test the classes which interact with your database, files, network, etc.
My main question is: what do you do to test these classes?
I don't feel that installing a database on my CI server is a good practice, but do you have other options?
Should I create another server with other CI tools, with all externals dependencies?
Should I run integration test on my CI as often as my unit tests?
Maybe a full-time person should be in charge of testing these components manually? (Or in charge of creating the test environment and configuring the interaction between your class and your external dependency, like editing the config files of your application.)
I'd like to know how you do it in the real world.
"I'd like to know how you do it in the real world?"
In the real world there isn't a simple prescription about what to do, but there is one guiding truth: you want to catch mistake/bugs/test failures as soon as possible after they are introduced. Let that be your guide; everything else is technique.
A couple common techniques:
Tests running in parallel. This is my preference; I like to have two systems, each running their own instance of CruiseControl* (which I'm a committer for), one running the unit tests with fast feedback (< 5 minutes) while another system runs the integration tests constantly. I like this because it minimizes the delay between when a checkin happens and a system test might catch it. The downside that some people don't like is that you can end up with multiple test failures for the same checkin, both a unit test failure and an integration test failure. I don't find this a major downside in practice.
A life-cycle model where system/integration tests run only after unit tests have passed. There are tools like AnthillPro* that are built around this kind of model and the approach is very popular. In their model they take the artifacts that have passed the unit tests, deploy them to a separate staging server, and then run the system/integration tests there.
If you've more questions about this topic I'd recommend the Continuous Integration and Testing Conference (CITCON) and/or the CITCON mailing list.
There are lots of CI and build/process automation tools out there. These are just representatives of their class of tools.
The approach I've seen taken most often is to run unit tests immediately on checkin, and to run more lengthy integration tests at fixed intervals (possibly on a different server; that's really up to your preference). I've also seen integration tests split into "short-running" integration tests and "long-running" integration tests, which are run at different intervals (the "short-running" tests run every hour, for example, and the "long-running" tests run overnight).
The real goal of any automated testing is to get feedback to developers as quickly as is feasible. With that in mind, you should run integration tests as often as you possibly can. If there's a wide variance in the run length of your integration tests, you should run the quicker integration tests more often, and the slower integration tests less often. How often you run any set of tests is going to depend on how long it takes all the tests to run, and how disruptive the test runs will be to shorter-running tests (including unit tests).
I realize this doesn't answer your entire question, but I hope it gives you some ideas about the scheduling part.
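One simple way to make such a split workable is to tag tests by suite so the CI server can pick which subset to run on which schedule. A hedged sketch using pytest markers (the marker names are made up and would need to be registered in pytest.ini; the original poster's stack may be .NET or Java, but the idea carries over):

    # Tag tests by suite so different subsets can run on different schedules.
    # Marker names are made up for illustration.
    import pytest

    @pytest.mark.unit
    def test_price_calculation():
        assert round(19.99 * 2, 2) == 39.98  # fast; runs on every check-in

    @pytest.mark.integration_short
    def test_database_roundtrip():
        ...  # placeholder: talks to a real database; run hourly

    @pytest.mark.integration_long
    def test_full_order_workflow():
        ...  # placeholder: lengthy end-to-end scenario; run nightly

A CI job or cron entry would then invoke, say, pytest -m unit on every check-in, pytest -m integration_short hourly, and pytest -m integration_long overnight.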
Depending on the actual nature of the integration tests I'd recommend using an embedded database engine which is recreated at least once before any run. This enables tests of different commits to work in parallel and provides a well defined starting point for the tests.
Network services - by definition - can also be installed somewhere else.
Always be very careful though, to keep your CI machine separated from any dev or prod environments.
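A hedged sketch of the embedded-database approach (SQLite is used purely for illustration; the schema and seed data are made up): each run recreates a throwaway database so parallel runs get a clean, well-defined starting point.

    # Recreate a throwaway embedded database before each run so every test
    # starts from a known state. Schema and seed data are hypothetical.
    import sqlite3

    def fresh_database():
        conn = sqlite3.connect(":memory:")  # new, empty engine per run
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO customers (name) VALUES (?)",
                         [("Alice",), ("Bob",)])  # known seed data
        conn.commit()
        return conn

    def test_customer_count():
        conn = fresh_database()
        count, = conn.execute("SELECT COUNT(*) FROM customers").fetchone()
        assert count == 2
        conn.close()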
I do not know what kind of platform you're on, but I use Java. Where I work, we create integration tests in JUnit and inject the proper dependencies using a DI container like Spring. They are run against a real data source, both by the developers themselves (normally a small subset) and the CI server.
How often you run the integration tests depends on how long they take to run, in my opinion. Run them as often as you can. Leave the real person out of this, and let him or her run manual system tests in areas that are difficult or too expensive to automate testing for (for instance: spelling, position of different GUI components). Leave the editing of config files to a machine. Where I work, we have system variables (DEV, TEST and so on) set on the computers, and let the app choose a config file based on that.
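For illustration, that kind of environment-driven config selection can look like the sketch below (the variable name and file names are hypothetical; the answer above describes a Java/Spring setup, but the idea is language-agnostic):

    # Let the app pick its configuration from a machine-level environment
    # variable so nobody edits config files by hand between environments.
    # The variable name and file names are hypothetical.
    import os
    import configparser

    def load_config():
        env = os.environ.get("APP_ENV", "DEV")  # e.g. DEV, TEST, PROD
        config = configparser.ConfigParser()
        config.read(f"config.{env.lower()}.ini")  # config.dev.ini, config.test.ini, ...
        return config

    config = load_config()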
