We designed some tests in Microsoft Test Manager for our first sprint (we are using agile) and executed them. Now we are on our second sprint and we need to run these tests again as regression. Is there a way we can reset the statuses of the tests without destroying the test execution history of the previous sprint?
There isn't a way. We ended up cloning the test cases into a different plan each time.
You can reset your tests back to the active state.
This way you know you have to run these tests again in the current sprint.
I would only clone test cases if you are going to change them in the next sprint (e.g. add additional steps).
As long as they remain unchanged there is no reason to clone them, from my point of view.
Create a new test suite and add your cases to it. Each suite keeps the final status of each case within the suite - you can see this in the Test tab. The overall status of the case (that is, outside of any suite) is whatever the last run was.
I agree that you don't want to clone the cases unless you are changing the actual steps of the cases between sprints.
I am not sure what the best practice is for running unit tests.
I suppose every developer should run the unit tests locally and make sure they pass before pushing the code to the Git repo. Then the CI server (Jenkins) would pick up the new changes and run the tests again.
Why do we want to do it twice? Should we?
If the unit tests take a lot of time to run, do we expect the developer to pick only the tests related to the change, or to run every test (even those outside the scope of his own modules), assuming we have a big Maven multi-module POM?
Also consider that we usually have powerful hardware for the CI server and relatively less powerful workstations for developers.
If the unit tests take a lot of time to run, do we expect the developer to pick only the tests related to the change, or to run every test (even those outside the scope of his own modules), assuming we have a big Maven multi-module POM?
When a developer changes a class, modifies the database structure, or makes any change that could have side effects, he/she will not and cannot know all the potential side effects on the whole application.
And he/she should never try to be too clever by saying: "I know what may have been broken by my changes, so I will run just these tests."
Unit testing ensures some degree of code quality: don't make it less helpful.
Unit tests are also non-regression tests. Skipping the full non-regression suite before commit and push means taking the risk of introducing flawed code into source control.
You never want to do that.
Unit tests have to execute fast
If unit tests take so long to execute that they hurt developer velocity, it generally means that they are badly designed, or that they are not real unit tests. A unit test is designed to run fast.
If you write tests that take a long time to run because they require starting a server, loading/cleaning data in a database, loading/unloading containers, and so forth, it means you didn't write unit tests but integration tests. These tests are not designed to be executed regularly and automatically on the local development machine, but on a CI tool.
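As a minimal sketch of that separation (assuming JUnit 5; the tag name, class, and the Maven Surefire configuration mentioned in the comments are just one possible way to wire it up), the slow tests can be tagged so that local runs exclude them while the CI build runs everything:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Illustrative only: one fast unit test and one slow, tagged integration test.
class TestSeparationSketch {

    // Plain production-style logic used by the fast test below.
    static int applyDiscount(int price, int percent) {
        return price - price * percent / 100;
    }

    // Fast, isolated unit test: cheap enough to run on every developer machine
    // before every push.
    @Test
    void appliesDiscount() {
        assertEquals(90, applyDiscount(100, 10));
    }

    // Slow test that needs external infrastructure (database, server, container).
    // Tagging it lets the local build exclude it, e.g. via Maven Surefire's
    // <excludedGroups>integration</excludedGroups>, while the CI build runs it.
    @Tag("integration")
    @Test
    void persistsAndReloadsAnOrder() {
        // ...start or connect to the database, save an order, read it back...
    }
}
```

Another common convention is to keep the slow tests in classes named *IT and let the Failsafe plugin run them in a separate phase, which amounts to the same split.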
The CI has to run all tests
Do we need to run tests on the CI server if every developer runs the tests before pushing?
As explained, integration tests have to be executed by the CI tool.
As for unit tests, skipping their execution on the CI side is not a good idea either.
Of course developers have to run the tests before pushing to the SCM, but in practice you have no guarantee that this will always be done.
Besides, even in a perfect world where developers execute all tests before pushing, you could run into cases where the tests pass on the developer machine but fail on the CI server or on other developers' machines.
For example, a developer could introduce into the code base some absolute paths specific to his/her machine, or forget to replicate a database modification in the CI environment.
So running all tests (unit and integration) must not be optional for the CI.
Yes, they should be run twice, because if some developers don't run them, they would otherwise never run at all. Developers should run the tests locally to make sure that their code works correctly.
Your CI system, however, is your reference, so there's no room for one person to argue that it "works on my machine" while it fails for others. Looking ahead to continuous delivery, knowing this state on the CI/CD system becomes even more important.
You might hope that always and forever, every commit has been tested successfully locally (and that all workstations are the same and identical to the production systems...), but hope is a bad strategy.
I am aware of multiple methods one can use to keep certain types of tests out of a test playlist: filtering by TestCategory or class name, etc.
I am also aware that one can instruct TFS builds to only run certain categories or classes of test.
However, I find it quick and convenient to be able to check my unit test runs just by going into Test Explorer and clicking "run all". It's good practice to do this regularly and prior to check-in to ensure the build is likely to pass.
Is there a straightforward way I can configure my tests to ensure that "run all" just picks up the quick unit tests, and leaves the slower system and regression tests alone? Ideally, I'd like a method that can easily be applied to everyone else working on the code at the same time.
That's a little puzzling, since MSDN explains it very clearly: "Run All" simply means run all tests. There are also filters and other settings that can restrict a run to just the unit tests.
Also, as you have mentioned, you can run unit tests with your builds directly, so why do you need this? If you just want to make sure a check-in will not break the build, you can use a gated check-in directly. That also applies to everyone else.
I'm looking for a tool of some kind that I can integrate into our CI process to keep track of the manual steps we have.
As an example, we want to run through some manual test scripts on the integration server before pushing the version to the test server. Currently QA gets a notification when the build is done, executes the manual testing and then tells someone to push the version to test if it's okay.
What I would love to find is something that will keep track of when the manual tests have been successfully completed and then automatically push the version to test.
It should be possible to notify/trigger the tool from Visual Studio Online and have it trigger the next step in VSO as well.
I've been googling various things, but can't seem to find anything close to what I'm looking for. To-do list tools like Asana don't seem to have the integration point we need, but maybe I'm just missing something?
You can use the new Release Management tools in conjunction with test cases to get what you want.
In VSTS you can create test cases to reflect the steps of the tests that you want, and then create a Test Plan or Suite to reflect the list of tests that you need to run manually.
Then, as part of your release process, you could create a custom task that waits for a Test Run to be completed against your list. If that test run is all passes, move to the next step; if any tests fail, fail the release.
This should be fairly easy to set up; you just need to call the API to check the Test Run. If your testers use Microsoft Test Manager, you can also have the results associated with the build that you are deploying and get full traceability.
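A very rough sketch of what that custom task's polling logic could look like (the account, project, run id, API version, and JSON handling are all assumptions to check against the current VSTS REST API documentation):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Polls a VSTS Test Run until it is completed, so the release can continue or fail.
// Everything identifying the server (account, project, run id) is a placeholder.
public class WaitForTestRun {

    public static void main(String[] args) throws Exception {
        String account = "myaccount";            // hypothetical VSTS account name
        String project = "MyProject";            // hypothetical team project
        int runId = 42;                          // id of the manual test run to watch
        String pat = System.getenv("VSTS_PAT");  // personal access token for basic auth

        String url = "https://" + account + ".visualstudio.com/DefaultCollection/"
                + project + "/_apis/test/runs/" + runId + "?api-version=1.0";
        String auth = Base64.getEncoder()
                .encodeToString((":" + pat).getBytes(StandardCharsets.UTF_8));

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        // Poll once a minute; a real release task would also give up after a timeout.
        while (true) {
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            // Crude string check for illustration only; real code should parse the JSON
            // and compare the passed-test count against the total before deciding.
            if (body.contains("\"state\":\"Completed\"")) {
                System.out.println("Test run completed, inspect results: " + body);
                return;
            }
            Thread.sleep(60_000);
        }
    }
}
```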
You can try the Release Management tool in VSTS; it can achieve part of what you want.
I assume you have three steps in your build definition:
Build solution
Publish to integration server
Publish to test server
You can keep the first two steps in the build definition, then create a release definition in Release Management and add the third step to it. Configure the definition for "Continuous deployment" and link it to your build definition. Assign your QA to be the approver for this release task. Now a release will be created as soon as a build completes, but it will wait for the approver (your QA) to approve it. Once the testing has passed, approve the release and the build will be published to the test server; otherwise, reject it.
I have a big project, and I want to receive an email when I (and only I) commit code which breaks a unit test. We have a TeamCity project which runs all the unit tests (it takes more than 5 hours), and I have two problems setting this up:
The project already has broken tests (these tests will be fixed, but not now). I want to receive an email only when I break new tests, not when the old tests fail.
Running all the tests takes a long time (around 5-6 hours). During that time many developers commit changes, so when TeamCity runs the unit tests there are more than 30 changes in the build (made by up to 30 different developers). If only one developer broke a test, I want only that one developer to receive the email, not all 30.
How can I do that? If someone has an idea or advice, I would really appreciate it.
Thank you very much for your help.
I don't know of any CI system that points you to the specific commit that broke the tests (I know of one plugin that is in development for Jenkins right now).
There are already email notifications for a broken build. I don't think they should go to one specific person, because quality is the responsibility of the whole team.
I see some issues with the project structure:
1) Fix or ignore the failing tests.
2) Separate unit tests from integration tests and run them separately (see the sketch below).
3) Break the project into submodules with their corresponding tests. This will reduce the number of committers per build as well as the length of the feedback loop.
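A minimal sketch of point 2, assuming JUnit 4 (the marker interface and class names are just illustrative): marking the slow tests with a category lets TeamCity run two separate build configurations, one that excludes the category and one that includes only it.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface used only to label slow/integration tests.
interface IntegrationTests {}

public class CategorySeparationSketch {

    // Fast unit test: part of the quick build that every commit triggers.
    @Test
    public void addsNumbers() {
        assertEquals(4, 2 + 2);
    }

    // Slow test: excluded from the quick build (e.g. via Maven Surefire's
    // <excludedGroups> with the fully qualified name of IntegrationTests)
    // and run by a separate, less frequent TeamCity build configuration.
    @Category(IntegrationTests.class)
    @Test
    public void talksToTheRealDatabase() {
        // ...exercise the database-backed code here...
    }
}
```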
We have a typical web application stack. There are 120 Selenium (WebDriver) tests that are executed against the application, which takes roughly 1 hour. We execute them as part of our build chain "compile > unit test > integration test > GUI tests". The GUI tests take up a lot of the time and we are wondering how to structure them better. Currently they are "happy case and unhappy case" tests, and they are quite stable, i.e. they won't fail because of programmer errors.
We want to get the build times down, and the biggest part is the GUI tests. We want to do this based on "customer journeys", i.e. specify (together with the business people) some typical use cases and test those (happy path) instead of testing too much.
How do you structure your GUI tests? Here are some ideas that came to my mind:
only execute happy-path tests
do a "customer journey test", i.e. several happy-path tests in one ("clicking through the pages")
only take the "top 10" specified by the business (mission critical)
top 10 + "all the rest" as a nightly build (run once a night)
I would appreciate your ideas.
Thanks,
Marcel
Nighttime is a perfect time for Selenium tests - you just have to remember to put a "Don't turn me off!" sticky note on your computer :).
Also, there is always Selenium Grid for when the night becomes too short to run all the tests. With Grid, you can run your tests on several machines in parallel!
We have several test suites that are applicable to different situations. Before a major release (to test, to pre-live, to production), everything runs. Usually (on a daily or even hourly basis on rush days) only the "The Quickened Normal Path of a User Through the Application" suite runs. And if somebody "fixes" a large bug, then the tests related to that part of application are run.
An hour seems absolutely fine to me.
One suggestion could be to decide which of the tests count as smoke tests and are required to run every night - that is, tests that show the core functionality of your web application is still intact and working. Other, more detailed tests can be run at different times (once every few days?).
With that said, ours take around 2 hours - the only problem comes when one test has failed: you fix it, commit it, but then have to wait a very long time to verify it is fixed on the CI server.
TeamCity allows running builds in parallel on the same machine, so the GUI tests should not be in the build chain together with the unit and integration tests. UI tests should have a separate database and a separate build so they do not waste the time of developers, manual testers, or any other stakeholders. TeamCity will gather all the statistics, send emails on build failures, and so on.
The next step is parallelization. As Slanec said, you can use Grid (several machines are not required) with MbUnit (C#) or TestNG (Java). With the help of Grid you can cut test execution time, e.g. by a factor of 10, so it would take only 6(!) minutes to run all your tests.
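As a rough illustration of how that looks on the Java/TestNG side (the hub URL, application URL, and class name are placeholders, and the example assumes the Selenium 2/3 Java client where DesiredCapabilities is still available):

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

// Each test method gets its own browser session from the Grid hub, so TestNG can run
// several of them at the same time (configure parallel="methods" and thread-count
// on the <suite> element in testng.xml).
public class CheckoutJourneyTest {

    private WebDriver driver;

    @BeforeMethod
    public void startBrowser() throws Exception {
        // The hub URL is a placeholder for wherever your Grid hub actually runs.
        driver = new RemoteWebDriver(new URL("http://grid-hub:4444/wd/hub"),
                                     DesiredCapabilities.firefox());
    }

    @Test
    public void happyPathCheckout() {
        driver.get("http://your-app-under-test/shop");
        // ...click through the journey and assert on what the customer should see...
    }

    @AfterMethod
    public void stopBrowser() {
        driver.quit();
    }
}
```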
You can also combine some of your tests into bigger ones (but this will increase the time needed to discover the reasons for a failure and make the tests harder to maintain).
After these steps, the GUI tests can be executed after each source commit and give fast feedback on the state of the application.
Great question, great answers.
An extra consideration is that you could prioritize your 120 GUI tests: you can run them in an order such that the most important ones, or those that are most likely to fail, run first.
This won't help to get the build times down, but it will help to get useful feedback from a build faster.
This prioritization (your top 10) need not be fixed, but can change per release / iteration / completed story / day, etc.
For example, you may want to run the newest gui tests first. Or those that were changed most recently. Or the ones covering most of the code that was most recently changed.
As far as I know, there is no tooling out there that supports this out of the box, although there is quite a lot of (academic) research going on in the area of test case prioritization.
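As a very small sketch of the idea (the class names and failure timestamps are made up, and where they would really come from - CI history, version control metadata - is left open), prioritization can be as simple as sorting the test classes by how recently they last failed:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Orders test class names so that the ones that failed most recently run first;
// tests with no recorded failure sort to the end.
public class TestPrioritizer {

    public static List<String> prioritize(List<String> testClasses,
                                          Map<String, Long> lastFailureEpochMillis) {
        List<String> ordered = new ArrayList<>(testClasses);
        ordered.sort(Comparator.comparingLong(
                (String name) -> lastFailureEpochMillis.getOrDefault(name, 0L)).reversed());
        return ordered;
    }

    public static void main(String[] args) {
        // Purely illustrative data: which test failed when.
        Map<String, Long> failures = Map.of(
                "CheckoutJourneyTest", 1_700_000_000_000L,  // failed recently
                "LoginTest", 1_600_000_000_000L);           // failed a while ago

        List<String> order = prioritize(
                List.of("SearchTest", "LoginTest", "CheckoutJourneyTest"), failures);
        System.out.println(order);  // [CheckoutJourneyTest, LoginTest, SearchTest]
    }
}
```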