Can't order test cases in a requirement-based test suite - Test Manager 2010

In Test Manager 2010 it seems that I can't order test cases in a requirement-based test suite (ordering test cases in a normal test suite works fine).
Can anyone explain why this isn't allowed, or suggest a workaround?
Thanks a lot

How to order / reorganize the order of test cases?
Workaround:
Create two suites, one of them query-based.
Customize the query of the query-based suite and run it.
Select all needed test cases and copy them to the clipboard.
Paste them into the manual suite and order them using "Order" from the menu bar, changing the values in the Order column.
Delete the query-based suite.
Have fun.
http://msdn.microsoft.com/de-de/library/dd997699.aspx

Just ran into this myself. It turns out that requirement-based test suites are a special case of a query-based suite, which also cannot be ordered. All test cases linked to a requirement / user story will show in the corresponding test suite.
As for work-arounds to the default sort order (apparently, the test case's work item ID), I couldn't find any. However, you could approximate the requirement-based query with a regular query, and then use the query-based suite's sort order capability. There are two problems with this: (1) the requirements that a test case is associated with are not available in the query editor, so you'd have to use some other criteria; and (2) you still don't have ad-hoc test case ordering control like you would with a non-query test suite.
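For example, a hypothetical pair of query clauses (the area path is a placeholder; the point is only that the linked requirement itself cannot be one of the criteria):

Work Item Type = Test Case
And Area Path Under MyProject\FeatureX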
I didn't find anything about adding any ordering capability to requirement-based test suites in an upcoming version, or in any third-party tool. Might be something to look into creating.
refs:
http://blogs.msdn.com/b/vstsqualitytools/archive/2009/06/04/no-more-missed-requirements.aspx
http://msdn.microsoft.com/en-us/library/dd286578.aspx

Related

Sphinx docs including unit test output

This is a how-to/best-practice question.
I have a code base with a suite of unit tests run with pytest
I have a set of *.rst files which provide explanation of each test, along with a table of results and images of some mathematical plots
Each time the pytest suite runs, it dynamically updates the *.rst files with the results of the latest test data, updating numerical values, time-stamping the tests, etc
I would like to integrate this with the project docs. I could:
Build these rst files separately with sphinx-build whenever I want to view the test results [this seems bad, since it's labor-intensive and not automated]
Tell Sphinx to render these pages separately and include them in the project docs [better, but I'm not sure how to configure this]
Have a separate set of Sphinx docs for the test results which I can build after each run of the test suite
Which approach (or another approach) is most effective? Is there a best practice for doing this type of thing?
Maybe take a look at Sphinx-Test-Reports, which reads in all information from JUnit-based XML files (pytest supports this) and generates the output during the normal Sphinx build phase.
So you are free to add custom information around the test results.
Example from the webpage:
.. test-report:: My Report
   :id: REPORT
   :file: ../tests/data/pytest_sphinx_data_short.xml
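For reference, pytest can generate the JUnit-style XML file that this directive consumes; the output path below is just an example:

pytest --junitxml=tests/data/report.xml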
So the complete answer to your question: take none of the given approaches and let a Sphinx extension do it at build time.

How to handle multi language website in jmeter script

I have a website which supports English and French. I have already created a script for the English website, but now they want me to test against the French website as well. How can I extend my script so that the assertions do not fail when I test in either language?
You can easily add flexibility to your assertions so that they check for either the English OR the French word in the response.
For instance, if you want a single assertion to check whether the word Welcome OR Bienvenue is present in the response, you can combine them using a pipe, as follows:
Welcome|Bienvenue
As per the How to Use JMeter Assertions in 3 Easy Steps guide, the Response Assertion in "Contains" and "Matches" mode accepts Perl5-style regular expressions, so you should have enough flexibility to check both the English and the French versions of the website.
In short
Your tests should be language-agnostic, especially performance-/load-tests.
Explanation
UI tests should use generic selectors such as tags (<p>, <div>, <table>), element IDs (<div id="basket">) or CSS classes (<p class="message">) for looking up elements. As you're using JMeter, I assume you're writing some sort of performance/load test. If so, then you most likely want to look for some action elements to progress your tests.
If you cannot avoid some language dependency (for example, localized URL paths), I would suggest using JMeter variables that are set according to the language you're testing with, for example as sketched below.
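A minimal sketch of that idea, assuming the test plan defines a lang variable as ${__P(lang,en)} (the __P function reads a JMeter property, defaulting here to en) and uses ${lang} wherever paths are localized, e.g. /${lang}/home; the plan file name is a placeholder:

jmeter -n -t site_test.jmx -Jlang=en
jmeter -n -t site_test.jmx -Jlang=fr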
In contrast to performance tests, acceptance or general web UI tests would incorporate testing of some labels. Selenium or other HTML capturing tests are usually backed by some test code written by you or your team. That code can rely on resource bundles, translations, etc. so you can test for the correct labels.
HTH, Mark

How to make one feature run before another

I have two Cucumber features (DeleteAccountingYear.feature and AddAccountingYear.feature).
How can I make the second feature (AddAccountingYear.feature) run before the first one (DeleteAccountingYear.feature)?
I concur with @alannichols about tests being independent of each other. That's a fundamental aspect of an automation suite; otherwise we will end up with an unmaintainable, flaky test suite.
Needing to run a certain feature file before another looks to me like a test-design issue.
Cucumber provides a few options to solve issues like this:
a) Is DeleteAccountingYear.feature really a feature of its own? If not, you can use Cucumber's Background: option. The steps provided in the background will be run for each scenario in that feature file. So your AddAccountingYear.feature would look like this:
Feature: AddingAccountingYear

  Background:
    Given I have deleted the accounting year

  Scenario: add new accounting year
    Then I add a new accounting year
b) If DeleteAccountingYear.feature is indeed a feature of its own and needs to be in its own feature file, then you can use setup and teardown functions. In Cucumber this can be achieved using hooks. You can tag AddAccountingYear.feature with a certain tag, say @doAfterDeleteAccountYear. Now, from a Before hook, you can do the required setup for this specific tag. The Before hook (for Ruby) will look like:
Before('@doAfterDeleteAccountYear') do
  # Call the function to delete the accounting year
end
If deleting the accounting year is written as a function, then the only thing required is to call that method in the Before hook; a sketch is shown below. This way the code will be DRY-compliant as well.
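A minimal sketch of that setup, assuming a hypothetical delete_accounting_year helper that wraps whatever API calls or shared step logic actually remove the year:

# features/support/hooks.rb
def delete_accounting_year
  # hypothetical helper: call your app's API or shared step logic here
end

Before('@doAfterDeleteAccountYear') do
  delete_accounting_year
end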
If these options don't work for you, another way of forcing the order of execution is to use a batch/shell script. You can add an individual cucumber command for each feature, in the order you would like them to execute, and then just run the script. The downside is that a separate report will be generated for each feature file; but this is something I wouldn't recommend anyway, for the reasons mentioned above.
From Justin Ko's website - https://jkotests.wordpress.com/2013/08/22/specify-execution-order-of-cucumber-features/ the run order is determined in the following way:
Alphabetically by feature file directory
Alphabetically by feature file name
Order of scenarios within the feature file
So to run one feature before the other, you could change the name of the feature file, or put it in a separate feature folder with a name that sorts first alphabetically.
However, it is good practice to make all of your tests independent of one another. One of the easiest ways to do this is to use mocks to create your data (i.e. the data you want to delete), but that isn't always an option. Another way would be to create the data you want to delete in the setup of the delete tests. The downside of doing this is that it's a duplication of effort, but it won't matter what order the tests run in. This may not be an issue now, but with a larger test suite and/or multiple coders using the test repo it may be difficult to maintain the test ordering based solely on alphabetical sorting.
Another option would be to combine the add and delete tests. This goes against the general rule that one test should test one thing, but it is often a pragmatic approach if your tests take a long time to run and adding the add-data step to the setup for delete would add a lot of time to your test suite.
Edit: After reading that link to Justin Ko's site, you can specify the features to run when you invoke cucumber, and it will run them in the order you give them. For any features whose order you don't care about, just put the whole feature folder at the end; cucumber will run through them, skipping any that have already been run. Copy-paste example from the link above:
cucumber features\folder2\another.feature features\folder1\some.feature features
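Applied to this question (assuming both files live directly under features\), that would be:
cucumber features\AddAccountingYear.feature features\DeleteAccountingYear.feature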

How can I configure the Visual Studio 2010 "Test Results" window to automatically expand the "Group by" sections after a test run?

I am currently engaged in a large C#.NET 4.0 project and am taking a TDD approach.
In our unit tests we have adopted a naming pattern based on the one in Roy Osherove's "The Art of Unit Testing" book. Essentially, for each class "XXX" we have a corresponding "XXXFacts" test class, and each [TestMethod] method in that class is named with a pattern of "[Method/Prop name]_[State/Result]_[Preconditions]", for example "AccessLevel_IsInvalid_WhenNotAuthenticated".
Now, initially, to see the test results I just configured the Test Results window to add the ClassName column, which looks OK but takes up a lot of horizontal screen real estate.
I then discovered the "Group By" option in the window. This does as the name suggests and groups the output by class name, so I can remove the repeating column and gain more room for any error message text.
However, every time I run my tests, the view I am given has the group-by contents collapsed.
What I would like to do is somehow configure the Visual Studio 2010 Test Results window to automatically expand the "Group by" sections, so the groups are open immediately after a test run.
At the very least, if it could expand any group that contains a FAILED test, that would be a massive plus.
I know this is just plain lazy, but opening all those groups after every run on a TDD project is already getting old!
I have looked but failed to find a configuration option for this in the Tools dialog, but perhaps I've been looking in the wrong place. I do hope so.

Automated test with Ruby: select an option from drop-down list

I'm writing automated tests with Ruby (Selenium framework) and I need to know how I can select an option from a drop-down list.
Thanks in advance!
Building on floehopper's answer:

selenium.add_selection(locator, value)

or

selenium.select(locator, value)

(In the Ruby client, add_selection is for multi-select lists; select is for single-select lists.) You almost certainly want "id=my_select_box_id" (with the quotes) as the locator, though other locator formats will work. value is the literal text value (not the display value) of the option to be selected.
It sounds like you are writing a functional test here. Selecting it probably won't do you much good on its own. You need to submit the form in order to test the controller. :)
It might help people answering to know which testing framework you are using, because there are several to choose from.
If you are using RSpec, check out this screencast.
Hope that helps anyway.
Aside from functional tests, if you're looking for something that acts a bit more like the real app, have a look at Webrat. For non-AJAX integration tests, it has a very nice DSL for selecting your DOM elements and taking appropriate actions against them (link-clicking, form-filling, etc.).
On the other hand, if your app is an external web app that you just want to run acceptance tests against, you can also check out Selenium or Watir.
Note that Webrat is heavily web-framework-based, whereas Selenium and Watir use the browser to interact with your web app directly (like a real user).
I think you want this command:
select(selectLocator, optionLocator)
selectLocator identifies the drop down list
optionLocator identifies the option within the list
Easiest way of doing this: select(selectLocator, optionLocator), as suggested above.
selectLocator: name or XPath of the drop-down element
optionLocator: the option to select within the list, e.g. label=..., value=..., or index=...
E.g.
selenium.select "Language", "label=Ruby"
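For what it's worth, if you are on the newer selenium-webdriver gem rather than the Selenium RC API shown above, the equivalent goes through the Select support class; the URL, element id, and option text here are placeholders:

require "selenium-webdriver"

driver = Selenium::WebDriver.for :firefox
driver.get "http://example.com/form"            # placeholder URL

# Wrap the <select> element (id "language" is a placeholder)
dropdown = driver.find_element(id: "language")
select = Selenium::WebDriver::Support::Select.new(dropdown)

select.select_by(:text, "Ruby")    # by visible label
# select.select_by(:value, "rb")   # or by the value attribute
# select.select_by(:index, 0)      # or by position

driver.quit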
