I have two Cucumber features (DeleteAccountingYear.feature and AddAccountingYear.feature).
How can I make the second feature (AddAccountingYear.feature) run before the first one (DeleteAccountingYear.feature)?
I concur with @alannichols about tests being independent of each other. That's a fundamental aspect of an automation suite. Otherwise we will end up with an unmaintainable, flaky test suite.
Needing to run a certain feature file before another one looks to me like a test design issue.
Cucumber provides a few options to solve issues like this:
a) Is DeleteAccountingYear.feature really a feature of its own? If not, you can use Cucumber's Background: option. The steps provided in the background will be run for each scenario in that feature file. So your AddAccountingYear.feature will look like this:
Feature: AddingAccountingYear

  Background:
    Given I have deleted accounting year

  Scenario: add new accounting year
    Then I add new account year
b) If DeleteAccountingYear.feature is indeed a feature of its own and needs to be in its own feature file, then you can use setup and teardown functions. In Cucumber this can be achieved using hooks. You can tag AddAccountingYear.feature with a certain tag, say @doAfterDeleteAccountYear. Now from the Before hook you can do the required setup for this specific tag. The Before hook (for Ruby) will look like:
Before('@doAfterDeleteAccountYear') do
  # Call the function to delete the account year
end
If the delete-account-year logic is written as a function, then the only thing required is to call that function in the Before hook. This way the code will be DRY compliant as well.
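A minimal sketch of that idea (the module, file names, and the delete_accounting_year helper are illustrative; World mixes the helper into both step definitions and hooks):

# features/support/accounting_helpers.rb (illustrative file name)
module AccountingHelpers
  # Single place where the deletion logic lives, shared by the
  # step definition and the Before hook, keeping the code DRY.
  def delete_accounting_year
    # ... actual deletion logic goes here ...
  end
end
World(AccountingHelpers)

# features/step_definitions/accounting_steps.rb
Given(/^I have deleted accounting year$/) do
  delete_accounting_year
end

# features/support/hooks.rb
Before('@doAfterDeleteAccountYear') do
  delete_accounting_year
end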
If these options don't work for you, another way of forcing the order of execution is a batch/shell script: add an individual cucumber command for each feature in the order you would like to execute, then just run the script (a sketch follows). The downside is that a separate report will be generated for each feature file. And it is something I wouldn't recommend anyway, for the reasons mentioned above.
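For illustration, such a wrapper script could be as simple as this (paths are illustrative):

#!/bin/sh
# Run each feature in a fixed order, one cucumber process per feature.
cucumber features/DeleteAccountingYear.feature
cucumber features/AddAccountingYear.feature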
From Justin Ko's website (https://jkotests.wordpress.com/2013/08/22/specify-execution-order-of-cucumber-features/), the run order is determined in the following way:
Alphabetically by feature file directory
Alphabetically by feature file name
Order of scenarios within the feature file
So to run one feature before the other, you could change the name of the feature file, or put it in a separate feature folder with a name that sorts alphabetically first, as in the layout below.
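For example (the numeric folder prefixes are illustrative):

features/1_add/AddAccountingYear.feature
features/2_delete/DeleteAccountingYear.feature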
However, it is good practice to make all of your tests independent of one another. One of the easiest ways to do this is to use mocks to create your data (i.e. the data you want to delete), but that isn't always an option. Another way would be to create the data you want to delete in the setup of the delete tests. The downside of doing this is that it duplicates effort, but it won't matter what order the tests run in. This may not be an issue now, but with a larger test suite and/or multiple coders using the test repo, it may be difficult to maintain the test ordering based solely on alphabetical sorting.
Another option would be to combine the add and delete tests. This goes against the general rule that one test should test one thing, but it is often a pragmatic approach if your tests take a long time to run and adding the add-data step to the setup for delete would add a lot of time to your test suite.
Edit: after reading that link to Justin Ko's site - you can specify the features to run on the cucumber command line, and it will run them in the order that you give. For any whose order you don't care about, just put the whole features folder at the end; cucumber will run through them, skipping any that have already been run. Copy-paste example from the link above -
cucumber features\folder2\another.feature features\folder1\some.feature features
This is a how-to/best-practice question.
I have a code base with a suite of unit tests run with pytest
I have a set of *.rst files which provide explanation of each test, along with a table of results and images of some mathematical plots
Each time the pytest suite runs, it dynamically updates the *.rst files with the results of the latest test data, updating numerical values, time-stamping the tests, etc
I would like to integrate this with the project docs. I could:
build these rst files separately with sphinx-build whenever I want to view the test results [this seems bad, since it's labor-intensive and not automated]
tell Sphinx to render these pages separately and include them in the project docs [better, but I'm not sure how to configure this]
have a separate set of Sphinx docs for the test results which I can build after each run of the test suite
Which approach (or another approach) is most effective? Is there a best practice for doing this type of thing?
Maybe take a look at Sphinx-Test-Reports, which reads in all the information from JUnit-based XML files (pytest supports this) and generates the output during the normal Sphinx build phase.
So you are free to add custom information around the test results.
Example from the webpage:
.. test-report:: My Report
   :id: REPORT
   :file: ../tests/data/pytest_sphinx_data_short.xml
So the complete answer to your question: take none of the given approaches and let a Sphinx extension do it at build time.
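For reference, pytest can produce the JUnit XML that Sphinx-Test-Reports consumes; a typical invocation (output path illustrative, matching the directive above) is:

pytest --junitxml=tests/data/pytest_sphinx_data_short.xml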
As I learned from the DevGuide, testing ReSharper plugins works as follows:
The plugin is loaded and a test input file is passed to it
The plugin performs its actions on the passed file
ReSharper's test environment writes the results of the plugin's actions to a .tmp file in a special format that depends on the type of functionality tested (for example, if we test completion, the .tmp file will contain the list of generated completion items)
ReSharper's test environment compares the .tmp file with a .gold file to decide if the test failed or succeeded
But I need the following scenario. The first two steps are the same as the above ones, then:
I write code that obtains the results of the plugin's actions and checks whether they are what I expect, so I can make the test fail if needed.
How can I achieve this?
I need it because I have code that uses the AST generated by ReSharper to build some graphs, and I want to test whether the graphs are built correctly.
Yes, you can do this. You need to create your own test base class, instead of using one of the provided ones.
There is a hierarchy of base classes, each adding extra functionality. Usually, you'll derive from something like QuickFixAvailabilityTestBase or QuickFixTestBase, which add the functionality for testing quick fixes. These are the classes that will do something and write the output to a .tmp file that is then compared to the .gold file.
These classes themselves derive from something like BaseTestWithSingleProject, which provides the functionality to set up an in-memory solution and project populated with files you specify in your test, or BaseTestWithTextControl, which also gives you a text control for the file you're testing. If you derive from one of these classes directly (or via your own custom base class), you can perform the action you need for the actual test, and either assert something in memory or write the appropriate text to the .tmp file to compare against the .gold file.
You should override the DoTest method. This will give you an IProject that is already set up, and you can do whatever you need to in order to test your extension's functionality. You can use project.Solution.GetComponent<> to get at any shell or solution component, and use the ExecuteWithGold method to execute something, write to the .tmp file and have ReSharper compare to the .gold file for you.
How do I conditionally skip a scenario?
For example, I wish to continue a scenario only if certain conditions are met, but I do not want it to register as a failure if they are not.
This is an issue I had. The tests I write run against a UI whose back-end database is constantly changing, and I am currently unable to have static data in it.
This means that sometimes there is no data for the test.
Not a pass, not a fail, just unable to run.
The way that I found to work best was to invoke Cucumber's pending.
example test:
Scenario: Test the application
  Given my application has data
  When I test something
  Then I get a result
example step def:
Given /^my application has data$/ do
  pending unless application.has_data?
end
These are the kinds of results I can see:
201 scenarios (15 pending, 186 passed)
1151 steps (15 pending, 1136 passed)
It's worth noting that I have extra debugging and have these tests tagged so that at any time I can run these pending tests again.
Hope this helps,
Ben.
For anyone still looking for an answer to this:
Apart from using pending, or a specific profile to skip scenarios with certain tags, there are at least two more ways to achieve this.
I can understand why you would need this, as I had a similar problem and found a solution, hence worth sharing.
In my case, I had a piece of functionality expected to be available on 3 of 10 devices, and expected not to be available on the remaining 7.
Caveats with using 'pending' to skip:
Since the tests and code were implemented, it didn't feel right to mark steps as pending.
It caused confusion, as it was difficult to distinguish genuinely pending scenarios from skipped-but-marked-pending scenarios at the end of a run.
Some CI jobs (Jenkins/Hudson) might be configured to fail for pending scenarios, hence causing more trouble.
So I rather wanted to just skip them during execution, depending on which browser is being used. I also didn't want to have too many profiles specific to certain browsers/devices.
Solution:
Use cucumber.yml to skip tagged scenarios conditionally
Here's a little-known but interesting fact about Cucumber (from https://github.com/cucumber/cucumber/wiki/cucumber.yml):
The cucumber.yml file is preprocessed by ERb; this allows you to use Ruby code to generate values in the cucumber.yml file.
Building on this, tag your scenarios with something unique, say @conditional.
At the beginning of your cucumber config (cucumber.yml), apply your conditional logic outside of any profiles mentioned:
<% included = (ENV['BROWSER'] =~ /chrome/) ? "-t @conditional" : "-t ~@conditional" %>
included is just a variable, which will hold the tags to include/exclude depending on the condition.
Now use this conditional variable in the default profile:
default: <%= included %>
So now your default profile will use the included/excluded tests as identified by your conditional logic.
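Putting the two pieces together, the relevant part of cucumber.yml is just:

<% included = (ENV['BROWSER'] =~ /chrome/) ? "-t @conditional" : "-t ~@conditional" %>
default: <%= included %>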
(More complicated and less elegant) Use rake tasks for cucumber execution:
Conditionally choose tags to include/exclude within your rake task, and pass them to the cucumber run; a sketch follows.
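A minimal sketch of such a rake task, reusing the BROWSER variable and @conditional tag from the example above:

# Rakefile
require 'cucumber/rake/task'

Cucumber::Rake::Task.new(:conditional_features) do |t|
  # Include or exclude @conditional scenarios depending on the browser.
  tag = ENV['BROWSER'] =~ /chrome/ ? '@conditional' : '~@conditional'
  t.cucumber_opts = "--tags #{tag}"
end

Run it with: rake conditional_features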
Hope this helps.
You could check the condition before you start Cucumber, then use a profile that skips the scenarios with certain tags. Put this in your cucumber.yml:
default: --tags ~@wip --tags ~@broken --no-source --color
limited: --tags @core --tags ~@wip --tags ~@broken --no-source --color
Replace @core with whatever tag you use for the cukes you want to run (or use ~ to exclude cukes). Then run the limited profile from a shell script that checks the conditions:
cucumber -p limited
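For example (the data-availability check is hypothetical):

#!/bin/sh
# Run the full suite when the condition is met, otherwise only the limited profile.
if [ -n "$FULL_DATA_AVAILABLE" ]; then
  cucumber
else
  cucumber -p limited
fi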
Please see this solution, which truly skips the scenario instead of throwing a pending error:
Before do |scenario|
  scenario.skip_invoke!
end
I am tagging my scenarios, and then in my "step_definitions/hooks.rb" file, I have something like this:
Before('@proxy') do
  skip_this_scenario unless proxy_running?
end
scenario.skip_invoke!, which was mentioned in another answer, seems to be deprecated.
In Test Manager 2010, it seems that I can't order test cases in a requirement-based test suite (ordering test cases in a normal test suite works fine).
Can anyone explain why this is forbidden, or suggest a workaround?
Thanks a lot
How to order / reorganize the order of test cases?
Workaround:
create 2 suites, one of them query-based
customize the query of the query-based suite and run it
select all needed test cases and copy them to the clipboard
insert them into the manual suite and order them by using "Order" from the menu bar, changing the values in the order criteria
delete the query-based suite
have fun
http://msdn.microsoft.com/de-de/library/dd997699.aspx
Just ran into this myself. It turns out that requirement-based test suites are a special case of query-based suites, which also cannot be ordered. All test cases linked to a requirement / user story will show in the corresponding test suite.
As for work-arounds to the default sort order (apparently, the test case's work item ID), I couldn't find any. However, you could approximate the requirement-based query with a regular query, and then use the query-based suite's sort order capability. There are two problems with this: (1) the requirements that a test case is associated with are not available in the query editor, so you'd have to use some other criteria; and (2) you still don't have ad-hoc test case ordering control like you would with a non-query test suite.
I didn't find anything about adding any ordering capability to requirement-based test suites in an upcoming version, or in any third-party tool. Might be something to look into creating.
refs:
http://blogs.msdn.com/b/vstsqualitytools/archive/2009/06/04/no-more-missed-requirements.aspx
http://msdn.microsoft.com/en-us/library/dd286578.aspx
When I run Cucumber, it displays the possible steps that I should define; an example from the RSpec book:
1 scenario (1 undefined)
4 steps (4 undefined)
0m0.001s
You can implement step definitions for undefined steps with these snippets:
Given /^I am not yet playing$/ do
pending
end
When /^I start a new game$/ do
pending
end
Then /^the game should say "Welcome to CodeBreaker"$/ do
pending
end
Then /^the game should say "Enter guess:"$/ do
pending
end
Is there a way to have it automatically create the step definitions file, so I don't have to rewrite or copy-paste by hand and can just customize the definitions to be more generic?
Cucumber doesn't offer this feature. Probably because you would have to tell it where to put the step definitions file, and what to name it.
Like Kevin said, Cucumber would have to know the name of the file to put it in, and there are no good defaults to go with, other than using the same file name as the feature file. And that is something I consider an antipattern: http://wiki.github.com/aslakhellesoy/cucumber/feature-coupled-steps-antipattern
IntelliJ IDEA or RubyMine does exactly what you are asking for:
Detects missing step definitions
Creates missing step definitions in a new file (you choose the name of the file) or in one of the existing step definition files
Highlights matched step parameters
see http://i48.tinypic.com/10r63o4.gif for a step-by-step picture
Enjoy
There is a possibility this kind of feature could be useful, but as Kevin says, it doesn't exist at present. It could also get quite messy, quite quickly.
Maybe you already do this, but there's nothing stopping you cutting and pasting the output directly into your text editor, or even piping the output straight to your text editor if you're so inclined. Then at least you're getting most of the way there, bar creating and naming the file.
Try this: https://github.com/unxusr/kiwi. It auto-generates your feature file and creates the step definitions file for you; you just fill in the steps with code.
A later version will write the code of the steps and run the tests, all of that automagically.
You can use a workaround to generate the steps file:
all you have to do is run Cucumber on a feature that doesn't have defined steps yet, identifying the specific feature with one of the following commands:
1) using a path
bundle exec cucumber {PATH}
note the path would start with features/...
for example
features/users/login.feature
2) using tags
bundle exec cucumber --tags=@{TAG}
note the tag should be above your scenario in the feature file
for example
@TAG
Scenario:
And you will have the suggested steps in the console with pending status