Dealing with expected failures in tests - ruby

Background: I have a relatively large Cucumber test suite. The problem is that several test cases fail due to known bugs that probably won't be fixed for a month or two. This means that whenever I or someone else runs the test suite, we get several failures and then have to spend time digging through the test results and figuring out which ones were expected and which ones are new.
The quick and dirty solution is to simply comment the test cases out. The problem I have with that is that when the bugs are fixed, there is no guarantee that the commented-out test cases will be uncommented.
Question: Is there a simple method in Cucumber to separate the expected failures from the unexpected ones?

You can tag them as @wip.
The default cucumber call will ignore the @wip scenarios:
@wip
Scenario: Something
Btw:
rake cucumber:ok   # will run all the scenarios except the @wip ones
rake cucumber:wip  # will run just the @wip tagged scenarios
rake cucumber      # same behavior as rake cucumber:ok
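If your project doesn't have the generated Rake tasks yet, they boil down to something like the sketch below (an approximation, not the exact generated file; the tag filter is written '--tags ~@wip' on older Cucumber versions and '--tags "not @wip"' on newer ones):
require 'cucumber/rake/task'

namespace :cucumber do
  Cucumber::Rake::Task.new(:ok, 'Run scenarios that are not tagged @wip') do |t|
    t.cucumber_opts = "--tags 'not @wip'"
  end

  Cucumber::Rake::Task.new(:wip, 'Run only the @wip scenarios') do |t|
    t.cucumber_opts = '--tags @wip'
  end
end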

Related

Output flakey scenarios in Cucumber output

I'm running tests in cucumber using the --retries N option to reattempt failed tests N times to catch some tests which are failing inconsistently. Currently the summary after running these tests in the terminal is something like this:
100 scenarios (2 failed, 5 flaky, 1 skipped, 98 passed)
588 steps (9 failed, 24 skipped, 555 passed)
11m45.859s
Failing Scenarios:
cucumber features/some_feature.feature:13 # Scenario: AC.1 Some scenario
cucumber features/some_feature.feature:54 # Scenario: AC.6 Some other scenario
This lets me know what's failing; however, I'd also like a list of the flaky scenarios to help me diagnose what is failing inconsistently. Is there a way to set up Cucumber so that this is the case?
The scenarios listed are the scenarios that fail the build (making the exit code non-zero). If you use the option "--strict" or "--strict-flaky", the flaky scenarios will also be listed in the summary ("--strict" will also list the pending and undefined scenarios).
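For example (a sketch using the option mentioned above; confirm the exact flag names with cucumber --help for your version):
cucumber --strict-flaky features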
It's currently not possible to see flaky scenarios in the summary.
In order to change this, someone would have to submit a pull request changing console_issues.rb, and possibly the associated tests.

Need to execute a step (each feature may have diff step) only once before a Cucumber feature file

I want to execute a specific step only once before each Cucumber feature file. A Cucumber feature file can have multiple scenarios. I don't want Background steps here, because those execute before each scenario. Every feature file can have a step (different in each feature) which should execute only once, so I can't put that step into a Before hook, as I have a specific step for each of my 20 features. A sample Gherkin is shown below:
Scenario: This will execute only once before all scenarios in this current feature
When Navigate to the Page URL
Scenario: scenario 1
When Some Action
Then Some Verification
Scenario: scenario 2
When Some Action
Then Some Verification
Scenario: scenario 3
When Some Action
Then Some Verification
I hope you guys understand my question. I am using Ruby, Capybara, and Cucumber in my framework.
Cucumber doesn't really support what you are asking for. A way to implement this with Cucumber hooks would be to use these two pieces of documentation:
https://github.com/cucumber/cucumber/wiki/Hooks#tagged-hooks
https://github.com/cucumber/cucumber/wiki/Hooks#running-a-before-hook-only-once
You would tag all your feature files appropriately, and then implement tagged Before hooks that execute once per feature tag (see the sketch after the documentation excerpt below).
It's not beautiful, but it accomplishes what you want without waiting on a feature request (or using a different tool).
This can be achieved by associating a Before, After, Around or AfterStep hook with one or more tags. Example:
Before('@cucumis, @sativus') do
  # This will only run before scenarios tagged
  # with @cucumis OR @sativus.
end
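Putting those two together, a minimal sketch of the once-per-feature-tag pattern (the @orders_feature tag, the orders_page_url helper and the visit call are illustrative assumptions, not from the question):
$orders_setup_done = false

Before('@orders_feature') do
  next if $orders_setup_done
  # One-off setup for this feature, e.g. navigating to its page URL:
  visit(orders_page_url)   # hypothetical Capybara call and URL helper
  $orders_setup_done = true
end
You would repeat the tag/flag pair for each of your ~20 features, which is exactly the "not beautiful" part mentioned above.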
This must be in the top 5 most frequent questions on the Cucumber mailing list. You can do what you want with hooks, but you almost certainly should not. The execution time you save by taking this approach is totally outweighed by the time and effort it will take to debug the intermittent failures that such an approach generally leads to.
One of the foundations of creating automated tests is to start from a consistent place. When you have code that sets up key things for your scenarios, but that is not run before every scenario, you have to do the following:
Ensure your setup code creates a consistent base to start from (this is easy)
Ensure that every scenario that uses this base, does not modify the base in any way at all (this is very very difficult)
In your example you'd have to ensure that every action in every scenario ends up back on your original page URL. If just one scenario fails to do that, you will end up with intermittent failures, and you will have to go through every scenario to find the culprit.
In general it is much easier and more effective to put your effort into making your setup code FAST enough that you are not worried about running it before each scenario.
Yes, this can be done by passing the actual value in your feature file and using "(\\d+)" in your Java step definition. Look at the code shown below for a better understanding.
Scenario: some test scenario
Given whenever a value is 50
In myFile.java, write the step definition as shown below:
@Given("whenever a value is (\\d+)$")
public void testValueInVariable(int value) throws Throwable {
    assertEquals(value, 50);
}
You can also have a look at the link below to get a clearer picture:
https://thomassundberg.wordpress.com/2014/05/29/cucumber-jvm-hello-world/
Some suggestions have already been given, especially the one quoting the official documentation, which uses a global variable to store whether or not the initial setup has been run.
For my case, where multiple features were executed one after another, I had to reset the variable again by checking whether scenario.feature.name had changed:
$feature_name ||= ''
$is_setup ||= false

Before do |scenario|
  # scenario.feature may not be available for every scenario type, hence the rescue.
  current_feature_name = scenario.feature.name rescue nil
  if current_feature_name != $feature_name
    # We have moved on to a new feature file, so reset the setup flag.
    $feature_name = current_feature_name
    $is_setup = false
  end
end
$is_setup can then be used in steps to determine whether any initial setup needs to be done.
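For illustration, a sketch of a step that consumes the flag (the step text comes from the question above; the visit call and URL are placeholders):
When(/^Navigate to the Page URL$/) do
  unless $is_setup
    visit('/the/page/url')   # hypothetical Capybara navigation
    $is_setup = true
  end
end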

How to fail fast only specific rspec test script?

I have a test suite of rspec tests which are divided into different files.
Every file represents one test scenario with some number of test steps.
Now, in some particular tests, it can happen that a specific step fails, and it is too time-consuming and unnecessary to run the rest of the steps in that scenario.
I know there is a --fail-fast option in rspec, but if I'm running tests like rspec spec/*, that means that when the first step fails in any script, the complete run is aborted.
I'm just looking for a mechanism to abort execution of that specific test scenario (test script) when a failure happens, but to continue executing the other test scenarios.
Thanks for the help,
Bakir
Use the RSpec-instafail gem.
According to its documentation, it:
Show failing specs instantly. Show passing spec as green dots as usual.
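If you need the remaining examples in a failing file to actually be skipped, rather than just reported instantly, one way to approximate it with plain RSpec hooks is sketched below. This is a workaround, not a built-in RSpec feature, and it assumes each scenario file is a single example group whose examples run in defined order:
RSpec.configure do |config|
  config.after(:each) do |example|
    # Remember that something in this group (i.e. this scenario file) failed.
    if example.exception
      example.example_group.instance_variable_set(:@__step_failed, true)
    end
  end

  config.before(:each) do |example|
    # Skip only the remaining steps of this group; other files keep running.
    if example.example_group.instance_variable_get(:@__step_failed)
      skip 'an earlier step in this scenario failed'
    end
  end
end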

How to run Jasmine tests in serial

I use Jasmine to test my server-side code and I need to run tests serially, not in parallel.
My tests need to perform CRUD operations on a database. If tests are executed in parallel, I can't ensure that the database is in a good state for each test.
Unless you explicitly choose to create asynchronous tests in Jasmine, everything in Jasmine happens sequentially, in the sense that one test runs only after its preceding test has finished. And if you do write asynchronous tests, then parts of your single test may run in parallel, but you still have the constraint that one test runs only after its preceding test has finished.
However, there are a couple caveats to be aware of:
In an async test, if your code exceeds Jasmine's timeout period, you might still have code running when Jasmine decides to give up on that test and proceed to the next. (Thanks to @Gregg for this tip; see this answer.)
"JavaScript is usually considered to have a single thread of execution... however, in reality this isn't quite true, in sneaky nasty ways." I am quoting @bobince from this answer.

What results can I force in a cucumber scenario

Using ruby / cucumber, I know you can explicitly call a fail("message"), but what are your other options?
The reason I ask is that we have zero... I repeat, absolutely NO control over our test data. We have cucumber tests that cover edge cases that we may or may not have users for in our database. We (for obvious reasons) do not want to throw away the tests, because they are valuable; however, since our data set cannot exercise those edge cases, the tests fail because the SQL statement returns an empty data set. Right now we just let those tests fail, but I would like to see something along the lines of "no_data" when the SQL statement returns an empty data set, so the output would look like:
Scenarios: 100 total (80 passed, 5 no_data, 15 fail)
I am willing to use the already implemented "skipped" status if there is a skip("message") function.
What are my options, so we can see that with the current data we just don't have any test data for those tests? Making these manual tests is also not an option; they need to be run every week with our automation, but somehow separate from the failures. Failure means defect; no_data means it's not a testable condition. It's the difference between a warning (we have not tested this edge case) and an alert (broken code).
You can't invoke 'skipped', but you can certainly call pending, with or without an error message. I've used this in a similar situation to yours. Unless you're running in strict mode, having pending scenarios won't cause any failures. The problem I encountered was that occasionally a step would get misspelled, causing Cucumber to mark that scenario as pending because the step did not match a step definition. That then became lost in the sea of 'legitimate' pending scenarios, and it was weeks before we discovered it.
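For illustration, a sketch of that pattern inside a step definition; fetch_edge_case_rows is a hypothetical helper standing in for the real SQL query:
Then(/^the edge case users are handled correctly$/) do
  rows = fetch_edge_case_rows   # hypothetical helper wrapping the SQL statement
  # Mark the scenario as pending instead of failing it when the current
  # data set simply has nothing to test against.
  pending('no_data: current data set has no rows for this edge case') if rows.empty?
  # ...the normal assertions against rows go here...
end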

Resources