I've been using Ginkgo for a while and I've found a behavior I don't really understand. I have a set of specs that I want to run if and only if a condition is met; if the condition is not met, I want to skip the whole test suite.
Something like this:
ginkgo.BeforeSuite(func() {
    if !CheckCondition() {
        ginkgo.Skip("condition not available")
    }
})
When the suite is skipped this counts as a failure.
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
I assumed there would be one test reported as skipped. Am I missing something? Any comments are welcome.
Thanks
I think you are using the Skip method incorrectly. It should be used inside a spec, as shown below, not inside BeforeSuite. When used inside a spec it does show up as "skipped" in the summary.
It("should do something, if it can", func() {
if !someCondition {
Skip("special condition wasn't met")
}
})
https://onsi.github.io/ginkgo/#the-spec-runner
Related
Is it possible to run a specific part of a test in Cypress over and over again without executing the whole test case? I get an error in the second part of a test case, and the first half takes 100s, which means I have to wait 100s every time just to reach the point where the error occurs. I would like to rerun the test case starting a few steps before the error occurs. So, once again: is that possible in Cypress? Thanks
Workaround #1
If you are using Cucumber in Cypress, you can convert your scenario into a Scenario Outline that executes N times, and mark it with a scenario tag:
#runMe
Scenario Outline: Visit Google Page
Given that google page is displayed
Examples:
| nthRun |
| 1 |
| 2 |
| 3 |
| 4 |
| 100 |
After that, run the test in the terminal by filtering on tags:
./node_modules/.bin/cypress-tags run -e TAGS='#runMe'
Reference: https://www.npmjs.com/package/cypress-cucumber-preprocessor?activeTab=versions#running-tagged-tests
Workaround #2
Cypress does have a retry capability, but it only retries a scenario on failure. You can force your scenario to fail so that it is retried N times, again using a scenario tag.
In your cypress.json, add the following configuration:
{
  "retries": {
    // Configure retry attempts for `cypress run`
    // Default is 0
    "runMode": 99,
    // Configure retry attempts for `cypress open`
    // Default is 0
    "openMode": 99
  }
}
Reference: https://docs.cypress.io/guides/guides/test-retries#How-It-Works
Next, in your feature file, add an unknown step as the last step of your scenario to make it fail:
#runMe
Scenario: Visit Google Page
Given that google page is displayed
And I am an unknown step
Then run the test through tags:
./node_modules/.bin/cypress-tags run -e TAGS='#runMe'
For a solution that doesn't require changing the config file, you can pass retries as an option to specific tests that are known to be flaky for acceptable reasons.
https://docs.cypress.io/guides/guides/test-retries#Custom-Configurations
Meaning you can write (from the docs):
describe('User bank accounts', {
  retries: {
    runMode: 2,
    openMode: 1,
  }
}, () => {
  // The per-suite configuration is applied to each test
  // If a test fails, it will be retried
  it('allows a user to view their transactions', () => {
    // ...
  })

  it('allows a user to edit their transactions', () => {
    // ...
  })
})
I am trying to skip all remaining tests in all spec files if one test fails, and I found a working solution here: Is there a reliable way to have Cypress exit as soon as a test fails?. However, this only seems to work if the test fails in it() assertions. How can we skip the tests if something fails in beforeEach()?
For example:
before(() => {
  cy.get('[data-name="email-input"]').type(email);
  cy.get('[data-name="password-input"]').type(email);
  cy.get('[data-name="account-save-btn"]').click();
});
If something goes wrong in the above code (for example: CypressError: Timed out retrying: Expected to find element: '[data-name="email-input"]', but never found it.), then stop/skip all tests in all spec files.
Just in case anyone is looking for an answer to the same question: I have found a solution and would like to share it.
To implement the solution, I use a cookie that is set to true when something fails. Before executing each test, Cypress checks the value of that cookie, and if it is 'true' it skips the test.
// Whenever any test fails, record that fact in a cookie before rethrowing
Cypress.on('fail', error => {
  document.cookie = "shouldSkip=true";
  throw error;
});

// Before each test, stop the runner if the cookie has been set
function stopTests() {
  cy.getCookie('shouldSkip').then(cookie => {
    if (cookie && typeof cookie === 'object' && cookie.value === 'true') {
      Cypress.runner.stop();
    }
  });
}

beforeEach(stopTests);
Also note: tests should be written in it() blocks; avoid putting test steps in before().
As of Cypress 10, tests don't run if a before or beforeEach hook fails.
I would like the Gradle output to include timing information for how long a whole *.java test class took to run AND for each individual test. Is there a way to do that with Gradle?
Currently, I just have
beforeTest { descr ->
    logger.warn("Starting Test ${descr.className} : ${descr.name}")
}
It depends on your intent. For debugging purposes, I usually run Gradle with the --profile flag, which generates a full report of task execution times. See Gradle Command Line.
If you wish to do something ad hoc with the times, you'll need to code the desired behavior yourself. For example, this prints the execution time of each test:
test {
    afterTest { descriptor, result ->
        def totalTime = result.endTime - result.startTime
        println "Total time of $descriptor.name was $totalTime"
    }
}
See also:
Testing
TestResult
A variation of the accepted answer that converts milliseconds to seconds:
test {
    afterSuite { descriptor, result ->
        def duration = java.util.concurrent.TimeUnit.MILLISECONDS
            .toSeconds(result.endTime - result.startTime)
        println "Total duration of $descriptor: $duration seconds"
    }
}
I would like to use a serverspec check and run it against two acceptable outcomes, so that if either passes then the check passes. I want my check to pass if the exit status of the command is either 0 or 1. Here is my check:
describe command("rm /var/tmp/*.test") do
  its(:exit_status) { should eq 0 }
end
Right now it can only check if the exit status is 0. How can I change my check to use either 0 or 1 as an acceptable exit status?
Use a compound matcher.
its(:exit_status) { should eq(0).or eq(1) }
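For example, dropped into the check from the question, the full block would look something like this (a minimal sketch; it relies on RSpec 3's compound matchers, which Serverspec supports):
describe command("rm /var/tmp/*.test") do
  # passes when the command exits with status 0 or 1
  its(:exit_status) { should eq(0).or eq(1) }
end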
Below are a few lines from my test case. The first assertion fails, but why? The second does not.
result = Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear"))
assert_equal(Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear")), Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear")))
assert_equal(result, result)
Here is the actual error:
Run options:
# Running tests:
.F.
Finished tests in 0.004000s, 750.0000 tests/s, 1750.0000 assertions/s.
1) Failure:
test_parse_subject(ParserTests) [test_fournineqa.rb:30]:
#<Sentence:0x21ad958 @object="princess", @subject="bear", @verb="kill"> expected but was
#<Sentence:0x21acda0 @object="princess", @subject="bear", @verb="kill">.
3 tests, 7 assertions, 1 failures, 0 errors, 0 skips
It looks like you have defined a class Sentence but have provided no way to compare two Sentence instances, leaving assert_equal comparing the identities of two objects to discover that they are not the same instance.
A simple fix would be something like:
class Sentence
  def ==(sentence)
    @subject == sentence.subject and
      @verb == sentence.verb and
      @object == sentence.object
  end
end
The first assertion compares two different objects that have the same content, whereas the second assertion compares an object with itself. Apparently "equal" in this context means "the same object". (Check the implementation.)
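As a minimal, hypothetical illustration of that default behavior (using a made-up Point class rather than the Sentence class from the question), Ruby's Object#== compares object identity unless a class overrides it:
class Point
  def initialize(x)
    @x = x
  end
end

a = Point.new(1)
b = Point.new(1)

a == b  # => false: same content, but two different objects (identity comparison)
a == a  # => true:  an object always equals itself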