I use mocha/chai in my Meteor project for testing. When one expect fails, the subsequent expects in the same it function are not executed. Is it possible to execute the remaining expects, too?
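For illustration, a minimal sketch of the situation (the user object and its fields are made up):

    import { expect } from 'chai';

    const user = { name: 'Bob', age: 30, email: 'bob@example.com' }; // made-up data

    describe('user validation', function () {
      it('checks several fields', function () {
        // chai's expect throws an AssertionError on the first failure,
        // so when this line fails...
        expect(user.name).to.equal('Alice');
        // ...the remaining expects in this it() are never evaluated
        expect(user.age).to.equal(30);
        expect(user.email).to.match(/@/);
      });
    });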
I have a Cypress workflow in GitHub Actions and it runs nicely. But when the e2e tests fail for some reason and I want to re-run them using the Re-run all jobs button, the following message appears:
The run you are attempting to access is already complete and will not accept new groups.
The existing run is: https://dashboard.cypress.io/projects/abcdef/runs
When a run finishes all of its groups, it waits for a configurable set of time before finally completing. You must add more groups during that time period.
The --tag flag you passed was:
The --group flag you passed was: core
What should I change in my configuration to make this possible? Sometimes the e2e tests fail because of a backend error that is fixed later.
I'd like to do this instead of pushing a forced e2e commit.
I was facing the same issue before.
You can try passing GITHUB_TOKEN or adding a custom build id; that fixed my issue. Hope it helps.
https://github.com/cypress-io/github-action#custom-build-id
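For reference, a rough sketch of what that could look like with cypress-io/github-action (the surrounding workflow and the ci-build-id value are assumptions; any value that changes between re-runs should work):

    - name: Cypress run
      uses: cypress-io/github-action@v2
      with:
        record: true
        parallel: true
        group: core
        # a build id that changes on "Re-run all jobs" starts a fresh
        # Dashboard run instead of attaching to the completed one
        ci-build-id: ${{ github.run_id }}-${{ github.run_attempt }}
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}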
Check your Cypress Dashboard subscription plan. Mine hit the free plan's limit (500 recorded tests for free, and I was running 57 tests in 3 different browsers, i.e. 171 tests in one run, so it filled up pretty quickly), and after that it didn't allow me to keep running or re-running more parallel tests. Tests kept running, but on only 1 machine out of 4 in the first browser, and the stages for the other 2 browsers started failing. I was able to keep the pipeline from failing by passing continueOnError: true in the configuration.
Quick edit: I don't remember where, but I read that you could also add a delay to your pipeline and/or reduce the run completion delay on the Dashboard, whose default is 60s (https://docs.cypress.io/guides/guides/parallelization#Run-completion-delay).
Our automation tests run in a GitLab CI environment. We have a regression suite of around 80 tests.
If a test fails due to some intermittent issue, the CI job fails and since the next stage is dependent on the Regression one, the pipeline gets blocked.
We retry the job to rerun the regression suite, expecting it to pass this time, but then some other test fails.
So, my question is:
Is there any capability by which, on retrying a failed CI job, only the failed tests run (not the whole suite)?
You can use the retry keyword when you specify the parameters for a job, to define how many times the job can be automatically retried: https://docs.gitlab.com/ee/ci/yaml/#configuration-parameters
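For example (job name and script are placeholders):

    regression:
      stage: test
      script:
        - npm run regression        # your actual test command
      retry:
        max: 2                      # retry the whole job up to 2 times
        when: script_failure        # only on test/script failures

Note that this re-runs the entire job automatically; it does not limit the retry to the failed tests.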
Retry only failed scenarios
Yes, but it depends. Let me explain. I'll outline the pseudo-steps that can be performed to retry only the failed scenarios. The steps are specific to pytest, but can be adapted to other test runners.
1. Execute the test scenarios with --last-failed. On the first run, all 80 scenarios will be executed.
2. The test runner creates a metadata file containing the list of failed tests. For example, pytest creates a .pytest_cache folder containing a lastfailed file with the list of failed scenarios.
3. Add the .pytest_cache folder to the GitLab cache with key=<gitlab-pipeline-id>.
4. The user sees that there are 5 failures and retries the failed job.
5. When the job is retried, it finds the .pytest_cache folder in the GitLab cache and copies it into the test-running directory. (This step shouldn't fail if the cache doesn't exist, so that the first execution is handled.)
6. Execute the same test command with the same --last-failed parameter, so that only the tests which failed earlier run.
7. In the rerun, only the 5 failed test cases will be executed.
Assumptions:
The test runner you are using creates a metadata file of failed tests, as pytest does.
POC required:
I have not done a POC for this, but in theory it looks possible. The only doubt I have is how GitLab parses the results: ideally, in the final result, all 80 scenarios should pass. If it doesn't work out this way, then we would need 2 jobs, execute tests -> [manual] execute failed tests, to get 2 parsed results. I am sure that with 2 stages it will definitely work.
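A minimal .gitlab-ci.yml sketch of the idea, assuming pytest (job name and report file are placeholders):

    regression:
      stage: test
      cache:
        key: "$CI_PIPELINE_ID"      # same key across retries of one pipeline
        paths:
          - .pytest_cache/          # holds pytest's lastfailed metadata
      script:
        # first run: no metadata exists, so all tests execute;
        # on a retried job the cache restores .pytest_cache and only
        # the previously failed tests execute
        - pytest --last-failed --junitxml=report.xml
      artifacts:
        when: always
        reports:
          junit: report.xml         # lets GitLab show per-test results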
You can use a Retry Analyzer, which re-runs a failed test in place during the same run. This will definitely help you.
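The answer doesn't name a test runner; if yours happens to be TestNG, a minimal sketch of its IRetryAnalyzer hook looks like this (MAX_RETRIES is an arbitrary choice):

    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;

    public class RetryAnalyzer implements IRetryAnalyzer {
        private static final int MAX_RETRIES = 2;
        private int attempts = 0;

        @Override
        public boolean retry(ITestResult result) {
            // returning true makes TestNG re-run the failed test immediately,
            // inside the same job, so the CI pipeline never sees the flake
            return attempts++ < MAX_RETRIES;
        }
    }

It is attached per test with @Test(retryAnalyzer = RetryAnalyzer.class).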
I'm trying to abort/stop/cancel a test run from TestInitialize.
What I'm trying to do is check whether the test method has a custom attribute and, if so, abort the test.
TestContext doesn't have this ability.
Is there any solution to abort a test using code?
Thanks
If you want to stop the whole test run, you can call Process.GetCurrentProcess().Kill() or Environment.Exit. This will stop the current process (where the tests are running).
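A minimal sketch of that from TestInitialize, assuming MSTest (AbortRunAttribute is a hypothetical custom attribute):

    using System;
    using System.Reflection;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [AttributeUsage(AttributeTargets.Method)]
    public class AbortRunAttribute : Attribute { }

    [TestClass]
    public class MyTests
    {
        public TestContext TestContext { get; set; }

        [TestInitialize]
        public void TestInitialize()
        {
            // look up the test method that is about to run by name
            MethodInfo method = GetType().GetMethod(TestContext.TestName);
            if (method?.GetCustomAttribute<AbortRunAttribute>() != null)
            {
                // kills the test process, aborting the whole run
                Environment.Exit(1);
            }
        }

        [TestMethod]
        [AbortRun]
        public void SomeTest() { /* never runs */ }
    }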
If you want to skip only the test that is about to run, you could instead group your tests so that only the relevant tests are executed:
How to: Group and Run Automated Tests Using Test Categories
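For instance, tagging tests with a category and filtering at run time (the category name is arbitrary):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class NightlyTests
    {
        [TestMethod]
        [TestCategory("Nightly")]   // grouped; selected only when the filter matches
        public void LongRunningScenario()
        {
            // ...
        }
    }

You can then run only that group, e.g. vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Nightly".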
My group will be implementing CI using Jenkins. As such, I want to make sure that any unit and/or integration tests we create integrate easily with Jenkins. We have several different technologies in our stack, from C++ code to Oracle PL/SQL packages to Groovy code. We want to develop test drivers (code that wraps and tests these individual code units) that we can integrate with Jenkins so that these tests are automatically run on every git commit as well as on a nightly basis. My question is: what are the best practices for writing these test drivers so that they will integrate easily with Jenkins when we implement it?
For example, we have a PL/SQL stored procedure that we want to run tests against as part of our CI testing. I could write a bash shell script that wraps calls to it, I could write a Java program that calls it; basically I could wrap it in anything. Then the next question is: is there some sort of standard for outputting results so that Jenkins can easily determine if the test passed or failed?
is there some sort of standard for outputting results so that Jenkins can easily determine if the test passed or failed?
If your test results are JUnit-compliant, Jenkins has a JUnit plugin which gives you a better way of tracing test reports (result trend graphs) and also archives test results. Converting an Ant test log to JUnit format is easy.
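For reference, a minimal example of the JUnit XML format that the plugin consumes (suite and test names are made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <testsuite name="plsql-tests" tests="2" failures="1">
      <testcase classname="pkg_orders" name="test_create_order"/>
      <testcase classname="pkg_orders" name="test_cancel_order">
        <failure message="expected 1 row, got 0"/>
      </testcase>
    </testsuite>

Your wrapper, in whatever language you write it, only needs to emit a file like this for Jenkins to pick up.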
useful links:
http://nose2.readthedocs.org/en/latest/plugins/junitxml.html
https://wiki.jenkins-ci.org/display/JENKINS/JUnit+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin
Jenkins and JUnit
Basically I could wrap it in anything.
Among your choices, I would personally go with Java, because it gives you better APIs for creating XML files.
Use Python's unittest to wrap any of your tests.
Produce JUnit XML test results.
One easy way to get any Python unittest to write out JUnit XML is from the command line:
yum install pytest
And call your test script like this:
py.test --junitxml result.xml testscript.py
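For completeness, a minimal testscript.py that the command above could run (the test itself is just an illustration):

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_addition(self):
            # any plain unittest test works; pytest discovers and runs it
            self.assertEqual(1 + 1, 2)

    if __name__ == "__main__":
        unittest.main()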
And in the Jenkins build configuration, under Post-build Actions, add a "Publish JUnit test result report" action with result.xml and any other test result files you produce.
https://docs.python.org/2.7/library/unittest.html
This is just one way of producing JUnit XML results with Python. There are a good few other methods, using the unittest module, junitxml, or others.
I want to call some function once before all tests start, and I need to know which tests are going to run. For example, if I selected TestMethod1 and TestMethod3 in my test plan and ran those two test cases, I would need to get the test method information for TestMethod1 and TestMethod3.
Is there any way to do that?