Go-CD - How do you stop publishing artifacts when JUnit, Jasmine, or regression tests fail in GoCD?

We are actively using GoCD. We get JUnit, Jasmine, and other test results; however, the build artifacts are always published by GoCD and are then picked up by other agents to perform automated deployment.
We wish to set percentage thresholds for JUnit, Jasmine, etc., and if the observed value is lower than the threshold, we want GoCD not to publish the artifacts.
Any ideas?

Ideally, after report creation another task kicks in which verifies the report results.
It could be, for example, a grep command inside a shell script looking for the words "fail" or "error" in the XML report files. As soon as the task finishes with a return code not equal to 0, GoCD considers the task to have failed.
The same applies to the percentage marker: a task is needed which calculates the percentage and then exits with an appropriate return code, 0 when the percentage goal is met or exceeded, and non-zero when the goal has not been met. This could also be implemented as a custom task, such as a shell script evaluating the reports.
The pipeline itself can be configured not to publish any artifacts in case the task fails or errors.
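A minimal sketch of such a verification task, assuming JUnit-style XML reports under a build/test-results directory and a 95% threshold (both are placeholders to adapt to your setup). It sums the tests/failures/errors attributes of the report files and exits non-zero when the pass rate is below the threshold:

    #!/usr/bin/env bash
    # Sketch of a GoCD verification task: sum the tests/failures/errors attributes
    # of JUnit-style XML reports and exit non-zero when the pass rate is below a
    # threshold. REPORT_DIR and MIN_PASS_PCT are placeholders for your setup.
    set -u

    REPORT_DIR="${1:-build/test-results}"   # where the XML reports land (assumed)
    MIN_PASS_PCT="${2:-95}"                 # required pass percentage (assumed)

    sum_attr() {
        # Sum one integer attribute (tests, failures, errors) across all reports.
        # Note: this counts the attribute wherever it appears, so adjust the pattern
        # if your files also carry aggregate <testsuites> totals.
        grep -h -o "$1=\"[0-9]*\"" "$REPORT_DIR"/*.xml 2>/dev/null \
            | grep -o '[0-9]*' \
            | awk '{ s += $1 } END { print s + 0 }'
    }

    total=$(sum_attr tests)
    failed=$(( $(sum_attr failures) + $(sum_attr errors) ))

    if [ "$total" -eq 0 ]; then
        echo "No test results found in $REPORT_DIR" >&2
        exit 1
    fi

    pass_pct=$(( (total - failed) * 100 / total ))
    echo "Pass rate: ${pass_pct}% (required: ${MIN_PASS_PCT}%)"

    # Return 0 only when the percentage goal is met or exceeded;
    # any other return code makes GoCD fail the task.
    [ "$pass_pct" -ge "$MIN_PASS_PCT" ]

Registered as an ordinary task that runs after the tests, a non-zero exit here fails the job, which is the signal the answer above relies on for skipping artifact publication.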

Related

Is there a way to get unique JaCoCo .exec file names?

Currently, at my internship, I have my jacoco-maven-plugin set up so that after the unit and integration test reports are generated, they are merged into one.
Now, adding CircleCI into the mix, we run parallel jobs for our tests. These jobs cause only the longest-running job's .exec file to be used as the basis for the report, meaning all other tests are ignored and our coverage is lowered.
I thought the solution would be to give the files unique names and then merge all .exec files using a wildcard within my POM, but there seems to be no documentation on it.
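One hedged sketch of the "unique names, then merge" idea is to do the merge outside the POM with the JaCoCo command-line interface rather than a wildcard in the plugin configuration; the CIRCLE_NODE_INDEX variable, the jar version, and every path below are assumptions to adapt to the actual build:

    #!/usr/bin/env bash
    # Hedged sketch: rename each parallel job's coverage file so it cannot collide,
    # then merge all collected .exec files with the JaCoCo CLI in a fan-in job.
    set -eu

    # In each parallel CircleCI job: give the coverage file a unique name, then
    # persist target/jacoco-*.exec via a workspace or as an artifact.
    mv target/jacoco.exec "target/jacoco-${CIRCLE_NODE_INDEX:-0}.exec"

    # In a later fan-in job, after collecting the files into collected/:
    java -jar org.jacoco.cli-0.8.8-nodeps.jar merge \
        collected/jacoco-*.exec \
        --destfile target/jacoco-merged.exec
    # The merged file can then feed the normal report generation.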

How do I read and compare JMeter results in a CI/CD pipeline?

I don't have any experience in non-functional testing, but I have just written a JMeter test and hooked it up in GitLab CI. I am generating a testresults.jtl and have added it to the artifacts.
However, I am not sure how to read the results, or how to compare them with previous results so that I can see or get notified if there are any changes in performance.
What should I do?
You can consider using the Taurus tool, which:
Has a JUnit XML Reporter producing JUnit-style XML result files which can be "understood" by GitLab CI.
Has a Pass/Fail Criteria subsystem where you can specify error thresholds; if, for example, the response time is higher than the defined value, Taurus will stop with a non-zero exit status code, so GitLab will automatically fail the build on getting a non-zero exit code.
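A minimal sketch of how those two pieces could be wired together from a GitLab CI job script, assuming Taurus (bzt) is installed and the existing test plan is called test-plan.jmx; the file names and threshold values are assumptions, and the criteria syntax should be checked against the Taurus documentation for the version you use:

    #!/usr/bin/env bash
    # Sketch of a GitLab CI job step: wrap the existing JMeter plan with Taurus so
    # the run produces a JUnit-style XML report and exits non-zero when the
    # pass/fail criteria are breached.
    set -eu

    cat > taurus.yml <<'EOF'
    execution:
    - scenario:
        script: test-plan.jmx          # the existing JMeter test plan (assumed name)

    reporting:
    - module: junit-xml                # JUnit-style XML that GitLab CI can pick up
      filename: taurus-results.xml
    - module: passfail                 # stop with a failure when a criterion is breached
      criteria:
      - avg-rt>500ms for 30s, stop as failed
      - failures>5% for 10s, stop as failed
    EOF

    # bzt returns a non-zero exit status when a "stop as failed" criterion fires,
    # so GitLab automatically fails the job.
    bzt taurus.yml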

How to retry only failed tests in the CI job run on GitLab?

Our automation tests run in gitlab CI environment. We have a regression suite of around 80 tests.
If a test fails due to some intermittent issue, the CI job fails and since the next stage is dependent on the Regression one, the pipeline gets blocked.
We retry the job to rerun the regression suite, expecting it to pass this time, but then some other test fails.
So, my question is:
Is there any capability by which, on retrying the failed CI job, only the failed tests run (not the whole suite)?
You can use the retry keyword when you specify the parameters for a job, to define how many times the job can be automatically retried: https://docs.gitlab.com/ee/ci/yaml/#configuration-parameters
[Retry Only Failed Scenarios]
Yes, but it depends. Let me explain. I'll mention the pseudo-steps which can be performed to retry only failed scenarios. The steps are specific to pytest, but can be adapted to other test runners.
Execute the test scenarios with --last-failed. At first, all 80 scenarios will be executed.
The test runner creates a metadata file containing a list of failed tests. For example, pytest creates a .pytest_cache folder containing a lastfailed file with the list of failed scenarios.
We now have to add the .pytest_cache folder to the GitLab cache with key=<gitlab-pipeline-id>.
The user sees that there are, say, 5 failures and reruns the failed job.
When the job is retried, it will see that the .pytest_cache folder now exists in the GitLab cache and will copy the folder into the test-running directory. (This step shouldn't fail if the cache doesn't exist, to handle the first execution.)
Execute the same test cases with the same --last-failed parameter so that only the previously failed tests are run.
In the rerun, only the 5 failed test cases will be executed.
Assumptions:
The test runner you are using creates a metadata file like pytest.
POC Required:
I have not done a POC for this, but in theory it looks possible. The only doubt I have is how GitLab parses the results. Ideally, in the final result all 80 scenarios should pass. If it doesn't work out that way, then we have to have two jobs: execute tests -> [manual] execute failed tests, to get two parsed results. I am sure that with two stages it will definitely work.
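A minimal sketch of the job script side of this flow, assuming pytest and a cache entry in .gitlab-ci.yml keyed on the pipeline id (for example key: $CI_PIPELINE_ID with paths: [.pytest_cache]) so the folder survives between the first run and the manual retry:

    #!/usr/bin/env bash
    # Sketch of the job script for the flow above. It assumes the GitLab cache
    # restores .pytest_cache on a retry; on the first run the folder is absent
    # and the full suite executes. Paths are assumptions.
    set -u

    if [ -d .pytest_cache ]; then
        echo ".pytest_cache restored from cache: re-running previously failed tests only."
    else
        echo "No .pytest_cache yet: running the full suite."
    fi

    # --last-failed re-runs only the tests recorded as failed in .pytest_cache;
    # when no failure data exists yet, pytest falls back to running everything.
    pytest --last-failed tests/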
You can use a Retry Analyser. This will definitely help you.

How to re-run only failed tests (failed data rows) in the Visual Studio Test task?

We have a build pipeline for our automated (Selenium) scripts in Azure DevOps, and we use BrowserStack to run all our scripts. Sometimes we get timeout issues; no matter what additional options we add to the browser settings, we still get timeouts. So we decided to re-run the tests up to a certain limit, and then the pass percentage went up and the scripts passed without any issues.
We have different data rows for each test method. Now, when a test method fails for a particular data row, the entire test (including all the passed data rows) gets executed again in the re-run, which is unnecessary since some rows have already passed.
Is there any way to re-run only the failed data row of a test?
As seen in the screenshot below, on the regular attempt data row 0 failed and the others passed, yet the re-run executes all the passed tests again.
Test result screenshot
Note: We are using the Batch option to re-run the failed tests. We also tried the Automatic batch option, where the re-run failed because of a test-name issue in vstest.console.exe (there is a formatting issue if the test name contains spaces or round braces).

Can JMeter Assert on Throughput?

Is it possible to have a Maven/Jenkins build fail due to JMeter tests failing to achieve a specified throughput?
I'm tinkering with the jmeter-maven-plugin in order to do some basic performance tests as part of a continuous integration process in Jenkins.
One can set a Duration Assertion on a Sampler in the JMeter test to mark the sample as failed if a response time is over a certain threshold. What I'd like is to be able to fail a build (in Maven, or in the Jenkins job that triggers the Maven build) based on the total throughput for a test.
Is this possible with any existing plugins?
Yes, it's possible. You can use the Jenkins Text Finder plugin and the JMeter "aggregate report". With the aggregate report you can write a CSV or XML file. You can search this file for your throughput with the Jenkins Text Finder plugin and then mark the build as failed or unstable. Alternatively, you can use a Bash script to parse the generated JMeter report file and return a non-zero exit code. This will make your build fail.
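A minimal sketch of the Bash-script variant, assuming a CSV results file with a header row and the default JMeter column order (timeStamp first); it derives the overall throughput from the first and last timestamps and exits non-zero when it is below a threshold, which fails the Maven/Jenkins step:

    #!/usr/bin/env bash
    # Sketch: compute overall throughput from a JMeter CSV results file and
    # fail (non-zero exit) when it is below a threshold. The file path, column
    # layout, and threshold are assumptions to adapt to your test.
    set -eu

    RESULTS_FILE="${1:-results.jtl}"     # JMeter results file (assumed path)
    MIN_THROUGHPUT="${2:-50}"            # required requests per second (assumed)

    throughput=$(awk -F',' 'NR > 1 {
        ts = $1 + 0
        if (min == 0 || ts < min) min = ts
        if (ts > max) max = ts
        count++
    } END {
        duration = (max - min) / 1000    # timestamps are in milliseconds
        if (duration <= 0) duration = 1
        printf "%.2f", count / duration
    }' "$RESULTS_FILE")

    echo "Measured throughput: ${throughput} req/s (required: ${MIN_THROUGHPUT} req/s)"

    # A non-zero exit code here marks the Maven/Jenkins step as failed.
    awk -v t="$throughput" -v m="$MIN_THROUGHPUT" 'BEGIN { exit (t + 0 >= m + 0) ? 0 : 1 }'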
