TeamCity Failure Condition for ESLint warning increase does not stop build (using eslint-teamcity)

I have a TeamCity step that runs this script:
"teamcity:eslint": eslint src/**/*.{ts,tsx} --format ./node_modules/eslint-teamcity/index.js --max-warnings 0
It uses eslint-teamcity to format the lint errors and warnings.
This is the package.json configuration:
"eslint-teamcity": {
"reporter": "inspections",
"report-name": "ESLint Violations",
"error-statistics-name": "ESLint Error Count",
"warning-statistics-name": "ESLint Warning Count"
},
I created a test "master" branch with 2 lint warnings and TeamCity "Inspections" shows them:
I have set this Failure Condition:
Now, to test it I created a branch with 3 or 4 lint warnings.
I committed it, but the build does not fail even though the number of warnings has increased:
I expect the build to fail.
I have no idea how or where TeamCity stores the "inspection" warnings counter for that Failure Condition, so I don't know how to investigate this unexpected behaviour.
Or did I miss some step/configuration?
TeamCity 2019.2
Failure Condition code:
failOnMetricChange {
    metric = BuildFailureOnMetric.MetricType.INSPECTION_WARN_COUNT
    units = BuildFailureOnMetric.MetricUnit.DEFAULT_UNIT
    comparison = BuildFailureOnMetric.MetricComparison.MORE
    compareTo = build {
        buildRule = buildWithTag {
            tag = "test-master"
        }
    }
    stopBuildOnFailure = true
}

I used another TeamCity project, which was checking test coverage, as the example for this Failure Condition.
I took it for granted (I didn't pay attention) that tag: master in the Failure Condition was actually looking at the "master" branch for reference.
At the end of the log (many other steps ran after my attempt) I finally saw this warning:
Cannot find Latest build with tag: 'test-master', branch filter: feature/test-warnings to calculate metric 'number of inspection warnings' for branch feature/test-warnings
I'm still not sure whether this also means the comparison cannot be done against another branch, but it answers my question about the Failure Condition not working as expected.
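For reference, a minimal sketch of the same condition comparing against the latest successful build instead of a tagged build, assuming the lastSuccessful() build rule of the TeamCity Kotlin DSL suits your setup (which build it resolves to still depends on the configuration's branch settings):
failOnMetricChange {
    metric = BuildFailureOnMetric.MetricType.INSPECTION_WARN_COUNT
    units = BuildFailureOnMetric.MetricUnit.DEFAULT_UNIT
    comparison = BuildFailureOnMetric.MetricComparison.MORE
    // Compare against the latest successful build rather than a build
    // tagged "test-master", which may not exist for a feature branch.
    compareTo = build {
        buildRule = lastSuccessful()
    }
    stopBuildOnFailure = true
}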

Related

What is the easiest way to make Azure pipeline stage fail for debugging purposes

I would like to make a given stage fail to test my conditions
- stage: EnvironmentDeploy
  condition: and(succeeded()...)
Is it possible to make a stage fail on purpose?
The easiest way I can think of is adding a job with a PowerShell script, and using the throw keyword to exit the script with an error:
stages:
- stage: StageToFail
  jobs:
  - job: JobToFail
    steps:
    - pwsh: throw "Throwing error for debugging purposes"
I assume you don't want this failure to happen consistently, as that would make deployment impossible. Assuming it's a one-off thing, why not push code with compile errors so the build fails, or, if your code base has them and your pipeline runs them as a prerequisite for deployment, add a unit test that fails.
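If PowerShell isn't available, an equivalent sketch with a plain script step works the same way, since any step that exits with a non-zero code fails its stage (stage and job names here just mirror the example above):
stages:
- stage: StageToFail
  jobs:
  - job: JobToFail
    steps:
    # Any non-zero exit code marks the step, and therefore the stage, as failed.
    - script: exit 1
      displayName: 'Fail on purpose for debugging'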

Jenkins: Catch error to skip to next stage, but still show errored stage has failed

I am currently using Jenkins catchError to catch if an error has occurred within a stage, and if so, then skip to the next stage.
What I would like is for a stage that caught an error to be presented in Jenkins as Red (Failed), even though the pipeline skipped ahead to the next stage.
This is how the Jenkins pipeline stage is coded to catch the error if the script fails:
stage('test_run') {
    steps {
        catchError {
            sh """#!/bin/bash -l
                ***
                npm run test:run:reporter
            """
        }
    }
}
I found this solution in StackOverflow:
Jenkins: Ignore failure in pipeline build step
This solution works, but in the Jenkins run the stage that failed is presented as Green (aka Success).
This Jenkins run indicates that it failed:
The following stage was actually the cause of the failure and was skipped after the caught error; however, this stage is showing Green (Success) and I would prefer to present it as Red (Failed):
The final post actions are showing as Yellow (Unstable), when normally they show as Green (Success):
Thank you for reading, always appreciate the assistance.
You can use
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
    // Add your commands
}
Set whatever result you want for the build result and the stage result.
@DashrathMundkar Thank you.
I used catchError() with buildResult and stageResult, but I set both values to 'FAILURE', and that did the trick.
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE', message: 'Test Suite had a failure') { }
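Put together with the original stage, a sketch of how the parameters slot in (same sh step as above, with the masked lines omitted):
stage('test_run') {
    steps {
        // Mark both the stage and the overall build as FAILURE,
        // but keep executing the following stages.
        catchError(buildResult: 'FAILURE', stageResult: 'FAILURE', message: 'Test Suite had a failure') {
            sh """#!/bin/bash -l
                npm run test:run:reporter
            """
        }
    }
}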

Check status of last build in openshift using untilEach openshift jenkins client plugin

I want to check the latest build status using the OpenShift Jenkins client plugin, following the official documentation here:
stage('Start build') {
    steps {
        script {
            openshift.withCluster() {
                openshift.withProject('my-project') {
                    openshift.selector("bc", "app_name").startBuild()
                }
            }
        }
        script {
            openshift.withCluster() {
                openshift.withProject('my-project') {
                    def builds = openshift.selector("bc", "app_name").related('builds')
                    timeout(5) {
                        builds.untilEach(1) {
                            return (it.object().status.phase == "Complete")
                        }
                    }
                }
            }
        }
    }
}
The above code starts a new build and then checks the status of all builds related to the build config. I want it to check only the status of the build that was just started.
It also waits for all the previous related builds to be Complete. Let's take the example below:
Previous old builds
Build #1 - Complete
Build #2 - Failed
Build #3 - Complete
Build #4 - Complete
When I execute the pipeline in Jenkins, a new Build #5 gets started, and I want the above code to check only that Build #5 reaches Complete. But this code checks that all the builds (Build #1 to Build #5) are in the Complete status. Because of that, the pipeline waits until all 5 builds are Complete, eventually times out, and the Jenkins build fails.
I only want it to check the status of the latest (last) build. The documentation doesn't have an example of that, but it should be possible. I vaguely understand it must be possible using watch, but I'm not sure how to do it.
Appreciate your help.
After much research, it turns out the answer was right there in front of my face. Just add the "--wait" argument to the startBuild step.
openshift.selector("bc", "app_name").startBuild("--wait")
This will return a non-zero code if the build fails and the stage will also fail.
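For reference, a sketch of the simplified stage using --wait, which makes the separate untilEach polling block unnecessary (same project and selector names as above):
stage('Start build') {
    steps {
        script {
            openshift.withCluster() {
                openshift.withProject('my-project') {
                    // --wait blocks until the triggered build finishes and
                    // returns a non-zero code if it fails, failing this stage.
                    openshift.selector("bc", "app_name").startBuild("--wait")
                }
            }
        }
    }
}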

Storing Artifacts From a Failed Build

I am running some screen diffing tests in one of my Cloud Build steps. The tests produce png files that I would like to view after the build, but it appears that artifacts are only uploaded on successful builds.
If my tests fail, the process exits with a non-zero code, which results in this error:
ERROR: build step 0 "gcr.io/k8s-skaffold/skaffold" failed: step exited with non-zero status: 1
Which further results in another error:
ERROR: (gcloud.builds.submit) build a22d1ab5-c996-49fe-a782-a74481ad5c2a completed with status "FAILURE"
And no artifacts get uploaded.
I added || true after my tests, so it exits successfully, and the artifacts get uploaded.
I want to:
A) Confirm that this behavior is expected
B) Know if there is a way to upload artifacts even if a step fails
Edit:
Here is my cloudbuild.yaml
options:
  machineType: 'N1_HIGHCPU_32'
timeout: 3000s
steps:
- name: 'gcr.io/k8s-skaffold/skaffold'
  env:
  - 'CLOUD_BUILD=1'
  entrypoint: bash
  args:
  - -x # print commands as they are being executed
  - -c # run the following command...
  - build/test/smoke/smoke-test.sh
artifacts:
  objects:
    location: 'gs://cloudbuild-artifacts/$BUILD_ID'
    paths: [
      '/workspace/build/test/cypress/screenshots/*.png'
    ]
Google Cloud Build doesn't allow us to upload artifacts (or run the remaining steps) if a build step fails. This is the expected behavior.
There is already a feature request in the Public Issue Tracker to allow running some steps even after the build has finished or failed. Please feel free to star it to get all the related updates on this issue.
A workaround for now is, as you mentioned, to append || true after the tests, or to use || exit 0 as mentioned in this GitHub issue.
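Applied to the cloudbuild.yaml above, the workaround is a one-line change to the test command (a sketch; note that it masks the real test result, so the failure has to be surfaced some other way):
steps:
- name: 'gcr.io/k8s-skaffold/skaffold'
  env:
  - 'CLOUD_BUILD=1'
  entrypoint: bash
  args:
  - -x
  - -c
  # '|| true' forces a zero exit code so the build is reported as
  # successful and the artifacts below still get uploaded.
  - build/test/smoke/smoke-test.sh || true
artifacts:
  objects:
    location: 'gs://cloudbuild-artifacts/$BUILD_ID'
    paths: ['/workspace/build/test/cypress/screenshots/*.png']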

Travis build fail with error 65

I have a Travis CI setup on GitHub. I use it to check my commits for an iOS app. The problem is that I very often and randomly get error 65, and I have yet to find a solution.
When I restart the job 2-3 times after it has failed, it passes about 90% of the time.
I previously also had a problem with the logs being too verbose for Travis (>4MB), but I added xcpretty to fix that.
Errors I took from log:
...
Generating 'XYZ.app.dSYM'
❌ error: couldn't remove '/Users/travis/Library/Developer/Xcode/DerivedData/XYZ-aaltcjvmshpmlufpmzdsgbernspl/Build/Products/Debug-iphonesimulator/XYZ.app/SomeName.storyboardc' after command failed: Directory not empty
...
And then at the end of Travis log:
Testing failed:
The file “056-Jj-FAu-view-XmS-Ro-0cO.nib” couldn’t be opened because there is no such file.
error: couldn't remove '/Users/travis/Library/Developer/Xcode/DerivedData/XYZ-aaltcjvmshpmlufpmzdsgbernspl/Build/Products/Debug-iphonesimulator/XYZ.app/SomeName.storyboardc' after command failed: Directory not empty
error: lipo: can't move temporary file: /Users/travis/Library/Developer/Xcode/DerivedData/XYZ-aaltcjvmshpmlufpmzdsgbernspl/Build/Products/Debug-iphonesimulator/XYZ.app.dSYM/Contents/Resources/DWARF/XYZ to file: /Users/travis/Library/Developer/Xcode/DerivedData/XYZ-aaltcjvmshpmlufpmzdsgbernspl/Build/Products/Debug-iphonesimulator/XYZ.app.dSYM/Contents/Resources/DWARF/XYZ.lipo (No such file or directory)
Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/dsymutil emitted errors but did not return a nonzero exit code to indicate failure
** TEST FAILED **
The following build commands failed:
LinkStoryboards
LinkStoryboards
(2 failures)
The command "./scripts/build.sh" exited with 65.
I am using Xcode 8 in both the Xcode and the Travis settings.
Ah, good question. Occasionally, xcodebuild steps that are failing during the codesigning step can be addressed using travis_retry - Travis will retry the step 3 times for any non-zero exit status, which should reduce the need for you to restart it manually. There are some suggested code snippets in the travis-ci/travis-ci GitHub issue on this as well. Good luck!
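A sketch of what that looks like in .travis.yml, assuming the build script from the log above is what your script phase runs:
script:
  # travis_retry re-runs the wrapped command up to three times
  # if it exits with a non-zero status.
  - travis_retry ./scripts/build.sh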
If you're running into error code 65 (from random failures), here's a command you can pipe onto the end of your xcodebuild command (assuming you're running tests) to get back more consistent results:
(XCODEBUILD_COMMAND_HERE) | awk 'BEGIN {success=0} $0 ~ /.* tests, with 0 failures \(.*/ {success=1} {print $0} END {if(success==0){exit 1}}'
This looks for "tests, with 0 failures (" in your output text, thus using the text output of xcodebuild instead of its exit status to determine success.
Note: keep in mind that if you do something like NSLog(' tests, with 0 failures ('); in your code you may get a false positive, although that is very unlikely to happen by accident. You may also have to update the "tests, with 0 failures (" pattern in the awk script between xcodebuild updates. But having consistent results with xcodebuild is definitely worth that price.
