What is the easiest way to make Azure pipeline stage fail for debugging purposes - yaml

I would like to make a given stage fail, to test my conditions:

- stage: EnvironmentDeploy
  condition: and(succeeded()...)

Is it possible to make a stage fail on purpose?

The easiest way I can think of is adding a job with a PowerShell script, and using the throw keyword to exit the script with an error:
stages:
- stage: StageToFail
  jobs:
  - job: JobToFail
    steps:
    - pwsh: throw "Throwing error for debugging purposes"
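If you prefer a plain shell step, a minimal sketch of the equivalent (any non-zero exit code fails the step; the displayName is illustrative):

steps:
- script: exit 1  # non-zero exit code marks the step, job, and stage as failed
  displayName: Fail on purpose for debugging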

I assume you don't want this failure to happen consistently, as that would make deployment impossible. Assuming it's a one-off thing, why not push code with a compile error so the build fails, or, if your code base has unit tests and your pipeline runs them as a prerequisite for deployment, add a unit test that fails?

Pass "What went wrong" Gradle error message to another Job in Azure Pipelines

Context: performing Android instrumentation (UI) tests with Azure Pipelines.
There are 2 jobs: one does the testing (launches an emulator and runs the tests), and the other reports an error if the first job fails for some reason.
I have the following simple setup in my Azure Pipelines:
jobs:
- job: SmokeTesting
  displayName: Smoke testing
  timeoutInMinutes: 60
  pool:
    vmImage: 'macOS-latest'
  steps:
  - script: meta/scripts.sh launch_avd
    displayName: Launch AVD
    workingDirectory: ''
  - task: Gradle@2
    displayName: Run smoke tests
    inputs:
      workingDirectory: ''
      gradleWrapperFile: 'gradlew'
      publishJUnitResults: true
      tasks: ':app:connectedAndroidTest'
- job: ReportFailure
  displayName: Report failure
  dependsOn:
  - SmokeTesting
  condition: or(failed(), canceled())
  steps:
  - script: meta/scripts.sh report_smoke_tests_error
    workingDirectory: ''
    env:
      BUILD_ID: $(Build.BuildId)
It all works as expected: if there is an error, the second job runs. In this case, the log in the Azure Pipelines web UI contains very useful information that I would like to have access to in the second job:
* What went wrong:
Execution failed for task ':app:stripDebugDebugSymbols'.
> No version of NDK matched the requested version 22.0.7026061. Versions available locally: 18.1.5063045, 21.3.6528147, 21.3.6528147
How do I get the "What went wrong" message in my second job?
My idea is to use a multi-stage variable to record the message in the first job and then use it in the second one. Unfortunately, I haven't figured out how to get hold of this message in the first place.
As a workaround, you can use the Build Timeline API to get detailed build information. The API response contains an issues property; you can check there whether errors were recorded:
https://dev.azure.com/{org}/{pro}/_apis/build/builds/3838/timeline/?api-version=6.0
If issues does not contain the error message you want, you can retrieve the content related to the Gradle@2 task in the response body: obtain the log URL from the log property.
By calling this log URL, you can get the log of the Gradle@2 task and then parse it for the desired message.
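For example, a minimal sketch of a script step for the second job (this assumes the job is allowed to use $(System.AccessToken) and that jq is available on the agent; the jq filter is illustrative):

- script: |
    # Query the Timeline API for the current build and print any recorded issue messages.
    curl -s -u ":$(System.AccessToken)" \
      "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)/timeline?api-version=6.0" \
      | jq -r '.records[] | select(.issues != null) | .issues[].message'
  displayName: Print timeline issues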

Azure DevOps agent must finish current build before taking another one

Situation
I'm working on an Azure DevOps Server 2020 instance with only one agent.
I have 2 build pipelines:
Build pipeline (yaml)
Merge pipeline (yaml)
Each pipeline contains multiple stages, and each stage contains a single job (because all tasks of a stage must run on the same agent).
Current behavior
If I run the two pipelines at the same time, the agent runs both in a "fake" parallel fashion that makes the two builds very slow.
Example of agent process order:
build-stage1, build-stage2, merge-stage1, merge-stage2, merge-stage3, merge-stage4, build-stage3...
Wanted behavior
This would not be unexpected if there were more agents than concurrent builds, but that will never be my case.
So I would prefer the agent to be locked to the current build (as Jenkins does out of the box).
Example of wanted agent process order:
build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest), merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
Is it possible to set the agent's work attribution policy?
This occurs because a job is not added to the agent queue while it still depends on something (and by default each stage depends on the previous one).
Using dependsOn: [] tells Azure DevOps that a stage depends on nothing, so all jobs are added to the queue immediately and are executed in FIFO order.
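For illustration, a minimal sketch of that change (stage names borrowed from the example below; note that dependsOn: [] also removes the guarantee that a stage only runs when the previous one succeeded):

stages:
- stage: Build_Stage1
- stage: Build_Stage2
  dependsOn: []  # depends on nothing, so its job is queued immediately
- stage: Build_Stage3
  dependsOn: []  # with a single agent, the queue is still drained in FIFO order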
Agree with Dom.
Based on my test, I could reproduce this behavior, but I am afraid there is no built-in way to lock the agent so that it completes the current pipeline before taking another one.
For a workaround:
You could use Pipeline trigger to trigger the merge Pipeline.
Build Pipeline:
pool:
  name: Default
stages:
- stage: Build_Stage1
  displayName: Stage1
  ....
- stage: Build_Stage2
  displayName: Stage2
  dependsOn: Build_Stage1
  ....
- stage: Build_Stage3
  displayName: Stage3
  dependsOn: Build_Stage2
  ....
Merge Pipeline:
resources:
  pipelines:
  - pipeline: TestTrigger
    source: ABC
    trigger:
      branches:
      - '*'
pool:
  name: Default
stages:
- stage: Merge_Stage1
  displayName: Merge1
  ...
- stage: Merge_Stage2
  displayName: Merge2
  dependsOn: Merge_Stage1
  ...
- stage: Merge_Stage3
  displayName: Merge3
  dependsOn: Merge_Stage2
  ...
In this case, you could queue the Build Pipeline alone; the Merge Pipeline will then be triggered after the Build Pipeline completes.
The process: build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest) -> trigger -> merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
On the other hand, this requirement is a valuable one.
To get this feature, you could add a request for it on our UserVoice site, which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.

Storing Artifacts From a Failed Build

I am running some screen-diffing tests in one of my Cloud Build steps. The tests produce png files that I would like to view after the build, but it appears that artifacts are only uploaded on successful builds.
If my tests fail, the process exits with a non-zero code, which results in this error:
ERROR: build step 0 "gcr.io/k8s-skaffold/skaffold" failed: step exited with non-zero status: 1
which in turn results in another error:
ERROR: (gcloud.builds.submit) build a22d1ab5-c996-49fe-a782-a74481ad5c2a completed with status "FAILURE"
And no artifacts get uploaded.
I added || true after my tests, so it exits successfully, and the artifacts get uploaded.
I want to:
A) Confirm that this behavior is expected
B) Know if there is a way to upload artifacts even if a step fails
Edit:
Here is my cloudbuild.yaml
options:
  machineType: 'N1_HIGHCPU_32'
timeout: 3000s
steps:
- name: 'gcr.io/k8s-skaffold/skaffold'
  env:
  - 'CLOUD_BUILD=1'
  entrypoint: bash
  args:
  - -x # print commands as they are being executed
  - -c # run the following command...
  - build/test/smoke/smoke-test.sh
artifacts:
  objects:
    location: 'gs://cloudbuild-artifacts/$BUILD_ID'
    paths: [
      '/workspace/build/test/cypress/screenshots/*.png'
    ]
Google Cloud Build doesn't allow us to upload artifacts (or run further steps) if a build step fails. This is the expected behavior.
There is already a feature request in the Public Issue Tracker to allow some steps to run even though the build has failed. Please feel free to star it to get all related updates on this issue.
For now, the workaround is, as you mentioned, appending || true after the tests, or using || exit 0 as mentioned in this GitHub issue.
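If you also want the build to end up marked as failed while still keeping the screenshots, one possible variation (a sketch, not an official feature; it reuses the bucket path from the question) is to record the test exit code, upload the artifacts explicitly, and re-raise the failure in a final step:

steps:
- name: 'gcr.io/k8s-skaffold/skaffold'
  entrypoint: bash
  args:
  - -c
  # Run the tests, but record the exit code instead of failing this step.
  - build/test/smoke/smoke-test.sh; echo $? > /workspace/test-exit-code
# Upload the screenshots explicitly, regardless of the test outcome.
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '/workspace/build/test/cypress/screenshots/*.png', 'gs://cloudbuild-artifacts/$BUILD_ID/']
# Fail the build now if the tests failed earlier.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: bash
  args: ['-c', 'exit $(cat /workspace/test-exit-code)']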

How to pass an exit code from custom testing framework to CircleCI in order to fail the step if necessary

I created an automated testing plugin for Godot named WAT. It has a command-line interface that prints 0 (success) or 1 (failure) on the last line of its output when run.
I'm looking for a way to pass that number on to CircleCI so that the step fails if it was 1.
I'm working in a bash environment with the following config.yml
version: 2
jobs:
  build:
    docker:
      - image: barichello/godot-ci:3.1.1
    steps:
      - checkout
      - run:
          name: Run Tests
          command: godot -s addons/WAT/CLI.gd -run_all
You can set OS.exit_code to a non-zero number and the step should fail.
If that is not acceptable, you can parse your output and fail manually from Bash with exit 1.
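For the parsing route, a minimal sketch of the run step (assuming the CLI really prints the 0/1 result as the last line of its output):

- run:
    name: Run Tests
    command: |
      # Capture the output while still showing it in the job log,
      # then read the result from the last line.
      result=$(godot -s addons/WAT/CLI.gd -run_all | tee /dev/stderr | tail -n 1)
      # A non-zero exit makes CircleCI mark the step as failed.
      if [ "$result" != "0" ]; then exit 1; fi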

How to fail Gitlab pipeline that calls another pipeline via API?

I have 2 Gitlab repos:
Project A
Integration tests for Project A
I want to stop the pipeline / build of Project A if the integration tests fail, but currently the Project A pipeline passes even when the integration tests fail.
My .gitlab-ci.yml for Project A defines these 7 stages:
stages:
  - build
  - test
  - publish
  - dev-deployment
  - staging-deployment
  - trigger-integration-tests
  - prod-deployment
The second-to-last stage (trigger-integration-tests) kicks off the integration tests project via the GitLab API, using curl:
trigger-integration-tests:
  stage: trigger-integration-tests
  image: ubuntu:16.04
  script:
    - apt-get update && apt-get install -y curl
    - "curl -X POST -F token=$INTEGRATION_TESTS_TOKEN -F variables[PROJECT_ID]=$CI_PROJECT_ID -F variables[BRANCH_NAME]=$CI_COMMIT_REF_NAME -F ref=master https://gitlab.mycompany.com/api/v4/projects/123/trigger/pipeline"
  allow_failure: false
  only:
    - master
I tried adding the allow_failure: false flag, but that didn't help, so I'm looking for more ideas.
I found the trigger-and-wait technique, but wasn't sure whether there is a simpler solution.
As answered on a previous question, you could do the following:
From the main project, using a Python/Bash script:
Trigger the integration tests pipeline (and capture the pipeline ID)
Poll the status of that pipeline using the captured ID (the status can be running, pending, success, failed, canceled or skipped)
Raise an exception / error if it has failed...
See here for an example python script to achieve this.
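For reference, a minimal Bash sketch of that trigger-and-poll loop (project ID 123, the URL, and $INTEGRATION_TESTS_TOKEN come from the question; $API_READ_TOKEN is a hypothetical variable holding a token with API read access, and jq is assumed to be installed):

trigger-integration-tests:
  stage: trigger-integration-tests
  image: ubuntu:16.04
  script:
    - apt-get update && apt-get install -y curl jq
    - |
      # Trigger the downstream pipeline and capture its ID.
      PIPELINE_ID=$(curl -s -X POST -F token=$INTEGRATION_TESTS_TOKEN -F ref=master \
        "https://gitlab.mycompany.com/api/v4/projects/123/trigger/pipeline" | jq -r '.id')
      # Poll until the pipeline reaches a terminal status.
      while true; do
        STATUS=$(curl -s --header "PRIVATE-TOKEN: $API_READ_TOKEN" \
          "https://gitlab.mycompany.com/api/v4/projects/123/pipelines/$PIPELINE_ID" | jq -r '.status')
        case "$STATUS" in
          success) exit 0 ;;
          failed|canceled|skipped) echo "Integration tests: $STATUS"; exit 1 ;;
          *) sleep 30 ;;
        esac
      done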
