I have set up a pair of Azure pipelines such that successful completion of the first pipeline triggers the second. The first pipeline publishes a small JSON file to Azure Artifacts, and the second pipeline downloads that JSON file.
Here are the two pipelines.
Pipeline one:
# Pipeline one
trigger:
- '*'
pool:
  name: 'Default'
  demands:
  # I use this property to make sure it runs on the correct build agent
  - Can_do_builds -equals true
steps:
- script: |
    echo This is Pipeline One.
    echo Running on $(Agent.MachineName)
    echo Running in $(Pipeline.Workspace)
  displayName: 'Display Pipeline One info'
- powershell: |
    $json = @"
    {
      'build_id': '$(Build.BuildID)',
      'build_number': '$(Build.BuildNumber)',
      'build_type': '$(Build.Reason)',
      'source_repo': '$(Build.Repository.Name)',
      'source_branch': '$(Build.SourceBranchName)',
      'source_commit_id': '$(Build.SourceVersion)'
    }
    "@
    $f = '$(Pipeline.Workspace)/s/dropfile.json'
    Add-Content -Path $f -Value $json
    Write-Host Contents of $f
    Write-Host "================"
    Get-Content $f
  displayName: Create the dropfile
- publish: dropfile.json
  artifact: theDropfile
  displayName: Publish the dropfile
Pipeline two:
# Pipeline two
trigger:
- master
pool:
  name: 'Default'
  demands:
  # I use this property to make sure it runs on the other build agent
  - Can_do_integration_tests -equals true
resources:
  pipelines:
  - pipeline: pipeline-one
    source: my_workspace.pipeline-one
    trigger:
      enabled: true
      branches:
        include:
        - master
        - develop
        - release_*
        - passing-info-btwn-pipelines
steps:
- script: |
    echo This is Pipeline Two.
    echo Running on $(Agent.MachineName)
    echo Running in $(Pipeline.Workspace)
    echo Build reason is $(Build.Reason)
    echo Triggering resource is $(Resources.TriggeringAlias)
    echo Triggering category is $(Resources.TriggeringCategory)
  displayName: 'Display Pipeline Two info'
# - task: DownloadPipelineArtifact@2
#   displayName: Download the dropfile
#   inputs:
#     source: 'specific'
#     project: 'QA'
#     pipeline: 'my_workspace.pipeline-one' # if it will accept strings
#     # pipeline: 12 # if it won't accept strings
#     preferTriggeringPipeline: 'true'
#     runVersion: 'latest'
#     artifact: theDropfile
#     path: '$(Pipeline.Workspace)/s/'
- download: pipeline-one
  artifact: theDropfile
  patterns: '**/*.json'
  displayName: Download the dropfile the other way
- powershell: |
    $f = "$(Pipeline.Workspace)/s/dropfile.json"
    if( Test-Path $f ) {
      Get-Content $f
    } else {
      Write-Host "$f not found"
    }
  displayName: Read the dropfile
Everything worked fine until our IT gang did two things:
Removed Just-In-Time.
Added our two VMs (self-hosted VMs, running Windows Server 2016 I think) to our companyname.local domain.
Pipeline one (the publishing one) still works. Every run publishes the artifact. I can verify that by navigating through the build log to the artifact link and downloading it.
But pipeline two (the downloading one) doesn't work anymore. It tries for 18 minutes to download the artifact, and then gives up. The logfile doesn't give much information, but it looks like the Azure Artifacts server is rejecting the agent's HTTP request. The entire contents of the logfile are as follows:
Starting: Download the dropfile the other way
==============================================================================
Task : Download pipeline artifact
Description : Download a named artifact from a pipeline to a local path
Version : 1.198.0
Author : Microsoft Corporation
Help : Download a named artifact from a pipeline to a local path
==============================================================================
Download from the specified build: #4927
Download artifact to: E:\acmbuild1\_work\5/pipeline-one/theDropfile
Using default max parallelism.
Max dedup parallelism: 192
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
DedupManifestArtifactClient will correlate http requests with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
Minimatch patterns: [**/*.json]
DedupManifestArtifactClient will correlate http requests with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
Minimatch patterns: [**/*.json]
DedupManifestArtifactClient will correlate http requests with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
Minimatch patterns: [**/*.json]
DedupManifestArtifactClient will correlate http requests with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
Minimatch patterns: [**/*.json]
ApplicationInsightsTelemetrySender correlated 2 events with X-TFS-Session 05a9e36f-885e-4a2b-9944-a4cfa8cc11f3
##[error]No such host is known.
Finishing: Download the dropfile the other way
At first I thought that the failure was with the DownloadPipelineArtifact@2 task, which is why I tried using the download task instead. But I believe they're both the same code under the hood. In any case, the failure mode and the error message are the same.
What is causing the download failure? How can I fix it -- or how can the IT team fix it?
"No such host is known" in your logfile suggests that the VMs, after being joined to your company domain, can no longer resolve or reach Azure DevOps.
You may ask the IT team to add the required IP addresses and URLs to the allowlist. The common domain URLs are dev.azure.com and *.dev.azure.com; artifact downloads additionally go through Azure Blob Storage endpoints under *.blob.core.windows.net.
Please refer to the doc: Allowed address lists and network connections
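To confirm the diagnosis from the agent itself, a quick check along these lines can be added to pipeline two before the download step. This is only a sketch, and the host list is an assumption; take the full list from the allowlist doc:
- powershell: |
    # Check whether the agent can resolve the Azure DevOps hosts.
    # Extend the list with the *.blob.core.windows.net endpoints from the doc.
    foreach ($h in @('dev.azure.com')) {
      try {
        Resolve-DnsName $h -ErrorAction Stop | Out-Null
        Write-Host "$h resolves"
      } catch {
        Write-Host "$h does NOT resolve: $($_.Exception.Message)"
      }
    }
  displayName: Check Azure DevOps name resolution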
Related
I am learning how to create an Azure pipeline and ran into the following error:
The pipeline is not valid. Job Phase_1: Step
AzureResourceGroupDeployment input ConnectedServiceName expects a
service connection of type AzureRM but the provided service connection
"MY-SERVICE-CONNECTION-NAME" is of type generic.
What am I missing here?
azure-pipelines.yml
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - cosmos
  batch: True
jobs:
- job: Phase_1
  displayName: Phase 1
  cancelTimeoutInMinutes: 1
  pool:
    vmImage: ubuntu-latest
  steps:
  - checkout: self
  - task: AzureResourceGroupDeployment@2
    displayName: 'Azure Deployment:Create Or Update Resource Group action on DISPLAY-NAME'
    inputs:
      # azureSubscription: 'SUBSCRIPTION'
      ConnectedServiceName: MY-SERVICE-CONNECTION-NAME
      resourceGroupName: DISPLAY-NAME
      location: West US # TBD
      csmFile: cosmos/deploy.json
      csmParametersFile: cosmos/parameters-dev.json
      deploymentName: DEPLOYMENT-NAME
I tried values from "service connections" but I am not sure what the issue is here.
The error message is telling you the exact problem. Your service connection needs to be an Azure Resource Manager service connection. Create a service connection of the appropriate type.
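For reference, once an Azure Resource Manager service connection exists, the task inputs barely change. A sketch, where MY-ARM-SERVICE-CONNECTION is a placeholder for the new connection's name (azureSubscription is the documented alias of ConnectedServiceName):
- task: AzureResourceGroupDeployment@2
  displayName: Azure Deployment
  inputs:
    # Must reference a service connection of type Azure Resource Manager
    azureSubscription: MY-ARM-SERVICE-CONNECTION
    resourceGroupName: DISPLAY-NAME
    location: West US
    csmFile: cosmos/deploy.json
    csmParametersFile: cosmos/parameters-dev.json
    deploymentName: DEPLOYMENT-NAME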
I can reproduce the issue:
As Daniel said, this is caused by the service connection type.
From this document you can learn what the parameters of the task are:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceGroupDeploymentV2/README.md#parameters-of-the-task
Let me share a little trick that can help you avoid this type of problem in the future: when you type '- task: sometask@version' in the correct place in the pipeline's YAML file in DevOps, you will see a 'Settings' button in the upper left. Click it and you can set the values through the UI, which filters the appropriate options for you:
Context: performing Android instrumentation (UI) tests with Azure Pipelines.
There are 2 jobs: one does the testing (launches an emulator and runs the tests), and the other job reports an error if the previous job fails for some reason.
I have the following simple setup in my Azure Pipelines:
jobs:
- job: SmokeTesting
  displayName: Smoke testing
  timeoutInMinutes: 60
  pool:
    vmImage: 'macOS-latest'
  steps:
  - script: meta/scripts.sh launch_avd
    displayName: Launch AVD
    workingDirectory: ''
  - task: Gradle@2
    displayName: Run smoke tests
    inputs:
      workingDirectory: ''
      gradleWrapperFile: 'gradlew'
      publishJUnitResults: true
      tasks: ':app:connectedAndroidTest'
- job: ReportFailure
  displayName: Report failure
  dependsOn:
  - SmokeTesting
  condition: or(failed(), canceled())
  steps:
  - script: meta/scripts.sh report_smoke_tests_error
    workingDirectory: ''
    env:
      BUILD_ID: $(Build.BuildId)
It all works as expected: if there is an error, the second job is run. In this case, the log in Azure Pipelines Web contains very useful information, that I would like to have access to in the second job:
* What went wrong:
Execution failed for task ':app:stripDebugDebugSymbols'.
> No version of NDK matched the requested version 22.0.7026061. Versions available locally: 18.1.5063045, 21.3.6528147, 21.3.6528147
How do I get the "What went wrong" message in my second job?
My idea is to use a multi-job output variable to record the message in the first job and then use it in the second one. Unfortunately, I haven't figured out how to get this message in the first place.
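(For what it's worth, the output variable mechanics themselves look like the sketch below, with illustrative names; the part I'm missing is how to capture the Gradle failure message to store in the variable.)
jobs:
- job: SmokeTesting
  steps:
  # A named step can expose an output variable to later jobs
  - script: echo "##vso[task.setvariable variable=errMsg;isOutput=true]placeholder"
    name: setMsg
- job: ReportFailure
  dependsOn: SmokeTesting
  condition: or(failed(), canceled())
  variables:
    errMsg: $[ dependencies.SmokeTesting.outputs['setMsg.errMsg'] ]
  steps:
  - script: echo "$(errMsg)"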
As a workaround, you can use the Build Timeline API to get detailed build information. The API response contains the property issues; you can check there whether any errors were recorded.
https://dev.azure.com/{org}/{pro}/_apis/build/builds/3838/timeline/?api-version=6.0
If issues does not contain the error message you want, you can retrieve the content related to the Gradle@2 task in the response body and obtain the log URL from its log property.
By calling this log URL, you can get the log of the Gradle@2 task, and then parse the log to get the desired message.
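For example, a step along these lines in the second job could pull the error issues out of the timeline. This is only a sketch: it assumes the job is allowed to use the OAuth token $(System.AccessToken), and the record filtering is illustrative.
- powershell: |
    # Query the timeline of the current build via the REST API
    $url = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)/timeline?api-version=6.0"
    $headers = @{ Authorization = "Bearer $(System.AccessToken)" }
    $timeline = Invoke-RestMethod -Uri $url -Headers $headers
    # Print every error issue recorded by any task in this build
    $timeline.records | Where-Object { $_.issues } | ForEach-Object {
      $_.issues | Where-Object { $_.type -eq 'error' } | ForEach-Object {
        Write-Host $_.message
      }
    }
  displayName: Dump error issues from the build timeline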
Situation
I'm working on an Azure DevOps Server 2020 with only one agent.
I have 2 build pipelines:
Build pipeline (yaml)
Merge pipeline (yaml)
Each pipeline contains multiple stages, each with only one job (because all the tasks of a stage must run on the same agent).
Current behavior
If I run the two pipelines at the same time, the agent runs the two in a "fake" parallel fashion that makes both builds very slow.
Example of the agent's processing order:
build-stage1, build-stage2, merge-stage1, merge-stage2, merge-stage3, merge-stage4, build-stage3...
Wanted behavior
This would not be unexpected if we had more agents than concurrent builds, but that will never be my case.
So I would prefer to lock the agent to the current build (like the built-in behavior in Jenkins).
Example of the wanted agent processing order:
build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest), merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
Is it possible to set the agent work attribution policy?
This occurs because a job is not added to the agent queue while it still depends on something (and by default each stage depends on the previous stage).
Using dependsOn: [] lets Azure DevOps know that a stage depends on nothing, so all the jobs are added to the queue up front and are executed in FIFO order, as sketched below.
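A sketch of what that looks like (stage names assumed); note that with more than one agent an empty dependsOn also removes the ordering guarantee between the stages:
stages:
- stage: Build_Stage1
  jobs: ...
- stage: Build_Stage2
  dependsOn: []  # queued immediately instead of waiting for Build_Stage1
  jobs: ...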
Agree with Dom.
Based on my test, I could reproduce this behavior, but I am afraid there is no method to lock the agent so that it completes one pipeline before taking another one.
As a workaround:
You could use a pipeline trigger to trigger the merge pipeline.
Build Pipeline:
pool:
  name: Default
stages:
- stage: Build_Stage1
  displayName: Stage1
  ....
- stage: Build_Stage2
  displayName: Stage2
  dependsOn: Build_Stage1
  ....
- stage: Build_Stage3
  displayName: Stage3
  dependsOn: Build_Stage2
  ....
Merge Pipeline:
resources:
  pipelines:
  - pipeline: TestTrigger
    source: ABC
    trigger:
      branches:
      - '*'
pool:
  name: Default
stages:
- stage: Merge_Stage1
  displayName: Merge1
  ...
- stage: Merge_Stage2
  displayName: Merge2
  dependsOn: Merge_Stage1
  ...
- stage: Merge_Stage3
  displayName: Merge3
  dependsOn: Merge_Stage2
  ...
In this case, you could queue the Build Pipeline alone; the Merge Pipeline will then be triggered after the Build Pipeline completes.
The process: build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest) -> Trigger -> merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
On the other hand, this requirement is valuable.
To get this feature, you could add your request for this feature on our UserVoice site, which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.
I am running some screen-diffing tests in one of my Cloud Build steps. The tests produce PNG files that I would like to view after the build, but it appears that artifacts are only uploaded for successful builds.
If my test fail, the process exits with a non-zero code, which results in this error:
ERROR: build step 0 "gcr.io/k8s-skaffold/skaffold" failed: step exited with non-zero status: 1
Which further results in another error
ERROR: (gcloud.builds.submit) build a22d1ab5-c996-49fe-a782-a74481ad5c2a completed with status "FAILURE"
And no artifacts get uploaded.
I added || true after my tests, so the step exits successfully and the artifacts get uploaded.
I want to:
A) Confirm that this behavior is expected
B) Know if there is a way to upload artifacts even if a step fails
Edit:
Here is my cloudbuild.yaml
options:
  machineType: 'N1_HIGHCPU_32'
timeout: 3000s
steps:
- name: 'gcr.io/k8s-skaffold/skaffold'
  env:
  - 'CLOUD_BUILD=1'
  entrypoint: bash
  args:
  - -x # print commands as they are being executed
  - -c # run the following command...
  - build/test/smoke/smoke-test.sh
artifacts:
  objects:
    location: 'gs://cloudbuild-artifacts/$BUILD_ID'
    paths: [
      '/workspace/build/test/cypress/screenshots/*.png'
    ]
Google Cloud Build doesn't allow us to upload artifacts (or run some steps) if a build step fails. This is the expected behavior.
There is already a feature request in the Public Issue Tracker to allow running some steps even though the build has finished or failed. Please feel free to star it to get all the related updates on this issue.
A workaround for now is, as you mentioned, using || true after the tests, or || exit 0 as mentioned in this GitHub issue.
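Applied to the step from the question, the workaround is just a tweak to the bash command. A sketch (note it masks the test failure, so the build will always report success):
steps:
- name: 'gcr.io/k8s-skaffold/skaffold'
  entrypoint: bash
  args:
  - -c
  # '|| true' swallows the non-zero exit status so the artifacts section still runs
  - 'build/test/smoke/smoke-test.sh || true'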
I am trying to integrate the CircleCI plugin with my spring-pet-clinic project. I was following the instructions on the CircleCI web page. I have created a .circleci folder inside my project root folder.
Inside .circleci I added a config.yml file and copy-pasted the config from the CircleCI page.
My config was like this:
# Use the latest 2.1 version of CircleCI pipeline processing engine,
# see https://circleci.com/docs/2.0/configuration-reference/
version: 2.1

# Use a package of configuration called an orb, see
# https://circleci.com/docs/2.0/orb-intro/
orbs:
  # Declare a dependency on the welcome-orb
  welcome: circleci/welcome-orb@0.3.1

# Orchestrate or schedule a set of jobs, see
# https://circleci.com/docs/2.0/workflows/
workflows:
  # Name the workflow "Welcome"
  Welcome:
    # Run the welcome/run job in its own container
    jobs:
    - welcome/run
After I ran the project, CircleCI threw an error, specifically this one: "Config Processing Error: Don't rerun"
#!/bin/sh -eo pipefail
# No configuration was found in your project. Please refer to
# https://circleci.com/docs/2.0/ to get started with your
# configuration.
#
# -------
# Warning: This configuration was auto-generated to show you the
# message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
The Spin Up Environment step's log looks like this:
Build-agent version 1.0.10572-3ce00c85 (2019-04-15T22:09:28+0000)
Docker Engine Version: 17.05.0-ce
Kernel Version: Linux b0a81c56acff 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 18 15:02:05 UTC 2019 x86_64 Linux
Starting container bash:4.4.19
  using image bash@sha256:9f0a4aa3c9931bd5fdda51b1b2b74a0398a8eabeaf9519d807e010b9d9d41993
Using build environment variables
  BASH_ENV=/tmp/.bash_env-5cbebf83d4b030000849b60f-0-build
  CI=true
  CIRCLECI=true
  CIRCLE_BRANCH=master
  CIRCLE_BUILD_NUM=5
  CIRCLE_BUILD_URL=https://circleci.com/gh/sajmon2325/Spring-Pet-Clinic/5
  CIRCLE_COMPARE_URL=
  CIRCLE_JOB=Build Error
  CIRCLE_NODE_INDEX=0
  CIRCLE_NODE_TOTAL=1
  CIRCLE_PREVIOUS_BUILD_NUM=4
  CIRCLE_PROJECT_REPONAME=Spring-Pet-Clinic
  CIRCLE_PROJECT_USERNAME=sajmon2325
  CIRCLE_REPOSITORY_URL=git@github.com:sajmon2325/Spring-Pet-Clinic.git
  CIRCLE_SHA1=48f6db114b41c338e606de32d8648c64ba5119fd
  CIRCLE_SHELL_ENV=/tmp/.bash_env-5cbebf83d4b030000849b60f-0-build
  CIRCLE_STAGE=Build Error
  CIRCLE_USERNAME=sajmon2325
  CIRCLE_WORKFLOW_ID=2789d93e-f1e4-4c81-93f1-846f7d38c107
  CIRCLE_WORKFLOW_JOB_ID=670105ca-617e-445e-9b5e-6ac57f6af8da
  CIRCLE_WORKFLOW_UPSTREAM_JOB_IDS=
  CIRCLE_WORKFLOW_WORKSPACE_ID=2789d93e-f1e4-4c81-93f1-846f7d38c107
  CIRCLE_WORKING_DIRECTORY=~/project
Using environment variables from project settings and/or contexts
  CIRCLE_JOB=**REDACTED**
So at first I thought that I only had a skeleton of the CircleCI configuration, which is why I edited my config.yml file to look like this (the current version):
# Java Maven CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-java/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/openjdk:11-browsers-legacy

      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # - image: circleci/postgres:9.4

    working_directory: ~/repo

    environment:
      # Customize the JVM maximum heap limit
      MAVEN_OPTS: -Xmx3200m

    steps:
      - checkout

      # Download and cache dependencies
      - restore_cache:
          keys:
          - v1-dependencies-{{ checksum "pom.xml" }}
          # fallback to using the latest cache if no exact match is found
          - v1-dependencies-

      - run: mvn install -DskipTests
      - run: mvn dependency:go-offline

      - save_cache:
          paths:
          - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      # run tests!
      - run: mvn integration-test
But even this is not working. I still have the same error:
#!/bin/sh -eo pipefail
# No configuration was found in your project. Please refer to https://circleci.com/docs/2.0/ to get started with your configuration.
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
I just need to successfully integrate the CircleCI plugin with my project. If you need to see my repo, here is the link: https://github.com/sajmon2325/Spring-Pet-Clinic.git
The problem is that .circleci is not in the root of the repository. It is currently in sfg-pet-clinic/, and the CircleCI build process won't find it there.
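For illustration, a minimal fix keeps the file content and moves it to the repository root. A sketch based on the first config from the question:
# .circleci/config.yml -- must live at the repository root, not under sfg-pet-clinic/
version: 2.1
orbs:
  welcome: circleci/welcome-orb@0.3.1
workflows:
  Welcome:
    jobs:
    - welcome/run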