Is this the correct way to write an if..else statement in a cloudbuild.yaml file? - bash

I am trying to deploy a Cloud Function using cloudbuild.yaml. It works fine if I don't use any conditional statement, but I get an error when I execute my cloudbuild.yaml file with an if statement. What is the correct way to write it? Below is my code:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  id: deploy
  args:
  - '-c'
  - 'if [ $BRANCH_NAME != "xoxoxoxox" ]
    then
    [
    'functions', 'deploy', 'groups',
    '--region=us-central1',
    '--source=.',
    '--trigger-http',
    '--runtime=nodejs8',
    '--entry-point=App',
    '--allow-unauthenticated',
    '--service-account=xoxoxoxox#appspot.gserviceaccount.com'
    ]
    fi'
  dir: 'API/groups'
Where am I going wrong?

From the github page, https://github.com/GoogleCloudPlatform/cloud-sdk-docker, the entrypoint is not set to gcloud. So you cannot specify the arguments like that.
Good practice for specifying the directory is to start it from /workspace.
Also, the right way to write the step would be:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  id: deploy
  dir: '/workspace/API/groups'
  entrypoint: bash
  args:
  - '-c'
  - |
    if [ "$BRANCH_NAME" != "xoxoxoxox" ]
    then
      gcloud functions deploy groups \
        --region=us-central1 \
        --source=. \
        --trigger-http \
        --runtime=nodejs8 \
        --entry-point=App \
        --allow-unauthenticated \
        --service-account=xoxoxoxox#appspot.gserviceaccount.com
    fi

I'm not sure you can do this.
In my case, I use the branch selector in the Cloud Build trigger to select which branch (or tag) I want to build, based on a pattern.
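If you prefer to keep that logic out of cloudbuild.yaml entirely, the same branch filter can also be set from the command line; a minimal sketch, assuming a GitHub-connected repository and placeholder owner/repo names:

gcloud builds triggers create github \
  --repo-owner=my-org \
  --repo-name=my-repo \
  --branch-pattern='^master$' \
  --build-config=cloudbuild.yaml

Builds then only start for branches matching the pattern, so no if/else is needed inside the build step.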

I wanted to delete the latest version of each service only if there were more than two previous versions. This was my solution.
args:
  - "-c"
  - |
    if [[ $(gcloud app versions list --format="value(version.id)" --service=MY-SERVICE | wc -l) -ge 2 ]];
    then
      gcloud app versions list --format="value(version.id)" --sort-by="~version.createTime" --service=MY-SERVICE | tail -n 1 | xargs gcloud app versions delete --service=MY-SERVICE --quiet;
    fi
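Before enabling the delete, it can help to preview which version the tail pipeline above would pick; a quick manual check using the same MY-SERVICE placeholder:

gcloud app versions list --format="value(version.id)" --sort-by="~version.createTime" --service=MY-SERVICE | tail -n 1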

Related

How to use an anchor to prevent repetition of code sections?

Say I have a number of jobs that all run a similar series of script steps, but need a few variables that change between them:
test a:
  stage: test
  tags:
    - a
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t a -f Dockerfile.a .

test b:
  stage: test
  tags:
    - b
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t b -f Dockerfile.b .
All I need is to be able to define e.g.
- docker build -t ${WHICH} -f Dockerfile.${WHICH} .
If only I could make an anchor like:
.x: &which_ref
  - echo "env is $(env)"
  - echo etcetera
  - echo and so on
  - docker build -t $WHICH -f Dockerfile.$WHICH .
And include it there:
test a:
  script:
    - export WHICH=a
    <<: *which_ref
This doesn't work, and in a YAML validator I get errors like:
Error: YAMLException: cannot merge mappings; the provided source object is unacceptable
I also tried making an anchor that contains some entries under script inside of it:
.x: &which_ref
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t $WHICH -f Dockerfile.$WHICH .
This means I have to include it one level higher up. It does not error, but all it accomplishes is that the later-declared script section overrides the first one.
So I'm losing hope. It seems like I will just need to abstract the sections away into their own shell scripts and call them with arguments or whatever.
The YAML merge key << is a non-standard extension for YAML 1.1, which was superseded by YAML 1.2 about 14 years ago. Its use is discouraged.
The merge key works on mappings, not on sequences, and it cannot deep-merge. So what you want to do cannot be implemented with it.
Generally, YAML isn't designed to process data; it just loads it. The merge key is an outlier and didn't find its way into the standard for good reasons. You need a pre- or postprocessor for complex processing, and GitLab CI doesn't offer anything besides simple variable expansion, so you're out of luck.
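For completeness, one pattern that does work in plain YAML is to anchor the entire script sequence and reference it as a whole, moving the per-job value into variables; a sketch of that idea (the .which_script name is illustrative, not from the original answer):

.which_script: &which_ref
  - echo "env is $(env)"
  - echo etcetera
  - echo and so on
  - docker build -t $WHICH -f Dockerfile.$WHICH .

test a:
  stage: test
  variables:
    WHICH: a
  script: *which_ref

The alias replaces the whole sequence rather than merging into it, which is why the export line has to become a job-level variable.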

How do I assign exe output to a variable in gitlab ci scripts?

When running my GitLab CI pipeline I need to check whether a specified SVN directory exists.
I was using the script:
variables:
  DIR_CHECK: "default"

stages:
  - setup
  - test
  - otherDebugJob

.csharp:
  only:
    changes:
      - "**/*.cs"
      - "**/*.js"

setup:
  script:
    - $DIR_CHECK = $(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - echo $DIR_CHECK

test:
  script:
    - echo "DIR_CHECK is blank"
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK == ''

otherDebugJob:
  script:
    - echo "DIR_CHECK is not blank"
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK != ''
The svn command works and echoes back the correct reply, but $DIR_CHECK does not get set to anything but the original default. It does not store the string returned by the svn command.
How do I store the string returned by an exe in a variable in GitLab CI?
Test run:
Executing "step_script" stage of the job script 00:00 $ $DIR_CHECK =
$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal
--depth empty) svn: E170000: Illegal repository URL https://server.fsl.local:port/svn/myco/personal/TestNotReal' $ echo
$DIR_CHECK Cleaning up file based variables 00:01 Job succeeded
Passing variables between jobs
Unfortunately, you cannot use the DIR_CHECK variable the way you described. The list of steps to be executed is generated before the steps actually run, which means that for all of the steps DIR_CHECK will be equal to default. First of all, here are a few tips on how you can pass variables between jobs:
First way
You can add the desired command to the before_script section in your .csharp template:
.csharp:
  before_script:
    - export DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
and extend the other jobs with this .csharp template:
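A sketch of a job pulling in the template (the job body is illustrative):

test:
  extends: .csharp
  script:
    - echo $DIR_CHECK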
Second way
You can pass variables between jobs with job artifacts:
setup:
  stage: setup
  script:
    - DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - echo "DIR_CHECK=$DIR_CHECK" > dotenv_file
  artifacts:
    reports:
      dotenv:
        - dotenv_file
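Jobs in later stages then receive DIR_CHECK automatically from the dotenv report; a sketch of the consuming side, reusing the job names from the question:

test:
  stage: test
  script:
    - echo $DIR_CHECK   # populated from the setup job's dotenv report

As far as I know, variables delivered this way are available in script blocks but not when rules are evaluated, which is why the Solution section below switches to in-script checks.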
Third way
You can use trigger jobs or parent/child pipelines to pass variables into downstream pipelines.
staging:
  variables:
    DIR_CHECK: "you are awesome, guys!"
  stage: deploy
  trigger: my/deployment
In the triggered pipeline the variable will exist from the very start, and all the rules will be applied correctly.
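For example, a job in the downstream my/deployment pipeline could then gate itself on the passed value; a sketch:

otherDebugJob:
  script:
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK != ''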
Solution
In your case, if you really don't want to include the otherDebugJob step in your pipeline, you can do the following:
First approach
This is a quite easy way and it will work, although it doesn't look like a best practice. We already know how to pass our DIR_CHECK variable from the setup step; just add a check in the test step's script block:
script:
  - |
    if [ -z "$DIR_CHECK" ]; then
      exit 0
    fi
  - echo "DIR_CHECK is blank"
  - echo $DIR_CHECK
Do almost the same thing for otherDebugJob, but check whether DIR_CHECK is not empty with if [ -n "$DIR_CHECK" ].
This approach is helpful when your pipeline doesn't contain a lot of steps, but a few more steps follow test and otherDebugJob.
Second approach
You can fail your setup step and then handle this failure in the otherDebugJob step:
setup:
  script:
    - DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - |
      if [ -z "$DIR_CHECK" ]; then
        exit 1
      fi

otherDebugJob:
  script:
    - echo "DIR_CHECK is not blank"
  when: on_failure
This approach is useful if you only want to do some debug work after the setup step. After all on_failure jobs have run, the pipeline will be marked as failed and stopped.

Bitbucket pipelines how to merge two variables to produce another variable to be used somewhere else

I am trying to work out a Bitbucket pipeline using bitbucket-pipelines.yml:
image: microsoft/dotnet:sdk

pipelines:
  branches:
    master:
      - step:
          script:
            - dotnet build $PROJECT_NAME
            - export EnvrBuild=Production_$BITBUCKET_BUILD_NUMBER
            - '[ ! -e "$BITBUCKET_CLONE_DIR/$EnvrBuild" ] && mkdir $BITBUCKET_CLONE_DIR/$EnvrBuild'
            - dotnet publish $PROJECT_NAME --configuration Release
            - cp -r $BITBUCKET_CLONE_DIR/$PROJECT_NAME/bin/Release/netcoreapp2.1/publish/** $BITBUCKET_CLONE_DIR/$EnvrBuild
          artifacts:
            - $EnvrBuild/**
I am new to pipelines in Bitbucket. When I echo $EnvrBuild I get the right result, but $EnvrBuild does not contain anything in the subsequent steps and it does not produce any artifacts; however, if I hard-code the values, it works. Is there a way to do something like $BITBUCKET_BUILD_NUMBER+"_"+$BITBUCKET_BRANCH? (I know this is wrong, but you get the idea of what I am trying to achieve.) Thank you in advance.
Variable expansion is not allowed when specifying artifacts; you have to provide a static value. However, you can keep multiple dynamically named subdirectories under a static build directory and capture them with a wildcard. Here is an example:
image: microsoft/dotnet:sdk

pipelines:
  branches:
    master:
      - step:
          script:
            - dotnet build $PROJECT_NAME
            - export EnvrBuild=Production_$BITBUCKET_BUILD_NUMBER
            - '[ ! -e "$BITBUCKET_CLONE_DIR/$EnvrBuild" ] && mkdir $BITBUCKET_CLONE_DIR/$EnvrBuild'
            - dotnet publish $PROJECT_NAME --configuration Release
            - mkdir -p $BITBUCKET_CLONE_DIR/build_dir/$EnvrBuild
            - cp -r $BITBUCKET_CLONE_DIR/$PROJECT_NAME/bin/Release/netcoreapp2.1/publish/** $BITBUCKET_CLONE_DIR/build_dir/$EnvrBuild
          artifacts:
            - build_dir/**
      - step:
          script:
            - export EnvrBuild=Production_$BITBUCKET_BUILD_NUMBER
            - ls build_dir/$EnvrBuild
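As for combining two variables into one value, plain shell interpolation inside the script section is enough; a hedged example of the naming scheme the question mentions:

- export EnvrBuild="${BITBUCKET_BRANCH}_${BITBUCKET_BUILD_NUMBER}"

The artifacts path itself still has to stay static, as described above.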

Run Google Cloud Build Command for Each Result of Array

I have an Nx workspace with multiple Angular apps included. When master is updated in my GitHub repo, I want a build to kick off. That part is easy enough with GCB's triggers. But what I want to happen is to run this command:
npm run affected:apps
on the trigger, and build a Docker image and push it to Google Container registry for each affected app. My cloudbuild.yaml file looks like this so far:
steps:
  - name: 'gcr.io/cloud-builders/git'
    args: ['fetch', '--unshallow']
  - name: node:10.15.1
    entrypoint: npm
    args: ['run affected:apps --base=origin/master --head=HEAD']
That command returns a result like this:
> project-name@0.0.0 affected:apps /Users/username/projects/project-folder
> nx affected:apps
Note: Nx defaulted to --base=master --head=HEAD
my-app-1
I'm not sure what to do with Google Cloud with that result. With a node script, I could do the following to print out an array of affected apps:
const { exec } = require('child_process');

function getApps() {
  exec('npm run affected:apps', (err, out) => {
    if (err) {
      console.log(null);
    } else {
      const lines = out.split('\n');
      const apps = lines[lines.length - 2].split(' ');
      console.log(JSON.stringify(apps));
    }
  });
}

getApps();
That returns an array of the affected apps, and null on error. Still, even with that, I'm not sure what I would do for the next step in Google Cloud Build. With the results of either the command or that script, ideally I'd be able to run a docker build command, like this:
docker build --file ./:loop variable:/Dockerfile
where :loop variable: is the name of an affected app. I'd like to do that for each value in the array, and not do anything if, for some reason, the command returns no affected apps.
Any ideas on how to use Google Cloud Build with Nx Workspaces? Or, if you've just got Google Cloud Build experience and know what my next step should be, that'd be great.
Continuing @chinoche's comment, here is an example of how you could save the list of affected apps to an affected.txt file:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      IFS=' ' read -a apps <<< $(npx nx affected:apps --base=origin/master~1 --plain)
      for app in "${apps[@]}"
      do
        echo $app
      done >> affected.txt
The next step can read the file and run any other commands, e.g. to create a Docker image:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        docker build -t gcr.io/$PROJECT_ID/$project -f <path-to>/Dockerfile .
      done < affected.txt
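The question also wants each image pushed to Container Registry. Because the images: field needs names that are known up front, one option (my assumption, not part of the original answer) is to push inside the same loop:

- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        docker build -t gcr.io/$PROJECT_ID/$project -f <path-to>/Dockerfile .
        docker push gcr.io/$PROJECT_ID/$project
      done < affected.txt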
One trick might be to create a separate cloudbuild.yaml file for each project and then trigger a new Cloud Build process for each affected project. That allows a completely different build process per project.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      while IFS= read -r project; do
        gcloud builds submit --config ./<path-to>/$project/project-specific-cloudbuild.yaml .
      done < affected.txt
If you are able to get the affected apps with the node script, I'd suggest writing the list of affected apps to a file in a Cloud Build custom step. The file will be written to the "/workspace" directory and will be available to any other custom step executed later; with this you should be able to run the docker build command.
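A minimal sketch of what such a step could look like, assuming the node script above is saved in the repo as get-apps.js (the file name is just an example) and dependencies are installed first:

- name: node:10.15.1
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      npm ci
      node get-apps.js > /workspace/affected.json

Later steps can then read /workspace/affected.json the same way the steps above read affected.txt.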

Gitlab-ci: extend script section

I have a Unity CI project.
My .gitlab-ci.yml contains a base .build job with one script command. I also have multiple jobs for building each platform, which extend the base .build. I want to execute some platform-specific commands for Android, so I created a separate job, generate-android-apk. But if it fails, the pipeline fails too (I know about allow_failure). Is it possible to extend the script section between jobs without copy-pasting?
UPDATE:
Since GitLab 13.9 it is possible to use !reference tags from other jobs or "templates" (hidden jobs, prefixed with a dot):
actual_job:
  script:
    - echo doing something

.template_job:
  after_script:
    - echo done with something

job_using_references_from_other_jobs:
  script:
    - !reference [actual_job, script]
  after_script:
    - !reference [.template_job, after_script]
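Applied to the question, the Android job could reuse the base job's script and append its own commands; a sketch that assumes the .build job from the question (the extra command is only illustrative):

generate-android-apk:
  script:
    - !reference [.build, script]
    - echo run android-specific packaging here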
Thanks to @amine-zaine for the update.
FIRST APPROACH:
You can achieve modular script sections by utilizing 'literal blocks' (using |) like so:
.template1: &template1 |
  echo install

.template2: &template2 |
  echo bundle

testJob:
  script:
    - *template1
    - *template2
See Source
ANOTHER SOLUTION:
Since GitLab 11.3 it is possible to use extends, which could also work for you:
.template:
  script: echo test template
  stage: testStage
  only:
    refs:
      - branches

rspec:
  extends: .template
  after_script:
    - echo test job
  only:
    variables:
      - $TestVar
See Docs