Use $CI_COMMIT_BRANCH in script - yaml

In YAML, I have to:
- create a directory like public/branch_name
- then copy some.html there
- then publish it in artifacts
GenerateSomeReport:
  ...
  script:
    - mkdir -p public/$CI_COMMIT_BRANCH
    - cp -a some.html public/$CI_COMMIT_BRANCH/
  rules:
    - when: manual
  artifacts:
    when: always
    paths:
      - public
What am I doing wrong? I do not see the sub-folder in the artifacts; some.html ends up at public/some.html instead.
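One likely cause (an assumption, since the pipeline trigger is not shown) is that CI_COMMIT_BRANCH is only set in branch pipelines; in merge request or tag pipelines it is empty, so public/$CI_COMMIT_BRANCH collapses to public/. A minimal diagnostic sketch:

GenerateSomeReport:
  script:
    # If this prints an empty value, the job is not running in a branch
    # pipeline and the target path collapses to "public/".
    - echo "CI_COMMIT_BRANCH='${CI_COMMIT_BRANCH}'"
    - mkdir -p "public/${CI_COMMIT_BRANCH:-no-branch}"
    - cp -a some.html "public/${CI_COMMIT_BRANCH:-no-branch}/"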

GitHub Actions: branch-dependent secrets

So I have two secrets: DEV_SERVER_IP and MASTER_SERVER_IP.
In main.yml I need something like this:
run: echo "::set-env name=BRANCH_NAME::$(echo ${GITHUB_REF#refs/heads/} | sed 's/\//_/g')"
run: ssh-keyscan -H ${{ secrets.BRANCH_NAME_SERVER_IP }} >> ~/.ssh/known_hosts
but I am getting an error:
env:
  BRANCH_NAME: dev
Error: Input required and not supplied: key
I need something like ssh-keyscan -H ${{ secrets.${BRANCH_NAME}_SERVER_IP }} here.
How can I fix this?
You're trying to use shell-style logic inside a GitHub context expansion (${{ ... }}), which won't work. Just move all your logic into your shell script instead:
name: Example
on:
  push:
jobs:
  example:
    runs-on: ubuntu-latest
    steps:
      - name: get target ip
        env:
          DEV_SERVER_IP: ${{ secrets.DEV_SERVER_IP }}
          MAIN_SERVER_IP: ${{ secrets.MAIN_SERVER_IP }}
        run: |
          branch_name=$(sed 's|/|_|g' <<< ${GITHUB_REF#refs/heads/})
          target="${branch_name^^}_SERVER_IP"
          mkdir -p ~/.ssh
          ssh-keyscan -H ${!target} >> ~/.ssh/known_hosts
          cat ~/.ssh/known_hosts
In the above workflow, the expression ${branch_name^^} is a bash expansion that returns the value of $branch_name in uppercase, and ${!target} is a bash indirect expansion that returns the value of the variable whose name is stored in $target.
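For illustration, the same two expansions in plain bash, outside any workflow (bash 4 or newer is needed for the ^^ operator):

DEV_SERVER_IP=203.0.113.10             # stands in for the secret value
branch_name=dev
target="${branch_name^^}_SERVER_IP"    # -> DEV_SERVER_IP
echo "${!target}"                      # -> 203.0.113.10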
Note that I'm not using your "set the BRANCH_NAME environment variable" task because the ::set-env command is disabled by default for security reasons.
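If later steps do need the branch name as an environment variable, the supported replacement for ::set-env is appending to the $GITHUB_ENV file; a minimal sketch of such a step:

      - name: export branch name
        run: |
          branch_name=$(sed 's|/|_|g' <<< ${GITHUB_REF#refs/heads/})
          # Variables written to $GITHUB_ENV are visible to subsequent steps.
          echo "BRANCH_NAME=${branch_name}" >> "$GITHUB_ENV"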

Azure Pipeline set of tasks for each folder in root DIR of repo

I have a requirement in which
for each top level folder in the azure repo:
print(foldername)
execute an entire set of tasks or stages of tasks around pylint and other various stuff
I am just trying to save the folder names across the whole pipeline, but am having issues retrieving and saving them...
My YAML file:
trigger:
  branches:
    include: [ '*' ]
pool:
  vmImage: ubuntu-latest
stages:
  - stage: Gather_Folders
    displayName: "Gather Folders"
    jobs:
      - job: "get_folder_names"
        displayName: "Query Repo for folders"
        steps:
          - bash: echo $MODEL_NAMES
            env:
              MODEL_NAMES: $(ls -d -- */)
output
Generating script.
Script contents:
echo $MODEL_NAMES
========================== Starting Command Output ===========================
/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/jflsakjfldskjf.sh
$(ls -d -- */)
Finishing: Bash
I checked, and the variable just takes the literal command itself instead of its output... What am I missing here?
I was hoping to inject the folder names into a pipeline variable, then somehow execute a stage or set of stages in parallel for each folder.
For this issue, Krzysztof Madej gave an answer in this ticket. To get directories in a given folder you can use the following script:
- task: PowerShell@2
  displayName: Get all directories of $(Build.SourcesDirectory) and assign to variable
  inputs:
    targetType: 'inline'
    script: |
      $arr = Get-ChildItem '$(Build.SourcesDirectory)' |
             Where-Object {$_.PSIsContainer} |
             Foreach-Object {$_.Name}
      echo "##vso[task.setvariable variable=arr;]$arr"
- task: PowerShell@2
  displayName: List all directories from variable
  inputs:
    targetType: 'inline'
    script: |
      echo '$(arr)'
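As to why the original attempt printed the literal command: $( ... ) in an env: mapping is Azure Pipelines macro syntax for pipeline variables, not shell command substitution, so the shell never executes it. If Bash is preferred over PowerShell, a roughly equivalent sketch that runs the command inside the script and publishes the result with a logging command:

- bash: |
    # Command substitution runs in the shell at job run time.
    model_names=$(ls -d -- */ | tr '\n' ' ')
    echo "Found: $model_names"
    echo "##vso[task.setvariable variable=MODEL_NAMES]$model_names"
  displayName: Gather top-level folders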

using for-loop in azure pipeline jobs

I'm going to use a for-loop that scans the files (value-f1.yaml, values-f2.yaml, ...) in a folder, and each time uses the filename as a variable and runs an Azure Pipelines job to deploy the Helm chart based on that values file. The folder is located in the GitHub repository. So I'm thinking of something like this:
pipeline.yaml
stages:
  - stage: Deploy
    variables:
      azureResourceGroup: ''
      kubernetesCluster: ''
      subdomain: ''
    jobs:
      ${{ each filename in /myfolder/*.yaml }}:
        valueFile: $filename
        - template: Templates/deploy-helmchart.yaml@pipelinetemplates
deploy-helmchart.yaml
jobs:
  - job: Deploy
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - task: HelmInstaller@1
        displayName: 'Installing Helm'
        inputs:
          helmVersionToInstall: '2.15.1'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: HelmDeploy@0
        displayName: 'Initializing Helm'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'init'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: PowerShell@2
        displayName: 'Fetching GitTag'
        inputs:
          targetType: 'inline'
          script: |
            # Write your PowerShell commands here.
            Write-Host "Fetching the latest GitTag"
            $gt = git describe --abbrev=0
            Write-Host "##vso[task.setvariable variable=gittag]$gt"
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: Bash@3
        displayName: 'Fetching repo-tag'
        inputs:
          targetType: 'inline'
          script: |
            echo GitTag=$(gittag)
            echo BuildID=$(Build.BuildId)
            echo SourceBranchName=$(Build.SourceBranchName)
            echo ClusterName=$(kubernetesCluster)
      - task: HelmDeploy@0
        displayName: 'Upgrading helmchart'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'upgrade'
          chartType: 'FilePath'
          chartPath: $(chartPath)
          install: true
          releaseName: $(releaseName)
          valueFile: $(valueFile)
          arguments: '--set image.tag=$(gittag) --set subdomain=$(subdomain)'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
Another thing: do the jobs get access to the GitHub repo by default, or do I need to do something at the job level?
Besides, how can I use a for-loop in the job for this case?
Any help would be appreciated.
Updated after getting comments from @Leo
Here is a PowerShell task that I added in deploy-helmchart.yaml for fetching the files from a folder in GitHub.
- task: PowerShell@2
  displayName: 'Fetching Files'
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Fetching values files"
      cd myfolder
      $a = git ls-files
      foreach ($i in $a) {
        Write-Host "##vso[task.setvariable variable=filename]$i"
        Write-Host "printing" $i
      }
Now the question is: how can I run the HelmDeploy@0 task for each file using parameters?
Do the jobs get access to the GitHub repo by default, or do I need to do something at the job level?
The answer is yes.
We could add a command-line task in the job (e.g. job1) to clone the GitHub repository with a GitHub PAT; then we can access those files (value-f1.yaml, values-f2.yaml, ...) in $(Build.SourcesDirectory):
git clone https://<GithubPAT>@github.com/XXXXX/TestProject.git
Besides, how can I use a for-loop in the job for this case?
You could create a template which will have a set of actions, and pass parameters across during your build, like:
deploy-helmchart.yaml:

parameters:
  param: []

steps:
  - ${{ each filename in parameters.param }}:
      - script: 'echo ${{ filename }}'

pipeline.yaml:

steps:
  - template: deploy-helmchart.yaml
    parameters:
      param: ["filename1", "filename2", "filename3"]
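Building on that pattern, a rough sketch (parameter and folder names are illustrative, not from the original post) of stamping out one HelmDeploy@0 step per value file inside the template; note that template expressions expand at compile time, so the list of files must already be known when the pipeline is queued:

parameters:
  valueFiles: []

steps:
  - ${{ each valueFile in parameters.valueFiles }}:
      - task: HelmDeploy@0
        displayName: 'Upgrading helmchart with ${{ valueFile }}'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'upgrade'
          chartType: 'FilePath'
          chartPath: $(chartPath)
          releaseName: $(releaseName)
          valueFile: 'myfolder/${{ valueFile }}'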
Check the document Solving the looping problem in Azure DevOps Pipelines for some more details.
Command line to get the latest file name in the folder:
FOR /F "delims=|" %%I IN ('DIR "$(Build.SourcesDirectory)\*.txt*" /B /O:D') DO SET NewestFile=%%I
echo "##vso[task.setvariable variable=NewFileName]%NewestFile%"
Update:
Now the question is: how can I run the HelmDeploy@0 task for each file using parameters?
It depends on whether your HelmDeploy task has an option to accept the filename parameter.
As I said before, we could use the following YAML to invoke the template with parameters:
- template: deploy-helmchart.yaml
  parameters:
    param: ["filename1", "filename2", "filename3"]
But if the HelmDeploy task had no option that accepts such a parameter, we could not run HelmDeploy@0 for each file this way.
Then I checked HelmDeploy@0 and found there is only one option that can accept extra Helm command parameters (the arguments input used in the job above).
So the answer depends on whether your file name can be passed as part of a Helm command: if it cannot, you cannot run HelmDeploy@0 for each file using parameters; if it can, you can do it.
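For example (a sketch along the same lines, with the connection, chart and release inputs omitted for brevity), a value file name coming from a template parameter can be handed to Helm through that arguments input:

- task: HelmDeploy@0
  inputs:
    # ...connection, chart and release inputs as in the job above...
    command: 'upgrade'
    arguments: '--values myfolder/${{ valueFile }} --set image.tag=$(gittag)'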
Please check the official document Templates for some more details.
Hope this helps.

Can't share global variable value between jobs in gitlab ci yaml file

I'm trying to build an application using GitLab CI.
The name of the generated file depends on the time, in this format:
DEV_APP_yyyyMMddhhmm
(example: DEV_APP_201810221340, corresponding to today's date, 2018/10/22 13:40).
How can I store this name in a global variable inside the .gitlab-ci.yml file?
Here is my .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
  # TIME: ""
  # BRANCH: ""
  # REC_BUILD_NAME: ""
  TIME: "timex"
  BRANCH: "branchx"
  DEV_BUILD_NAME: "DEV_APP_x"

stages:
  - preparation
  - build
  - package
  - deploy
  - manual_rec_build
  - manual_rec_package

job_preparation:
  stage: preparation
  script:
    - echo ${TIME}
    - export TIME=$(date +%Y%m%d%H%M)
    - "BRANCH=$(echo $CI_BUILD_REF_SLUG | sed 's/[^[[:alnum:]]/_/g')"
    - "DEV_BUILD_NAME=DEV_APP_${BRANCH}_${TIME}"
    - echo ${TIME}

maven-build:
  image: maven:3-jdk-8
  stage: build
  script:
    - echo ${TIME}
    - "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
  only:
    - merge-requests
    - /^feature\/sprint.*$/
    - /^DEV_.*$/
  # when: manual

docker-build:
  stage: package
  script:
    - echo ${TIME}
    - docker build -t registry.gitlab.com/mourad.sellam/actuator-simple .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/mourad.sellam/actuator-simple
  only:
    - merge-requests
    - /^feature\/sprint.*$/
    - /^DEV_.*$/
  when: manual

k8s-deploy-production:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo ${TIME}
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west1-c
    - gcloud config set project actuator-sample
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials actuator-example
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=myUserName --docker-password=$REGISTRY_PASSWD --docker-email=myEmail@gmail.com
    - kubectl apply -f deployment.yml --namespace=production
  environment:
    name: production
    url: https://example.production.com
  when: manual

job_manual_rec_build:
  image: maven:3-jdk-8
  stage: manual_rec_build
  script:
    - echo ${TIME}
    - "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
  when: manual
  # allow_failure: false

job_manual_rec_package:
  stage: manual_rec_package
  variables:
  script:
    - echo ${TIME}
    - echo ${DEV_BUILD_NAME}
    - docker build -t registry.gitlab.com/mourad.sellam/actuator-simple:${DEV_BUILD_NAME} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/mourad.sellam/actuator-simple
  artifacts:
    paths:
      - target/*.jar
  when: on_success
#test 1
When I call
echo ${TIME}
it displays "timex"; the value exported in job_preparation is not visible.
Could you tell me how to store a global variable and set it in each job?
Check if GitLab 13.0 (May 2020) could help in your case:
Inherit environment variables from other jobs
Passing environment variables (or other data) between CI jobs is now possible.
By using the dependencies keyword (or needs keyword for DAG pipelines), a job can inherit variables from other jobs if they are sourced with dotenv report artifacts.
This offers a more graceful approach for updating variables between jobs compared to artifacts or passing files.
See documentation and issue.
You can inherit environment variables from dependent jobs.
This feature makes use of the artifacts:reports:dotenv report feature.
Example with the dependencies keyword:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION # => hello
  dependencies:
    - build
Example with the needs keyword:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION # => hello
  needs:
    - job: build
      artifacts: true
You can use artifacts for passing data between jobs. Here's an example from Flant that checks a manual approval decision from earlier in the pipeline:
approve:
  script:
    - mkdir -p .ci_status
    - echo $(date +%s) > .ci_status/approved
  artifacts:
    paths:
      - .ci_status/

NOT approve:
  script:
    - mkdir -p .ci_status
    - echo $(date +%s) > .ci_status/not_approved
  artifacts:
    paths:
      - .ci_status/

deploy to production:
  script:
    - if [[ $(cat .ci_status/not_approved) > $(cat .ci_status/approved) ]]; then echo "Need approve from release engineer!"; exit 1; fi
    - echo "deploy to production!"
There's an open issue 47517, 'Pass variables between jobs', on GitLab CE:
CI/CD often needs to pass information from one job to another and artifacts can be used for this, although it's a heavy solution with unintended side effects. Workspaces is another proposal for passing files between jobs. But sometimes you don't want to pass files at all, just a small bit of data.
I have faced the same issue, and worked around it by storing the data in a file, then accessing it in other jobs.
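A minimal sketch of that file-based workaround (job names follow the pipeline above; the artifact file name is illustrative):

job_preparation:
  stage: preparation
  script:
    - echo "DEV_APP_$(date +%Y%m%d%H%M)" > build_name.txt
  artifacts:
    paths:
      - build_name.txt

job_manual_rec_package:
  stage: manual_rec_package
  script:
    - export DEV_BUILD_NAME=$(cat build_name.txt)
    - echo ${DEV_BUILD_NAME}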
You kind of can... The way I went about it:
You can send a POST request to the project to save a variable:
export VARIABLE=secret
curl --request POST --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/variables/" --form "key=VARIABLE" --form "value=$VARIABLE"
and cleanup after the work/trigger is finished
curl --request DELETE --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.seznam.net/api/v4/projects/$CI_PROJECT_ID/variables/VARIABLE"
I'm not sure it's supposed to be used this way, but it does the trick. You have the variable accessible in all the following jobs (especially useful when you use a trigger and a script in that job is not an option).
Just please make sure you run the cleanup job even if the previous ones fail...

How to extract and reuse common part for this yaml fragment?

In a circleci config.yml file I have a number of jobs defined similarly this way:
defaults: &defaults
  working_directory: ~/repo/appengine
  docker:
    - image: circleci/python

version: 2
jobs:
  deploy_uat:
    <<: *defaults
    steps:
      - attach_workspace:
          at: ~/repo
      - checkout
      - run: *setup_secret
      - run: *enable_npm
      - run: *appengine_dep
      - run: *webview_dep
      - run: *apps_dep
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${GCLOUD_UAT_ENV_KEY} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/uat-env.json
      - run: deployt.sh uat
  deploy_dev:
    # ... Skipped for brevity
  deploy_staging:
    # ...
I would like to further simplify the YAML code to something like this:
defaults: &defaults
  working_directory: ~/repo/appengine
  docker:
    - image: circleci/python

# Common steps
deploy_steps: &deploy_steps
  steps:
    - attach_workspace:
        at: ~/repo
    - checkout
    - run: *setup_secret
    - run: *enable_npm
    - run: *appengine_dep
    - run: *webview_dep
    - run: *apps_dep

version: 2
jobs:
  deploy_uat:
    <<: *defaults
    steps:
      *deploy_steps
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${GCLOUD_UAT_ENV_KEY} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/uat-env.json
      - run: deployt.sh uat
  deploy_dev:
    <<: *defaults
    steps:
      *deploy_steps
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${GCLOUD_DEV_ENV_KEY} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/dev-env.json
      - run: deployt.sh dev
  deploy_staging:
    <<: *defaults
    steps:
      *deploy_steps
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${GCLOUD_STAGING_ENV_KEY} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/staging-env.json
      - run: deployt.sh staging
However, if I do it this way, I get a "did not find expected key" error at the line *deploy_steps.
If I change it to
deploy_uat:
  <<: *defaults
  steps:
    <<: *deploy_steps
    # ...
I get the same error.
What is the right way to write a simpler YAML config?
Well, the value of steps is expected to be an array. In the first case, there is an alias which points to a mapping (containing a steps key), followed by two sequence items. This is not a valid YAML structure and won't even get past the parser.
In the second case, you are using the (deprecated) merge key. That is only defined for mappings; there is no equivalent for sequences.
What you want to do is to merge two sequences inside YAML. There is no way to do that as YAML is not a programming language and does not support transformations on input data (apart from the merge key, which current YAML devs agree was a bad idea from the start).
Since YAML does not allow you to do what you want, you can turn to templating languages like Jinja, which is what Ansible and SaltStack do to enable doing such things in their YAML configs. Since CircleCI does not support it, you'd need to write yourself a script to transform your input YAML into the version CircleCI understands. It's up to you whether this is a feasible solution to your problem.
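As a side note (not part of the original answer): newer CircleCI configs (version: 2.1) support reusable, parameterized commands, which can express a shared step sequence natively. A rough sketch, assuming 2.1 is available and omitting the aliased dependency steps from the original config:

version: 2.1

commands:
  deploy_env:
    parameters:
      env_name:
        type: string
      key_var:
        type: env_var_name
    steps:
      - attach_workspace:
          at: ~/repo
      - checkout
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${<< parameters.key_var >>} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/<< parameters.env_name >>-env.json
      - run: deployt.sh << parameters.env_name >>

jobs:
  deploy_uat:
    working_directory: ~/repo/appengine
    docker:
      - image: circleci/python
    steps:
      - deploy_env:
          env_name: uat
          key_var: GCLOUD_UAT_ENV_KEY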
