In .gitlab-ci.yml, how to delete a mapped drive only if it exists - gitlab-ci.yml

stages:
  - test

setup:
  tags:
    - xyz
  stage: test
  script:
    - "net use Z: /delete"
How can I add a condition so the drive is deleted only if it exists?
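One possible approach (a sketch, assuming a Windows runner whose shell is PowerShell; `Test-Path` reports whether the drive letter is currently mapped, so the delete only runs when the drive exists):

```yaml
setup:
  tags:
    - xyz
  stage: test
  script:
    # Only delete the mapping when Z: actually exists (PowerShell shell assumed)
    - 'if (Test-Path Z:\) { net use Z: /delete }'
```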


yaml use $CI_COMMIT_BRANCH in script

In YAML:
I have to create a directory like public/branch_name,
- then copy some.html there,
- then publish it in the artifacts.
GenerateSomeReport:
  ...
  script:
    - mkdir -p public/$CI_COMMIT_BRANCH
    - cp -a some.html public/$CI_COMMIT_BRANCH/
  rules:
    - when: manual
  artifacts:
    when: always
    paths:
      - public
What am I doing wrong? I do not see the sub-folder in the artifacts; some.html ends up in public/some.html instead.
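One possible cause (an assumption, not confirmed by the question): $CI_COMMIT_BRANCH is not set in every pipeline type, e.g. in merge-request pipelines, where CI_MERGE_REQUEST_SOURCE_BRANCH_NAME is set instead. When the variable expands to an empty string, the sub-folder path collapses and the file lands directly in public/, which matches the observed behavior:

```shell
# Reproduction sketch: simulate an unset branch variable.
CI_COMMIT_BRANCH=""
printf 'report' > some.html
mkdir -p "public/$CI_COMMIT_BRANCH"          # creates just "public", no sub-folder
cp -a some.html "public/$CI_COMMIT_BRANCH/"  # copies to public/some.html
ls public                                    # prints: some.html
```

Echoing the variable at the start of the job script would confirm whether it is empty in the pipeline where the artifact looks wrong.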

Can CircleCI run a conditional step even on failure

I'm attempting to upload test results as a separate step from executing tests even when one of the tests fails. But I only want to upload results from specific branches.
Can when: be used this way? Is there a better alternative?
- when: always
  condition:
    matches:
      pattern: '^dev$'
      value: << pipeline.git.branch >>
  steps:
    - run:
        name: Upload Test Results
        command: <code here>
The code above results in a build error: Unable to parse YAML: mapping values are not allowed here in 'string', condition:
https://circleci.com/docs/2.0/configuration-reference/#the-when-attribute
It can, I was just using the wrong syntax.
- when:
    condition:
      matches:
        pattern: '^dev$'
        value: << pipeline.git.branch >>
    steps:
      - run:
          name: Upload Test Results
          when: always
          command: <code here>

using for-loop in azure pipeline jobs

I want to use a for-loop that scans the files (values-f1.yaml, values-f2.yaml, ...) in a folder, each time uses the filename as a variable, and runs an Azure pipeline job to deploy the helm chart based on that values file. The folder is located in the GitHub repository. So I'm thinking of something like this:
pipeline.yaml
stages:
  - stage: Deploy
    variables:
      azureResourceGroup: ''
      kubernetesCluster: ''
      subdomain: ''
    jobs:
      ${{ each filename in /myfolder/*.yaml }}:
        valueFile: $filename
      - template: Templates/deploy-helmchart.yaml@pipelinetemplates
deploy-helmchart.yaml
jobs:
  - job: Deploy
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - task: HelmInstaller@1
        displayName: 'Installing Helm'
        inputs:
          helmVersionToInstall: '2.15.1'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: HelmDeploy@0
        displayName: 'Initializing Helm'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'init'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: PowerShell@2
        displayName: 'Fetching GitTag'
        inputs:
          targetType: 'inline'
          script: |
            # Write your PowerShell commands here.
            Write-Host "Fetching the latest GitTag"
            $gt = git describe --abbrev=0
            Write-Host "##vso[task.setvariable variable=gittag]$gt"
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: Bash@3
        displayName: 'Fetching repo-tag'
        inputs:
          targetType: 'inline'
          script: |
            echo GitTag=$(gittag)
            echo BuildID=$(Build.BuildId)
            echo SourceBranchName=$(Build.SourceBranchName)
            echo ClusterName=$(kubernetesCluster)
      - task: HelmDeploy@0
        displayName: 'Upgrading helmchart'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'upgrade'
          chartType: 'FilePath'
          chartPath: $(chartPath)
          install: true
          releaseName: $(releaseName)
          valueFile: $(valueFile)
          arguments: '--set image.tag=$(gittag) --set subdomain=$(subdomain)'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
Another thing: do the jobs get access to the GitHub repo by default, or do I need to do something at the job level?
Also, how can I use a for-loop in the job for this case?
Any help would be appreciated.
Updated after getting comments from @Leo
Here is a PowerShell task that I added in deploy-helmchart.yaml for fetching the files from a folder in GitHub.
- task: PowerShell@2
  displayName: 'Fetching Files'
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Fetching values files"
      cd myfolder
      $a = git ls-files
      foreach ($i in $a) {
        Write-Host "##vso[task.setvariable variable=filename]$i"
        Write-Host "printing" $i
      }
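Note that each iteration of the loop above writes to the same `filename` variable, so once the task finishes only the last file name survives. A hedged sketch (POSIX shell, with hypothetical file names standing in for the `git ls-files` output) of collecting all names into one variable instead:

```shell
# Append each file name instead of overwriting the variable per iteration.
files=""
for f in values-f1.yaml values-f2.yaml; do
  files="$files $f"
done
# Publish the whole list as a single pipeline variable.
echo "##vso[task.setvariable variable=filenames]$files"
```

The downstream job could then split `$(filenames)` back into individual names.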
Now the question is: how can I run the HelmDeploy@0 task for each file using parameters?
Do the jobs get access to the GitHub repo by default, or do I need to do something at the job level?
The answer is yes.
We could add a command line task in the jobs, like job1, to clone the GitHub repository with a GitHub PAT; then we can access those files (values-f1.yaml, values-f2.yaml, ...) in $(Build.SourcesDirectory):
git clone https://<GithubPAT>@github.com/XXXXX/TestProject.git
How can I use a for-loop in the job for this case?
You could create a template containing the set of actions and pass parameters across during your build, like:
deploy-helmchart.yaml:
parameters:
  param: []

steps:
  - ${{ each filename in parameters.param }}:
      - script: 'echo ${{ filename }}'
pipeline.yaml:
steps:
  - template: deploy-helmchart.yaml
    parameters:
      param: ["filename1", "filename2", "filename3"]
Check the document Solving the looping problem in Azure DevOps Pipelines for some more details.
Command line to get the latest file name in the folder:
FOR /F "delims=|" %%I IN ('DIR "$(Build.SourcesDirectory)\*.txt*" /B /O:D') DO SET NewestFile=%%I
echo "##vso[task.setvariable variable=NewFileName]%NewestFile%"
Update:
Now the question is: how can I run the HelmDeploy@0 task for each file using parameters?
It depends on whether your HelmDeploy task has an option that accepts the filename parameter.
As I said before, we could use the following YAML to invoke the template with parameters:
- template: deploy-helmchart.yaml
  parameters:
    param: ["filename1", "filename2", "filename3"]
But if the HelmDeploy task has no option that accepts the parameter, we cannot run HelmDeploy@0 for each file using parameters.
Checking HelmDeploy@0, I found only one input, arguments, that accepts additional Helm command parameters.
So the answer to this question depends on whether your file name can be passed as a Helm command argument: if not, you cannot run HelmDeploy@0 for each file using parameters; if yes, you can.
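For completeness, a hedged sketch (untested; the valueFiles parameter name is illustrative) of looping the whole HelmDeploy@0 step over the value files with a template-expression each, since valueFile is itself an input of the task:

```yaml
# deploy-helmchart.yaml (illustrative sketch)
parameters:
  valueFiles: []

steps:
  - ${{ each vf in parameters.valueFiles }}:
      # One HelmDeploy step is stamped out per value file at template expansion time.
      - task: HelmDeploy@0
        displayName: 'Upgrading helmchart with ${{ vf }}'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'upgrade'
          chartType: 'FilePath'
          chartPath: $(chartPath)
          install: true
          releaseName: $(releaseName)
          valueFile: ${{ vf }}
```

Note that template expressions are expanded at compile time, so the file list would have to be passed in as a parameter rather than discovered by a script task at runtime.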
Please check the official document Templates for some more details.
Hope this helps.

How to create a manual trigger for multiple parallel jobs

I'd like to have a blocking manual action that will trigger multiple parallel jobs in the next stage. How can I achieve this? For example:
1. deploy-dev runs on merge
2. The pipeline waits on a single manual trigger
3. deploy-prd-1 and deploy-prd-2 run in parallel
Here's what I've tried:
1
stages:
  - deploy-dev
  - deploy-prd-1
  - deploy-prd-2

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd-1
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd-2
  script:
    - echo deploy-prd-2
This achieves 1 and 2 but fails on 3: deploy-prd-1 and deploy-prd-2 run in series, not in parallel.
2
stages:
  - deploy-dev
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This fails on 2: deploy-prd-2 runs automatically without waiting for the manual trigger.
3
stages:
  - deploy-dev
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This isn't ideal as it requires manually triggering each parallel job separately.
4
stages:
  - deploy-dev
  - approve
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

approve:
  when: manual
  allow_failure: false

deploy-prd-1:
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This is my attempt to set up a manual "gate", but gitlab rejects the file with Error: jobs:approve script can't be blank.
I can set a no-op like script: [":"], but then GitLab will spin up a container to do nothing, which wastes time and resources.
I'm assuming the reason you want a manual gate is that you want to do some testing before you publish your app.
Instead of doing everything in a single go, a change in mindset might work better.
Use the GitLab CI rules keyword to make each of the two groups of jobs run on its own schedule (the schedules can have their timers turned off, so they become a manual trigger requiring you to push a button).
Then break your CI down: build and deploy to your dev environment first, do your tests, and when you are happy, trigger the deploy-to-prod jobs.
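A minimal sketch of that rules-based split (assuming a hypothetical DEPLOY_TARGET variable defined on each pipeline schedule or supplied when running the pipeline manually; the variable name is illustrative):

```yaml
# DEPLOY_TARGET is a hypothetical variable set per schedule / manual run.
deploy-dev:
  stage: deploy-dev
  rules:
    - if: '$DEPLOY_TARGET == "dev"'
  script:
    - echo deploy-dev

deploy-prd-1:
  stage: deploy-prd
  rules:
    - if: '$DEPLOY_TARGET == "prd"'
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd
  rules:
    - if: '$DEPLOY_TARGET == "prd"'
  script:
    - echo deploy-prd-2
```

Running the pipeline with DEPLOY_TARGET=prd starts both prod jobs in parallel from a single button press, with no no-op gate container.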

Is it possible to change Gitlab CI variable value after pipeline has started?

I'm trying to create a dynamic GitLab pipeline based on its own execution progress. For example, I have two environments, and deployment to each of them should be enabled/disabled based on the output of the script in before_script. It doesn't work for me; it seems a pipeline variable's value can't be changed after the pipeline has started. Any suggestions? (Please see my gitlab-ci.yml below.)
variables:
  RELEASE: limited

stages:
  - build
  - deploy

before_script:
  - export RELEASE=${check-release-type-dynamically.sh}

build1:
  stage: build
  script:
    - echo "Do your build here"

## DEPLOYMENT
deploy_production_ga:
  stage: update_prod_env
  script:
    - echo "deploy environment for all customers"
  allow_failure: false
  only:
    - branches
  only:
    variables:
      - $RELEASE == "general_availability"

deploy_production_limited:
  stage: update_prod_env
  script:
    - echo "deploy environment for limited customers"
  allow_failure: false
  only:
    - branches
  only:
    variables:
      - $RELEASE == "limited"
Variables can't be evaluated in the definition. If you really want to use a shell script to decide what gets deployed, you could use a bash if clause:
stages:
  - build
  - update_prod_env

build1:
  stage: build
  script:
    - echo "Do your build here"

deploy_production_ga:
  stage: update_prod_env
  script:
    - |
      if [ "$(./check-release-type-dynamically.sh)" == "general_availability" ]; then
        echo "deploy environment for all customers"
      fi
  only:
    - branches

deploy_production_limited:
  stage: update_prod_env
  script:
    - |
      if [ "$(./check-release-type-dynamically.sh)" == "limited" ]; then
        echo "deploy environment for limited customers"
      fi
  only:
    - branches
However, this is really bad design: both jobs will run on every commit, but only one will do something. It is better to distinguish them by branch, and only commit to the branch you want to deploy:
stages:
  - build
  - update_prod_env

build1:
  stage: build
  script:
    - echo "Do your build here"

deploy_production_ga:
  stage: update_prod_env
  script:
    - echo "deploy environment for all customers"
  only:
    - branches-general_availability

deploy_production_limited:
  stage: update_prod_env
  script:
    - echo "deploy environment for limited customers"
  only:
    - branches-limited
This way, only the deploy job you want gets executed.
A couple of other things I noticed:
export RELEASE=${check-release-type-dynamically.sh}: use $() instead of {} for command substitution. Also, if the shell script is in the same directory, you must prepend ./. It should look like:
export RELEASE=$(./check-release-type-dynamically.sh)
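A quick illustration of the difference between the two forms:

```shell
name="world"
echo "${name}"     # parameter expansion: prints the value of $name -> world
echo "$(echo hi)"  # command substitution: prints the command's output -> hi
```

${check-release-type-dynamically.sh} is not a command at all; bash treats the braces as (invalid) parameter expansion, which is why the original before_script never ran the script.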
allow_failure: false is the default in gitlab-ci and therefore not necessary.
variables:
  - $RELEASE == "general_availability"
This is the wrong syntax for defining variables; use:
variables:
  VARIABLE_NAME: "Value of Variable"
Have a look at https://docs.gitlab.com/ee/ci/yaml/
