Concourse: vars vs params

Concourse provides two concepts, vars and params, and both can be used to pass values into a task:

params:
  COMMAND: deployment.deploy

vars:
  command: deployment.deploy

When should one use params? Is there a rule of thumb?

params are an optional part of the task schema (see tasks#params in the Concourse docs):

A key-value mapping of string keys and values that are exposed to the task via environment variables.

task: env
config:
  platform: linux
  image_resource:
    type: registry-image
    source:
      repository: alpine
  params:
    VALUE: something
  run:
    path: sh
    args:
      - -c
      - echo $VALUE
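params can also be set on the task step itself, where they override same-named values in the task's config; a small sketch (the task file path is illustrative):

- task: env
  file: ci/tasks/env.yml # hypothetical task file defining VALUE
  params:
    VALUE: overridden-at-the-step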
vars, by contrast, can be interpolated into most parts of the task configuration (see the vars docs):

Aside from credentials, vars may also be used for generic parameterization of pipeline configuration templates, allowing a single pipeline config file to be configured multiple times with different parameters - e.g. ((branch_name)).

task: env
config:
  platform: linux
  image_resource:
    type: registry-image
    source:
      repository: my-image
      tag: ((branch_name))
  params:
    BRANCH_NAME: ((branch_name))
  run:
    path: sh
    args:
      - -c
      - echo $BRANCH_NAME; echo 'can also be interpolated here: ((branch_name))'
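vars are supplied when the pipeline is set, typically via the fly CLI; a minimal sketch (target and pipeline names are illustrative):

fly -t my-target set-pipeline -p my-pipeline -c pipeline.yml -v branch_name=feature-x
# or load several vars at once from a file:
fly -t my-target set-pipeline -p my-pipeline -c pipeline.yml -l vars.yml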

Related

Add global variables with node commands inside

I am trying to add one global variable, accessible by all jobs, inside my gitlab.yml file:

variables:
  VERSION: $(node -p "require('./package.json').version")

This is meant to fetch the version from package.json, but when I access $VERSION in the release job below, it just prints the literal string instead of the evaluated value:
release-job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - echo "running release-job for $TAG"
  release:
    tag_name: 'v0.$CI_PIPELINE_IID'
    description: 'This is the new version: v0.$CI_PIPELINE_IID and package.json = $VERSION'
    ref: '$CI_COMMIT_SHA' # The tag is created from the pipeline SHA.
  tags:
    - SharedRunner
Any help?
Thanks in advance.
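GitLab CI does not run values in a variables: block through a shell, so the $( ... ) command substitution is stored as a literal string. A common pattern, sketched here with illustrative job and stage names, is to compute the value in an earlier job and hand it to later jobs as a dotenv artifact:

prepare-job:
  stage: prepare
  image: node:lts
  script:
    # evaluate the version in a real shell and persist it for later jobs
    - echo "VERSION=$(node -p "require('./package.json').version")" >> build.env
  artifacts:
    reports:
      dotenv: build.env # jobs in later stages see VERSION as an environment variable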

Can't checkout the same repo multiple times in a pipeline

I have self-hosted agents on multiple environments that I am trying to run the same build/deploy processes on. I would like to be able to deploy the same code from a single repo to multiple systems concurrently. Thus, I have created an "overhead" pipeline, and several "processes" pipeline templates. Everything seems to be going very well, except for when I try to perform checkouts of the same repo twice in the same pipeline execution. I get the following error:
An error occurred while loading the YAML build pipeline. An item with the same key has already been added.
I would really like to be able to click ONE button to trigger a main pipeline that calls all the required templates and supplies the parameters needed to get all my jobs done at once. I could of course define this "overhead" pipeline and then queue up as many instances of it as I need, one per target system, but I'm lazy, hence why I'm using pipelines!
As soon as I remove the checkout from Common.yml, the validation succeeds without any issues. If I keep the checkout but call Common.yml only once for the entire Overhead pipeline, it also succeeds. The problem is that I need to pull the contents of the repo to EACH of my agents, which run in completely separate environments that can never talk to each other (I can't pull to one agent and have it "copy" to the other agent locations).
Any assistance is very much welcomed, thank you!
The following is my "overhead" pipeline:
# azure-pipelines.yml
trigger: none

parameters:
  - name: vLAN
    type: string
    default: 851
    values:
      - 851
      - 1105

stages:
  - stage: vLAN851
    condition: eq('${{ parameters.vLAN }}', '851')
    pool:
      name: xxxxx
      demands:
        - vLAN -equals 851
    jobs:
      - job: Common_851
        steps:
          - template: Procedures/Common.yml
      - job: Export_851
        dependsOn: Common_851
        steps:
          - template: Procedures/Export.yml
            parameters:
              Server: ABTS-01
  - stage: vLAN1105
    condition: eq('${{ parameters.vLAN }}', '1105')
    pool:
      name: xxxxx
      demands:
        - vLAN -equals 1105
    jobs:
      - job: Common_1105
        steps:
          - template: Procedures/Common.yml
      - job: Export_1105
        dependsOn: Common_1105
        steps:
          - template: Procedures/Export.yml
            parameters:
              Server: OTS-01
And here is the "Procedures/Common.yml":
steps:
  - checkout: git://xxxxx/yyyyy#$(Build.SourceBranchName)
    clean: true
    enabled: true
    timeoutInMinutes: 1
  - task: UsePythonVersion@0
    enabled: true
    timeoutInMinutes: 1
    displayName: Select correct version of Python
    inputs:
      versionSpec: '3.8'
      addToPath: true
      architecture: 'x64'
  - task: CmdLine@2
    enabled: true
    timeoutInMinutes: 5
    displayName: Ensure Python Requirements Installed
    inputs:
      script: |
        python -m pip install GitPython
And here is the "Procedures/Export.yml":
parameters:
  - name: Server
    type: string

steps:
  - task: PythonScript@0
    enabled: true
    timeoutInMinutes: 3
    displayName: xxxxx
    inputs:
      arguments: --name "xxxxx" --mode True --Server ${{ parameters.Server }}
      scriptSource: 'filePath'
      scriptPath: 'xxxxx/main.py'
I managed to make checkout work with variable branch names by using template expression variables ${{ ... }} instead of macro syntax $( ... ) variables.
The difference is that template expressions are processed at compile time, while macros are processed at runtime.
So in my case I have something like:

- checkout: git://xxx/yyy#${{ variables.BRANCH_NAME }}

For more information about variable syntax, see Understand variable syntax.
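To make the timing difference concrete, a small sketch (variable name illustrative):

variables:
  BRANCH_NAME: main

steps:
  # ${{ ... }} is expanded when the YAML is compiled, so the branch name
  # is baked into the step before the run is queued:
  - checkout: git://xxx/yyy#${{ variables.BRANCH_NAME }}
  # $( ... ) is only substituted at runtime, which appears to be too late
  # for resolving the checkout reference:
  - script: echo "$(BRANCH_NAME)"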
I couldn't get it to work with expressions, but I was able to get it working using repository resources, following the documentation at https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops

resources:
  repositories:
    - repository: MyGitHubRepo # The name used to reference this repository in the checkout step
      type: git
      name: MyAzureProjectName/MyGitRepo
      ref: $(Build.SourceBranch)

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

jobs:
  # some job
  - job: SomeJob
    steps:
      - checkout: MyGitHubRepo
  # some other job
  - job: SomeOtherJob
    steps:
      - checkout: MyGitHubRepo
      - script: dir $(Build.SourcesDirectory)
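When the same repository is checked out by several jobs, each checkout step can also be given its own folder via the path input; a sketch (folder name illustrative):

steps:
  - checkout: MyGitHubRepo
    path: my-repo # placed under $(Pipeline.Workspace)/my-repo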

How to specify different environment variables for each stage type in AWS-SAM template.yml

I wish to set different environment variables for each build stage in my template.yml
I imagine something like this:

Globals:
  Function:
    Environment:
      Variables:
        SomeHost: x.amazonaws.com
        DBName: somename
        DBPort: 5430
        DBUsername: ${var1}
        API_BASE_URL: ${var2}

Parameters:
  paramEnvironment:
    Type: String
    AllowedValues:
      - stage
      - prod
    Default: stage

where I can set

stage:
  var1: user1
  API_BASE_URL: https://baseurl1.com
prod:
  var1: user2
  API_BASE_URL: https://baseurl2.com

and, when I deploy with paramEnvironment set, the function environment gets all the stage-specific variables.
The ideal way of handling different environments (especially for serverless) is having multiple AWS accounts under the same organization (AWS Organizations), with a separate account for staging and another for production.
If you want to solve this within a single account, you can use either an SSM parameter or Mappings in the SAM template.
Referencing an existing Systems Manager parameter:

Parameters:
  ApiBaseUrl:
    Type: 'AWS::SSM::Parameter::Value<String>'
    Default: /stage/apiBaseUrl # CloudFormation does not substitute ${paramEnvironment} in a Default; pass the concrete path per environment at deploy time
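Either way, the concrete environment is chosen at deploy time; a sketch (stack name illustrative):

sam deploy --stack-name my-app-prod --parameter-overrides paramEnvironment=prod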
Using Mappings in the SAM template:

Mappings:
  EnvVariables:
    stage:
      var1: user1
      API_BASE_URL: https://baseurl1.com
    prod:
      var1: user2
      API_BASE_URL: https://baseurl2.com

and reference the map with:

!FindInMap [EnvVariables, !Ref paramEnvironment, API_BASE_URL]
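A sketch of wiring the mapping into the function environment from the question's Globals block:

Globals:
  Function:
    Environment:
      Variables:
        DBUsername: !FindInMap [EnvVariables, !Ref paramEnvironment, var1]
        API_BASE_URL: !FindInMap [EnvVariables, !Ref paramEnvironment, API_BASE_URL]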

Difference between PUT and OUTPUT steps in Concourse

Could someone tell me the difference between the PUT step and the OUTPUT step in Concourse? For example, in the following type of YAML files why do we need a put step after a get? Can't we use output instead of put? If not what are the purposes of each two?
jobs:
  - name: PR-Test
    plan:
      - get: some-git-pull-request
        trigger: true
      - put: some-git-pull-request
        params:
          context: tests
          path: some-git-pull-request
          status: pending
      ....
      <- some more code to build ->
      ....
The purpose of a put step is to push to the given resource, while an output is the result of a task step.

A task can configure outputs to produce artifacts that can then be propagated to either a put step or to another task step in the same plan.

This means that you send the resource specified in the get step to the task as an input, run whatever builds or script executions you need, and the output of that task is a modified artifact that you can later pass to your put step, or to another task if you don't want to use put.
It also depends on the nature of the resources defined in your pipeline. I'm assuming that you have a git-type resource like this:

resources:
  - name: some-git-pull-request
    type: git
    source:
      branch: ((credentials.git.branch))
      uri: ((credentials.git.uri))
      username: ((credentials.git.username))
      password: ((credentials.git.pass))

If so, the get step will pull that repo so you can use it as an input for your tasks, and using put against that same resource, as in your sample code, will push changes to your repo.
Really it depends on the workflow you want to write, but to give an idea it would look something like this:

jobs:
  - name: PR-Test
    plan:
      - get: some-git-pull-request
        trigger: true
      - task: test-code
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: yourRepo/yourImage
              tag: latest
          inputs:
            - name: some-git-pull-request
          run:
            path: bash
            args:
              - -exc
              - |
                cd some-git-pull-request
                npm install -g mocha
                npm test
          outputs:
            - name: some-git-pull-request-output
Then you can use it either in a put step:

- put: myCloud
  params:
    manifest: some-git-pull-request-output/manifest.yml
    path: some-git-pull-request-output

or in another task within the same plan:

- task: build-code
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: yourRepo/yourImage
        tag: latest
    inputs:
      - name: some-git-pull-request-output
    run:
      path: bash
      args:
        - -exc
        - |
          cd some-git-pull-request-output/
          npm install
          gulp build
    outputs:
      - name: your-code-build-output
Hope it helps!

Share variables between steps in drone.io

It seems to me that drone.io does not share parameters across pipeline steps. Is it possible to read the parameters for a plugin from a file, e.g. via a directive like "from_file" similar to the existing "from_secret"? This is how one could use it:
kind: pipeline
name: default

steps:
  - name: get_repo_name
    image: alpine
    commands:
      - echo "hello" > .repo_name
  - name: docker
    image: plugins/docker
    settings:
      repo:
        from_file: .repo_name
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
The ability to read input from a file is a choice left to the plugin author, but creating plugins is pretty simple: each settings key is passed into the plugin container as a PLUGIN_VARIABLE environment variable, so a plugin can offer exactly this kind of thing.
https://docs.drone.io/plugins/bash/
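For illustration, a bash plugin's entrypoint receives each settings key as an uppercased PLUGIN_ variable, so supporting a hypothetical repo_file setting could look like this:

#!/bin/sh
# settings.repo_file arrives as PLUGIN_REPO_FILE (hypothetical setting)
if [ -n "$PLUGIN_REPO_FILE" ]; then
  PLUGIN_REPO=$(cat "$PLUGIN_REPO_FILE")
fi
echo "publishing to repo: $PLUGIN_REPO"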
To show that some plugins do read from a file, one example is drone-github-comment:

steps:
  - name: github-comment
    image: jmccann/drone-github-comment:1.2
    settings:
      message_file: file_name.txt
    when:
      status:
        - success
        - failure
FWIW, looking at your example, it seems you only want to pass the repo name. Variables like this are already present in every pipeline, depending on the runner you use; for the Docker runner the full environment reference is here:
https://docs.drone.io/pipeline/environment/reference/
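Also worth noting: steps in a Docker pipeline share the same workspace volume, so a file written by one step is readable by the next; a minimal sketch:

kind: pipeline
type: docker
name: share-via-file

steps:
  - name: write
    image: alpine
    commands:
      - echo "my-org/my-repo" > .repo_name # persisted in the shared workspace
  - name: read
    image: alpine
    commands:
      - echo "repo is $(cat .repo_name)" # later steps see the same file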
