Creating a JSON patch with a variable created during Azure Pipelines runtime - bash

I am setting up the ability to deploy test environments under different Kubernetes namespaces. I am running this script in my Azure pipeline in the hope of setting the ingress controller host for the deploy:
- script: |
    sanitized_namespace="$(echo "$(Build.SourceBranchName)" | sed -r 's/[^a-z0-9-]//g')"
    echo "##vso[task.setvariable variable=sanitized_namespace]$sanitized_namespace"
    echo '[{"op": "replace", "path": "/spec/rules/0/host", "value": "'"$sanitized_namespace"'"}]' > host-patch.json
    cat ./host-patch.json
  name: namespace
  displayName: 'Remove invalid characters for test namespace'
However, every time I run this (or variations of it), the created ./host-patch.json file has an empty string for "value" rather than the value of my variable sanitized_namespace. I know the sanitized_namespace variable is not empty, because I use it elsewhere in the same job and it works as expected.
Can anyone tell me how I might create this JSON file so that it patches the ingress host with this variable?
In the grand scheme of things I would like to set it to something like this:
echo '[{"op": "replace", "path": "/spec/rules/0/host", "value": "'"$sanitized_namespace.somedomain.com"'"}]' > host-patch.json

Related

Error in GitLab YAML - include and extends

I have a GitLab YAML file whose before_script section needs to be used in another GitLab YAML file. I'm doing something like this:
include:
  - remote: 'https://gitlab.xxx.net/awsxxx/job-template/-/blob/master/.gitlab-ci-template.yml'
extends:
  - before_script
The relevant contents of the file at the above URL are:
before_script:
  - echo "foo"
  - echo "bar"
This is not working; it returns an error saying the syntax is incorrect. Can you please help me correct this? Note: there are multiple extends entries and multiple parent includes, which is why I'm using the '-' (list) format for extends and include here.
I guess the error you're getting is because you can only use the extends keyword on a job; see the relevant page: https://docs.gitlab.com/ee/ci/yaml/#extends. Are you trying to somehow append your own before_script to the remote YAML? You should be able to reuse the job name from the remote YAML and do the before_script there, like:
include:
  - remote: 'https://gitlab.xxx.net/awsxxx/job-template/-/blob/master/.gitlab-ci-template.yml'

job to overwrite from ci-template:
  before_script:
    - echo "foo"
    - echo "bar"

Pass file variable to GitLab job

I am having trouble dynamically passing one of two file-based variables to a job.
I have defined two file variables in my CI/CD settings that contain my Helm values for deployments to the development and production clusters. They are typical YAML; their exact content does not really matter:
baz:
  foo: bar
I have also defined two jobs for the deployment that depend on a general deployment template .deploy.
.deploy:
  variables:
    DEPLOYMENT_NAME: ""
    HELM_CHART_NAME: ""
    HELM_VALUES: ""
  before_script:
    - kubectl ...
  script:
    - helm upgrade $DEPLOYMENT_NAME charts/$HELM_CHART_NAME
      --install
      --atomic
      --debug
      -f $HELM_VALUES
The specialization happens in two jobs, one for dev and one for prod.
deploy:dev:
  extends: .deploy
  variables:
    DEPLOYMENT_NAME: my-deployment
    HELM_CHART_NAME: my-dev-chart
    HELM_VALUES: $DEV_HELM_VALUES # from CI/CD variables

deploy:prod:
  extends: .deploy
  variables:
    DEPLOYMENT_NAME: my-deployment
    HELM_CHART_NAME: my-prod-chart
    HELM_VALUES: $PROD_HELM_VALUES # from CI/CD variables
The command that fails is the one in the script section of .deploy. If I pass $DEV_HELM_VALUES or $PROD_HELM_VALUES directly, the deployment is triggered. However, if I use $HELM_VALUES as described above, the command fails (Error: "helm upgrade" requires 2 arguments, which is very misleading).
The problem is that $HELM_VALUES, as accessed in the command, already holds the resolved content of the file, whereas $DEV_HELM_VALUES and $PROD_HELM_VALUES expand to file paths and therefore work with the -f syntax.
This can be seen using echo in the job's output:
echo "$DEV_HELM_VALUES"
/builds/my-company/my-deployment.tmp/DEV_HELM_VALUES
echo "$HELM_VALUES"
baz:
foo: bar
How can I make sure $HELM_VALUES points to one of the files rather than containing the file's content?
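One possible workaround (a sketch, not from the original thread, assuming the runner shell is bash) is to pass the name of the file variable instead of its value and dereference it at runtime with bash indirect expansion, so GitLab never expands the file variable inside another variable definition:
.deploy:
  variables:
    DEPLOYMENT_NAME: ""
    HELM_CHART_NAME: ""
    HELM_VALUES_NAME: ""   # name of the file variable, not its value
  script:
    # ${!HELM_VALUES_NAME} resolves to the temp-file path GitLab writes for the file variable
    - helm upgrade $DEPLOYMENT_NAME charts/$HELM_CHART_NAME
      --install
      --atomic
      --debug
      -f "${!HELM_VALUES_NAME}"

deploy:dev:
  extends: .deploy
  variables:
    DEPLOYMENT_NAME: my-deployment
    HELM_CHART_NAME: my-dev-chart
    HELM_VALUES_NAME: DEV_HELM_VALUES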

Bash - GitLab CI not converting variable to a string

I am using GitLab to deploy a project and have some environment variables set up in the GitLab console, which I use in my GitLab deployment script below:
- export S3_BUCKET="$(eval \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})"
- aws s3 rm s3://$S3_BUCKET --recursive
My environmental variables are declared like so:
Key: s3_bucket_development
Value: https://dev.my-bucket.com
Key: s3_bucket_production
Value: https://prod.my-bucket.com
The plan is that it grabs the bucket URL from the environmental variables depending on which branch is trying to deploy (CI_COMMIT_REF_NAME).
The problem is that the S3_BUCKET variable does not seem to get set properly and I get the following error:
> export S3_BUCKET=$(eval \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})
> /scripts-30283952-2040310190/step_script: line 150: https://dev.my-bucket.com: No such file or directory
It looks like it picks up the environmental variable value fine but does not set it properly - any ideas why?
It seems like you are trying to get the value of the variables S3_BUCKET_DEVELOPMENT and S3_BUCKET_PRODUCTION based on the value of CI_COMMIT_REF_NAME; you can do this by using parameter indirection (indirect expansion):
$ a=b
$ b=c
$ echo "${!a}" # c
In your case you would need a temporary variable as well; something like this might work:
- s3_bucket_variable=S3_BUCKET_${CI_COMMIT_REF_NAME^^}
- s3_bucket=${!s3_bucket_variable}
- aws s3 rm "s3://$s3_bucket" --recursive
You are basically telling bash to execute a command named https://dev.my-bucket.com, which obviously doesn't exist.
Since VAR=$(command) assigns the output of a command, you should probably use echo:
export S3_BUCKET=$(eval echo \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})
Simple test:
VAR=HELL; OUTPUT="$(eval echo "\$S${VAR^^}")"; echo $OUTPUT
/bin/bash
It dynamically builds the variable name SHELL and then successfully prints its value.
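For comparison, the same test can be written with the indirect expansion from the first answer, avoiding eval altogether (a sketch reusing the same SHELL example):
VAR=HELL; name="S${VAR^^}"; echo "${!name}"
# prints the value of $SHELL, e.g. /bin/bash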

GitHub Actions to use variables set from shell

Goal:
In GitHub Actions, to define my commit message dynamically from shell:
- name: Commit changes
  uses: EndBug/add-and-commit@v7
  with:
    message: "added on $(date -I)"
However, it seems that I have to define an environment variable and then use it. I'm following How do I set an env var with a bash expression in GitHub Actions? and other help files like this, but still cannot tell how to make use of an environment variable that I've defined previously. This is what I tried, but it failed:
- name: Checkout repo
  uses: actions/checkout@v2
- run: |
    touch sample.js
    echo "today=$(date -I)" >> $GITHUB_ENV
- name: Commit changes
  uses: EndBug/add-and-commit@v7
  with:
    message: "added on ${today}"
How do I make this work?
If you want to reference an environment variable set via the $GITHUB_ENV environment file in the arguments to another step, you'll need to use workflow expression syntax to access the appropriate key of the top-level env context, like this:
- name: Commit changes
  uses: EndBug/add-and-commit@v7
  with:
    message: "added on ${{env.today}}"
You can access it as a standard environment variable from inside a running step, for example:
- name: Show an environment variable
  run: |
    echo "today is $today"
In that example, the expression $today is expanded by the shell, which looks up the environment variable named today. You could also write:
- name: Show an environment variable
  run: |
    echo "today is ${{env.today}}"
In this case, the expansion would be performed by GitHub's workflow engine before the run commands execute, so the shell would see a literal command that looks like echo "today is 2021-07-14".
You can accomplish something similar using output parameters, like this:
- name: "Set an output parameter"
id: set_today
run: |
echo "::set-output name=today::$(date -I)"
- name: Commit changes
uses: EndBug/add-and-commit#v7
with:
message: "added on ${{steps.set_today.outputs.today}}"
Using output parameters is a little more granular (because they are qualified by the step id), and they won't show up in the environment of processes started by your tasks.
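Note that GitHub has since deprecated the ::set-output workflow command; on current runners the same step can write to the $GITHUB_OUTPUT file instead. A sketch, reusing the step id from the example above:
- name: "Set an output parameter"
  id: set_today
  run: |
    echo "today=$(date -I)" >> "$GITHUB_OUTPUT"
- name: Commit changes
  uses: EndBug/add-and-commit@v7
  with:
    message: "added on ${{steps.set_today.outputs.today}}"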

Ansible vars_prompt in separate file

Standard variables can be located in separate files by host and/or group, e.g. group_vars/groupname or host_vars/hostname.
Is it possible to set vars_prompt in any location other than the playbook file? For example, ideally directly in group_vars/groupname or group_vars_prompt/groupname?
I didn't find any relevant documentation.
Thanks
AFAIK, you cannot do that. At best, you could use a dynamic inventory script that calls read, like so:
#!/bin/bash
read foo
cat <<EOF
{
  "localhost": {
    "hosts": [ "localhost" ]
  },
  "_meta" : {
    "hostvars" : {
      "localhost": {
        "foo": "$foo"
      }
    }
  }
}
EOF
But since Ansible swallows STDOUT and STDERR when executing an inventory script (see: https://github.com/ansible/ansible/blob/devel/lib/ansible/inventory/script.py#L42), you won't be able to show a question prompt, whichever file descriptor you write to.
As an alternative, if you're running under X, you could use Zenity:
#!/bin/bash
foo=`zenity --title "Select Host" --entry --text "Enter value for foo"`
cat <<EOF
{
  "localhost": {
    "hosts": [ "localhost" ]
  },
  "_meta" : {
    "hostvars" : {
      "localhost": {
        "foo": "$foo"
      }
    }
  }
}
EOF
This way, you'll get a (GUI) prompt.
But I don't think this is desirable anyway, since it can fail in a hundred ways. Maybe you could try an alternate approach, or tell us what you're trying to achieve.
Alternate approaches:
use vars files, filled by users or a script
use ansible -e command-line options, possibly wrapped in a bash script that reads the vars, possibly with zenity if you need a UI
let users fill inventory files (group_vars/whatever can be a directory containing multiple files)
use lookup with pipe to read from a script
use lookup with env to read vars from environment variables (see the sketch below)
use Ansible Tower with forms
use vars_prompt (falling back to defaults if nothing is entered), because the playbook is probably the best place to do this
...
Any of these solutions is probably better than hacking around the inventory, which should really be usable unattended (because you might later run from Tower, execute from cron, use ansible-pull, ...).
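For example, the env lookup mentioned in the list above could live in a group vars file, so a value is read from the environment at play time without touching the playbook. A minimal sketch; the FOO variable name and fallback value are illustrative:
# group_vars/groupname
# FOO is a hypothetical environment variable; default(..., true) also covers the empty-string case
foo: "{{ lookup('env', 'FOO') | default('fallback-value', true) }}"
It would then be invoked as, e.g., FOO=bar ansible-playbook site.yml (playbook name illustrative).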
