How to read a YAML or env file using bash

I have a .yml file and it has content like below:
env_values:
  stage: build
  before_script: some-value
So I need to read and get the values for stage and before_script, so they will have build and some-value respectively.
Is there a good easy workaround in bash scripting to read this file and get the values?

Assuming your .yml file is t.yml, this Bash script gets the two values you need:
arr=($(grep "stage:" t.yml))
stage=${arr[1]}
echo "$stage"
arr=($(grep "before_script:" t.yml))
before_script=${arr[1]}
echo "$before_script"
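If the same word can appear elsewhere in the file, a slightly more defensive sketch matches the key as awk's first field instead of grepping anywhere in the line (the t.yml from the question is recreated here so the snippet is self-contained):

```shell
# Recreate the sample t.yml from the question
cat > t.yml <<'EOF'
env_values:
  stage: build
  before_script: some-value
EOF

# Match the key as the first whitespace-delimited field, print the value
stage=$(awk '$1 == "stage:" {print $2}' t.yml)
before_script=$(awk '$1 == "before_script:" {print $2}' t.yml)
echo "$stage"          # build
echo "$before_script"  # some-value
```

For anything beyond flat key/value pairs, a real YAML parser such as yq is the safer choice.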

Related

Gitlab CI: save command in variable

I need to run a big command in several jobs and save the results in dynamically created variables.
My idea: save the command as a variable and evaluate it in the script sections of all jobs.
For example:
.grep_command: &grep_command
  GREP_COMMAND: dotnet ef migrations list | grep "VERY_LONG_PATTERN_HERE"

job1:
  variables:
    <<: *grep_command
  script:
    # some job specific code
    - echo $GREP_COMMAND
    - VAR=$(${GREP_COMMAND}) # doesn't work

job2:
  variables:
    <<: *grep_command
  script:
    # some job specific code
    - echo $GREP_COMMAND
    - echo "VAR=$(${GREP_COMMAND})" > build.env # also doesn't work
I found the right way: define the command as a script snippet and use it in the script section, not in variables:

.grep-command: &grep-command
  - dotnet ef migrations list | grep "VERY_LONG_PATTERN_HERE"

job1:
  script:
    # some job specific code
    - *grep-command
(By the way, saving the command as a variable also works if you use it carefully, but I find it less clear: variables should stay variables, and commands should stay commands. Mixing them strikes me as bad practice.)
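To see why `VAR=$(${GREP_COMMAND})` fails while careful use still works: plain expansion only word-splits the string, so the pipe becomes a literal argument rather than a pipeline, whereas eval re-parses the string as shell code. A minimal sketch outside CI, with a hypothetical printf | grep standing in for the dotnet command:

```shell
# Hypothetical stand-in for: dotnet ef migrations list | grep "VERY_LONG_PATTERN_HERE"
GREP_COMMAND='printf "one\ntwo\n" | grep two'

# VAR=$(${GREP_COMMAND})    # fails: "|" and "grep" are passed to printf as arguments
VAR=$(eval "$GREP_COMMAND") # works: eval re-parses the string, so the pipeline runs
echo "$VAR"                 # two
```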

Passing bash script variables to .yml file for use as child and subdirectories

I'm trying to write a bash script test-bash.sh that will pass arguments from a txt file test-text.txt to a .yml file test-yaml.yml, to be used as child and subdirectory arguments (by that I mean e.g. /path/to/XXXX/child/ and /path/to/XXXX, where XXXX is the argument passed from the .sh script).
I'm really struggling to wrap my head around a way to do this so here is a small example of what I'm trying to achieve:
test-text.txt:
folder1
folder2
test-yaml.yml:
general:
  XXXX: argument_from_bash_script
  rawdatadir: '/some/data/directory/XXXX'
  input: '/my/input/directory/XXXX/input'
  output: '/my/output/directory/XXXX/output'
test-bash.sh:
#!/bin/bash
FILENAME="test-text.txt"
FOLDERS=$(cat $FILENAME)
for FOLDER in $FOLDERS
do
  ~ pass FOLDER to .yml file as argument ~
  ~ run stuff using edited .yml file ~
done
Where the code enclosed in '~' symbols is pseudo code.
I've found this page on using export, would this work in the looping case above or am I barking up the wrong tree with this?
Does envsubst solve your problem?
For example, if I have a test-yaml.yml that contains $foo:
cat test-yaml.yml
output:
general:
  $foo: argument_from_bash_script
  rawdatadir: '/some/data/directory/$foo'
  input: '/my/input/directory/$foo/input'
  output: '/my/output/directory/$foo/output'
You can replace $foo inside test-yaml.yml with the shell variable $foo using envsubst:
export foo=123
envsubst < test-yaml.yml
output:
general:
  123: argument_from_bash_script
  rawdatadir: '/some/data/directory/123'
  input: '/my/input/directory/123/input'
  output: '/my/output/directory/123/output'
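Putting the loop and the substitution together, a sketch of test-bash.sh (sed is used here so the example has no envsubst dependency; with envsubst installed, the sed line could be replaced by `foo=$FOLDER envsubst < test-yaml.yml > "config-$FOLDER.yml"`):

```shell
#!/bin/bash
# Recreate the sample inputs from the question
printf 'folder1\nfolder2\n' > test-text.txt
cat > test-yaml.yml <<'EOF'
general:
  $foo: argument_from_bash_script
  rawdatadir: '/some/data/directory/$foo'
  input: '/my/input/directory/$foo/input'
  output: '/my/output/directory/$foo/output'
EOF

while read -r FOLDER; do
  # Replace every literal $foo in the template with the current folder name
  sed "s|\$foo|$FOLDER|g" test-yaml.yml > "config-$FOLDER.yml"
  # ~ run stuff using config-$FOLDER.yml ~
done < test-text.txt
```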

How to output variables and values defined in a shell file

I want to print out all the variables defined in the file (not environment variables) so that I can quickly locate errors. I thought of printing them with echo, but that is unfriendly; is there an easy way to achieve this?
For example is as follow:
var1=${VAR1:-"test1"}
var2=${VAR2:-"test2"}
var3=${VAR1:-"test3"}
var4=${VAR1:-"test4"}
print like below:
var1=test1
var2=modify // modified by environment var
var3=test3
var4=test4
I really appreciate any help with this.
In Bash you can:
# List all variables to a file named before
declare -p > before
# source the file
. the_file
# list all variables to a file named after
declare -p > after
# difference between variables before and after sourcing the file
diff before after
You can combine this with env -i bash -c to get a clean environment.
The other way is just to write a parser for your file. A simple sed 's/=.*//' the_file will give you a list of all the variable names defined there.
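Combining the two ideas, a sketch: parse the variable names out of the file with sed, source the file, then print each name's resolved value via Bash indirect expansion (the file name vars.sh is assumed):

```shell
#!/bin/bash
# Recreate a sample file like the one in the question
cat > vars.sh <<'EOF'
var1=${VAR1:-"test1"}
var2=${VAR2:-"test2"}
EOF

. ./vars.sh

# Extract the names on the left of each "=", then expand them indirectly
for name in $(sed -n 's/^\([A-Za-z_][A-Za-z0-9_]*\)=.*/\1/p' vars.sh); do
  printf '%s=%s\n' "$name" "${!name}"
done
```

With VAR2=modify exported beforehand, the second line becomes var2=modify, matching the desired output in the question.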

How to render variables into a target file from different dotenv environment files like envsubst

I have all my env vars in .env files.
They get automatically loaded when I open my shell-terminal.
I normally render shell environment variables into my target files with envsubst, similar to the example below.
What I'm looking for is a solution where I can pass a dotenv file as well as my template file to a script which outputs the rendered result.
Something like this:
aScript --input .env.production --template template-file.yml --output result.yml
I want to be able to render different environment variables into my YAML. The output should be sealed via Sealed Secrets and finally saved in the corresponding kustomize folder:
envsub.sh .env.staging templates/secrets/backend-secrets.yml | kubeseal -o yaml > kustomize/overlays/staging
I hope you get the idea.
example
.env.production-file:
FOO=bar
PASSWORD=abc
content of template-file.yml
stringData:
  foo: $FOO
  password: $PASSWORD
Then running this:
envsubst < template-file.yml > file-with-vars.yml
the result is:
stringData:
  foo: bar
  password: abc
My approach so far does not work because dotenv also supports different environments like .env, .env.production, .env.staging, and so on.
What about:
#!/bin/sh
# envsub - substitute environment variables
env=$1
template=$2
sh -c "
. \"$env\"
cat <<EOF
$(cat "$template")
EOF"
Usage:
./envsub .env.production template-file.yaml > result.yaml
A here-doc with an unquoted delimiter (EOF) expands variables, whilst preserving quotes, backslashes, and other shell sequences.
sh -c is used like eval, to expand the command substitution, then run that output through a here-doc.
Be aware that this extra level of indirection creates potential for code injection, if someone can modify the yaml file.
For example, adding this:
EOF
echo malicious commands
But it does get the result you want.
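An alternative sketch that avoids the injection issue entirely: instead of evaluating the template through the shell, read the dotenv keys and build a sed script that replaces each $KEY literally. This assumes values contain no | or newlines and that no key is a prefix of another key; the function name envsub is carried over from the question:

```shell
#!/bin/sh
# envsub: render $KEY references in a template from a dotenv file,
# without running the template through the shell
envsub() {
  envfile=$1
  template=$2
  script=""
  while IFS='=' read -r key value; do
    # skip blank lines and comments
    case $key in ''|\#*) continue ;; esac
    script="$script s|\$$key|$value|g;"
  done < "$envfile"
  sed "$script" "$template"
}
```

Usage: envsub .env.production template-file.yml > result.yml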

Set gitlab-ci.yml variable via bash command

variables:
  CUSTOM_NODE_VERSION: '$${cat .nvmrc}'
I'd like for the variable CUSTOM_NODE_VERSION to be populated via the contents of the .nvmrc file (which is located in the projects root directory). How does one do this in the gitlab-ci.yml file?
The example above isn't working. I've also tried the following:
CUSTOM_NODE_VERSION: $(cat .nvmrc) -> (cat .nvmrc)
CUSTOM_NODE_VERSION: "$(cat .nvmrc)" -> (cat .nvmrc)
CUSTOM_NODE_VERSION: '$(cat .nvmrc)' -> (cat .nvmrc)
CUSTOM_NODE_VERSION: ${cat .nvmrc} -> (empty string)
CUSTOM_NODE_VERSION: '${cat .nvmrc}' -> (empty string)
CUSTOM_NODE_VERSION: "${cat .nvmrc}" -> (empty string)
It works if I put it in the before_script like the following:
before_script:
  - CUSTOM_NODE_VERSION=$(cat .nvmrc)
But it isn't accessible to the following part of the gitlab-ci.yml file:
lint:
  stage: Test
  image: node:$CUSTOM_NODE_VERSION
I also wanted to use a version string in a .gitlab-ci.yml file but for appending it to a Docker image name. I did it like this:
build:
  stage: build_images
  script:
    - API_VERSION=v$(grep -E -o "(version = )(.*)" pyproject.toml | cut -d\" -f2)
    - echo $API_VERSION
    # Build and push new images for staging.
    - docker pull $API_STAGING:latest
    - >-
      docker build --cache-from $API_STAGING:latest
      -t $API_STAGING:latest
      -t $API_STAGING:$CI_COMMIT_SHORT_SHA
      -t $API_STAGING:$API_VERSION
      -f dockerfiles/Dockerfile.staging .
    - docker push $API_STAGING
  tags:
    - build
The key line here is API_VERSION=v$(grep -E -o "(version = )(.*)" pyproject.toml | cut -d\" -f2).
Explanation: the string I'm trying to read in pyproject.toml is something like version = "0.17.1", and the result I wanted was v0.17.1
v is just a string I want to prepend to my version number
-E (--extended-regexp) invokes grep as egrep; allows use of special regexp characters
-o (--only-matching) doesn't make a difference in my use-case, but might be helpful in other cases (I'm not sure)
(version = ) and (.*): the two capture groups; the latter one captures anything after the space after the equal sign
Running just $ grep -E -o "(version = )(.*)" pyproject.toml will result in version = "0.17.1", so I'm not really using the capture groups; instead, I'm using cut
cut "cut[s] out selected portions of each line of a file"
-d\" sets the delimiter to a double-quote instead of the default (tab)
-f2 specifies the fields to return; a value of 1 would return everything before the first quote, i.e., version = , so 2 returns everything before the second quote and after the first, and 3 returns nothing in this example since there is no third double-quote-separated field
echo $API_VERSION just to see that it's working
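The grep-and-cut pipeline above can be checked locally; the relevant pyproject.toml line is recreated here from the version string quoted above:

```shell
# Recreate the relevant line of pyproject.toml
printf 'version = "0.17.1"\n' > pyproject.toml

# grep matches the whole 'version = "..."' line; cut then takes the field
# between the first and second double quotes
API_VERSION=v$(grep -E -o "(version = )(.*)" pyproject.toml | cut -d\" -f2)
echo "$API_VERSION"  # v0.17.1
```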
There are some parts of .gitlab-ci.yml where variables are usable and some parts where they are not.
The .yml file is parsed in GitLab itself and then the commands are executed by the runner, so setting a variable that is then used in the job config is not possible at this point. You could use a pre-defined secret variable, although that does not seem to fit your need.
There are issues tracking the documentation of what you can and cannot do:
Variables in docker service alias not supported
Document the list of places where you are allowed to use an env variable in .gitlab-ci.yml
You might want to try:
before_script:
- export CUSTOM_NODE_VERSION=$(cat .nvmrc)
in your before_script to make the variable available to subsequent shell commands.
