I want to print all the variables defined in a file (not the environment variables), so that I can quickly locate an error. I thought of printing them with echo, but that is not friendly. Is there an easy way to achieve this?
For example, given:
var1=${VAR1:-"test1"}
var2=${VAR2:-"test2"}
var3=${VAR3:-"test3"}
var4=${VAR4:-"test4"}
I want the output printed like below:
var1=test1
var2=modify # modified by an environment variable
var3=test3
var4=test4
I really appreciate any help with this.
In Bash you can:
# Dump all variables to a file named "before"
declare -p > before
# Source the file you want to inspect
. the_file
# Dump all variables to a file named "after"
declare -p > after
# Show the difference between variables before and after sourcing
diff before after
You can use env -i bash -c to run this in a clean environment, as sketched below.
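A minimal sketch of that clean-environment variant (reusing the_file from the example above):
env -i bash -c 'declare -p > before; . ./the_file; declare -p > after; diff before after'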
The other way is just to write a parser for your file. A simple sed 's/=.*//' the_file will give you a list of all the variable names being defined.
I have all my env vars in .env files.
They get automatically loaded when I open my shell-terminal.
I normally render shell environment variables into my target files with envsubst. similar to the example below.
What I search is a solution where I can pass a dotenv-file as well my template-file to a script which outputs the rendered result.
Something like this:
aScript --input .env.production --template template-file.yml --output result.yml
I want to be able to substitute different environment variables into my YAML. The output should be sealed via "Sealed Secrets" and finally saved in the corresponding kustomize folder:
envsub.sh .env.staging templates/secrets/backend-secrets.yml | kubeseal -o yaml > kustomize/overlays/staging
I hope you get the idea.
Example:
.env.production file:
FOO=bar
PASSWORD=abc
Content of template-file.yml:
stringData:
foo: $FOO
password: $PASSWORD
Then running this:
envsubst < template-file.yml > file-with-vars.yml
the result is:
stringData:
foo: bar
password: abc
My approach so far does not work because Dotenv also supports different environments like .env, .env.production, .env.staging, and so on.
What about:
#!/bin/sh
# envsub - substitute environment variables
env=$1
template=$2
sh -c "
. \"$env\"
cat <<EOF
$(cat "$template")
EOF"
Usage:
./envsub .env.production template-file.yaml > result.yaml
A here-doc with an unquoted delimiter (EOF) expands variables, whilst preserving quotes, backslashes, and other shell sequences.
sh -c is used like eval, to expand the command substitution, then run that output through a here-doc.
Be aware that this extra level of indirection creates potential for code injection, if someone can modify the yaml file.
For example, adding this:
EOF
echo malicious commands
But it does get the result you want.
I'm working on my CI/CD in GitLab.
I have set up many environment variables, most of which are standard strings/numbers. I've prefixed them with "APP_" so that during CI/CD I export only the required variables to my project. I do it this way:
export | grep APP_ | sed -e 's/APP_//g' | sed -e 's/declare -x //g' > ./app/settings/.env
This will basically take all the environment variables prefixed with APP_, remove the APP_ prefix, and store all of them in the file ./app/settings/.env.
This works like a charm
Now I'd like to do something similar for my file environment variables. I've created them with a "FILE_" prefix, so I'd like to:
create one file per environment variable starting with "FILE_", naming each file after its environment variable (but without the FILE_ prefix)
store the files in ./app/settings/files
How should I do so?
At the moment I'm doing it one by one, but this isn't what I'd like:
echo "$FILE_MY_CERTIFICATE" > "./app/settings/files/my_certificate"
P.S. For those experts in GitLab env variables: I'm doing it like this because I'm unable to use the standard "file" environment variable feature integrated in GitLab. The variables aren't in my build project, so I'd like to find a workaround that suits my needs.
I do it this way:
Forget your code. It's just:
declare -p "${!APP_#}" > ./app/settings/.env
If you really, specifically want to parse the output of something like export, use env -0 instead.
How should I do so ?
declare -p "${!FILE_#}" > ./app/settings/files
I believe this may be a XY question and you should instead rethink your algorithm to use arrays or associative arrays instead.
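For instance, a sketch of the associative-array alternative (the names here are purely illustrative):
declare -A app_settings=([db_host]="localhost" [db_port]="5432")
for key in "${!app_settings[@]}"; do
  printf '%s=%s\n' "${key^^}" "${app_settings[$key]}" # ${key^^} uppercases the key
done > ./app/settings/.env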
I have to introduce some templating over text configuration files (YAML, XML, JSON) that already contain bash-like variables. I need to preserve the existing bash-like variables untouched but substitute my own. The list is dynamic, and the variables should come from the environment. Something like a simple processor handling a ${{MY_VAR}} pattern but ignoring $MY_VAR. Preferably pure Bash, or with as little extra tooling as possible.
The pattern could be $$(VAR) or anything else that can easily be separated from ${VAR} and $VAR. The key limitation: it is intended for a Docker container startup procedure that injects environment variables into the provided service configuration templates, building the configuration that way. So something like Java or even Perl processing is not an option.
Does anybody have a simple approach?
I was using the following bash processing for such variable substitution when the original files had no variables. But now I need something one step smarter.
# Process input file ($1), placing output into ($2), with shell variable substitution.
process_file() {
    set -e
    eval "cat <<EOF
$(<"$1")
EOF
" > "$2"
}
An obvious, clean solution that is too complex for the Dockerfile because of the number of packages needed:
perl -p -e 's/\$\{\{([^}]+)\}\}/defined $ENV{$1} ? $ENV{$1} : $&/eg' < test.json
This substitutes the ${{VAR}} patterns, and even better, only the ones that are actually set.
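If Perl is off the table, here is a pure-bash sketch of the same idea (the render function name and the exact regex are my own; unset variables keep their original ${{VAR}} pattern):
render() {
    local re='\$\{\{([A-Za-z_][A-Za-z0-9_]*)\}\}'
    local rest out=""
    rest=$(<"$1")
    while [[ $rest =~ $re ]]; do
        local pat=${BASH_REMATCH[0]} name=${BASH_REMATCH[1]}
        out+=${rest%%"$pat"*} # text before the first match
        if [[ -v $name ]]; then out+=${!name}; else out+=$pat; fi
        rest=${rest#*"$pat"} # continue after the match
    done
    printf '%s\n' "$out$rest"
}
render test.json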
I'm trying to source a variable list which is populated into one single variable in bash.
I then want to source this single variable so that its contents (which are other variable assignments) become available to the script.
I want to achieve this without having to spool the sqlplus output to a file and then source that file (I tried this and it already works).
Please find below what I'm trying:
#!/bin/bash
var_list=$(sqlplus -S /#mydatabase << EOF
set pagesize 0
set trimspool on
set headsep off
set echo off
set feedback off
set linesize 1000
set verify off
set termout off
select varlist from table;
EOF
)
# This already works when I echo any variable from the list
#echo "$var_list" > var_list.dat
#. var_list.dat
#echo "$var1, $var2, $var3"
# I'm trying to achieve the following
. $(echo "var_list")
echo "$any_variable_from_var_list"
The contents of var_list from the database are as follows:
var1="Test1"
var2="Test2"
var3="Test3"
I also tried sourcing it in other ways, such as:
. <<< $(echo "$var_list")
. $(cat "$var_list")
I'm not sure if I now need to read each line in using a while loop.
Any advice is appreciated.
You can:
. /dev/stdin <<<"$var_list"
<<< is a here string. It redirects the data after <<< to standard input.
/dev/stdin represents standard input, so reading from file descriptor 0 is like opening /dev/stdin and calling read() on the resulting file descriptor.
Because the source command needs a filename, we pass it /dev/stdin and redirect the data to be read to standard input. That way source reads the commands from standard input while thinking it's reading from a file.
Using /dev/stdin for tools that expect a file is quite common. I have no idea what references to give, so I'll link: the bash manual on here strings, POSIX.1 base definitions 2.1.1p4 (last bullet point), the Linux kernel documentation on /dev/ directory entries, the bash manual on shell builtins, and maybe C99 7.19.3p7.
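A quick demonstration, using the var_list contents from the question:
var_list='var1="Test1"
var2="Test2"
var3="Test3"'
. /dev/stdin <<<"$var_list"
echo "$var1, $var2, $var3"
# Test1, Test2, Test3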
I needed a way to store dotenv values in files locally, and in variables for DevOps pipelines, so I could then source them into the runtime environment on demand (from a file when available and from variables when not). More than that, I needed to store different dotenv sets in different variables and use them based on the source branch (which I load into $ENV in .gitlab-ci.yml via export ENV=${CI_COMMIT_BRANCH:=develop}). With this I'll have developEnv, qaEnv, and productionEnv, each being a variable containing its appropriate dotenv contents (being redundant to be clear).
unset FOO; # Clear FOO so we can confirm it gets loaded below
ENV=develop; # Simulate the branch-derived environment name
developEnv="VERSION=1.2.3
FOO=bar"; # A simple dotenv in a var, with a linebreak (as the files passed through will have)
envVarName=${ENV}Env # Our dynamic var-name
source <(cat <<< "${!envVarName}") # Source the contents of the dynamically named var
echo $FOO;
# bar
I want to run a Bash function that defines a few variables and then, after the function has run, list all of the variables that it has 'attempted' to define. Such a list would include the names of those variables that were pre-existing but were unchanged in value by the function. Could you point me in the right direction on this?
EDIT: I have added some code below both to explain further what it is that I want to accomplish and to offer inspiration.
#!/bin/bash
variable1="zappo"
#listOfVariables1="$(printenv)"
listOfVariables1="$(set)"
function1(){
variable1="zappo"
}
function1
#listOfVariables2="$(printenv)"
listOfVariables2="$(set)"
echo -e "\ncomparing initial list of variables with final list of variables...\n"
diff <(echo "${listOfVariables1}") <(echo "${listOfVariables2}")
The set command will display all shell variables, including user-defined ones, not just the environment variables.
To easily compare the variable list (which can be quite long) before and after a function call, you could use diff,
e.g.
set > set_before.tmp
my_func
set > set_after.tmp
diff set_before.tmp set_after.tmp
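To see only the changed assignments rather than the full diff context, you can filter the output (a sketch; lines starting with ">" are assignments that are new or changed after the function ran):
set > set_before.tmp
my_func
set > set_after.tmp
diff set_before.tmp set_after.tmp | grep '^>'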
Try editing your bash function to print the environment at the end of its execution. Then you can compare (diff list1 list2) the list you get with the printenv results taken before running your script.
#!/bin/bash
# Snapshot the environment before running anything
printenv > /tmp/list1
# Build a wrapper script: the original script's body plus a final environment snapshot
{
cat "$(which your_function)"
echo 'printenv > /tmp/list2'
} > /tmp/fake.sh
chmod +x /tmp/fake.sh
/tmp/fake.sh
# Show what the script changed in the environment
diff /tmp/list1 /tmp/list2
Also, if you just need to set these variables, you need to run . $(which your_function); the variables will not be changed in the parent scope if you simply run your_function.
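A quick illustration of the scoping difference (set_vars.sh is a hypothetical script containing just myvar="hello"):
./set_vars.sh # runs in a child process
echo "$myvar" # prints nothing: the child's variables are gone
. ./set_vars.sh # sourced: runs in the current shell
echo "$myvar" # prints hello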