I have a variable AAA which is in this format
AAA='BBB=1 CCC=2 DDD=3'
How can I use this to set environment variables BBB, CCC and DDD in a command I run (without permanently exporting them to the shell)? That is, I want to use the above to do something identical to:
# this works correctly: node is run with BBB, CCC, and DDD in its environment
BBB=1 CCC=2 DDD=3 node index.js
...however, when I try:
# this does not work: BBB=1 is run as a command, so it causes "command not found"
$AAA node index.js
...it tries to run BBB=1 as a command, instead of parsing the assignment as a variable to set in node's environment.
If you can, use a different format.
There are several better options:
An array.
envvars=( BBB=1 CCC=2 DDD=3 )
env "${envvars[@]}" node index.js
A NUL-delimited stream (the ideal format for saving environment variables in a file -- this is the format your operating system uses for /proc/self/environ, for example).
Saving to a file:
```
printf '%s\0' 'foo=bar' \
'baz=qux' \
$'evil=$(rm -rf importantDir)\'$(rm -rf importantDir)\'\nLD_LIBRARY_PATH=/tmp/evil.so' \
> envvars
```
...or, even more simply (on Linux):
```
# save all your environment variables (as they existed at process startup)
cp /proc/self/environ envvars
```
Restoring from that file, and using it:
```
mapfile -d '' vars <envvars
env "${vars[#]}" node.js
```
But whatever you do, don't use eval
Remember that evil environment variable I set above? It's a good example of a variable that poorly-written code can't set correctly. If you run it through eval, it deletes all your files. If you read it from a newline-delimited (rather than NUL-delimited) file, it sets an extra LD_LIBRARY_PATH variable that tells your operating system to load a shared library from an untrustworthy location. But those are just a few cases. Consider also:
## DO NOT DO THIS
AAA='BBB=1 CCC=2 DDD="value * with * spaces"'
eval $AAA node foo.js
Looks simple, right? Except that what eval does with it is not simple at all:
First, before eval is started, your parameters and globs are expanded. Let's say your current directory contains files named 1 and 2:
'eval' 'BBB=1' 'CCC=2' 'DDD="value' '1' '2' 'with' '1' '2' 'spaces"' 'node' 'foo.js'
Then, eval takes all the arguments it's passed, and gloms them all together into a single string.
eval "BBB=1 CCC=2 DDD="value 1 2 with 1 2 spaces" node foo.js
Then that string is parsed from the very beginning
...which means that if instead of having a file named 1 you had a file named $(rm -rf ~) (a perfectly valid filename!), you would have a very, very bad day.
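If you want to see the hazard for yourself without risking anything, here is a harmless sketch that reproduces the expansion step (in a hypothetical scratch directory):

```
# Demonstrates the word-splitting and globbing; no eval involved:
mkdir demo && cd demo && touch 1 2
AAA='BBB=1 CCC=2 DDD="value * with * spaces"'
printf '<%s> ' $AAA; echo
# <BBB=1> <CCC=2> <DDD="value> <1> <2> <with> <1> <2> <spaces">
```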
Related
I have a Makefile in which I'm trying to loop over a series of strings in a recipe and make them lower case.
My goal: I have a series of commands I would like to run on different files with the appropriate suffix.
# files in directory: test_file1.txt test_file2.txt test_file3.txt
MYLIST = file1 file2 file3
recipe:
for name in $(MYLIST) ; do \
$(eval FILENAME=`echo $($name) | tr A-Z a-z`) \
echo "Final name : test_${FILENAME}.txt" ; \
done
My problem: FILENAME always resolves to blank:
File name: test_.txt
I hope to see:
File name: test_file1.txt
File name: test_file2.txt
File name: test_file3.txt
You cannot mix make functions and shell commands like this. Make works like this: when it decides that your target is out of date and the recipe must run, it first expands the entire recipe string, then sends that expanded string to the shell to run.
So in your case, the $(eval ...) (which is a make operation) is expanded one time, then the resulting string is passed to the shell to run. The shell runs the for loop and all that stuff.
You have to use shell variables here to store values obtained by running your shell for loop. You cannot use make variables or make functions.
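For example, a version of the recipe that does the lowercasing entirely in the shell might look like this (a sketch, assuming the list entries are upper case so there is something for tr to do; $$ escapes the dollar signs from make, and recipe lines must start with a literal tab):

```
# files expected: test_file1.txt test_file2.txt test_file3.txt
MYLIST = FILE1 FILE2 FILE3

recipe:
	for name in $(MYLIST) ; do \
	  filename=$$(echo $$name | tr A-Z a-z) ; \
	  echo "File name: test_$${filename}.txt" ; \
	done
```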
In general if you ever think about using $(shell ...) or $(eval ...) inside a recipe, you are probably going down the wrong road.
I have all my env vars in .env files.
They get loaded automatically when I open my shell terminal.
I normally render shell environment variables into my target files with envsubst, similar to the example below.
What I'm looking for is a solution where I can pass a dotenv file as well as my template file to a script which outputs the rendered result.
Something like this:
aScript --input .env.production --template template-file.yml --output result.yml
I want to be able to pass different environment variables into my YAML. The output should be sealed via "Sealed Secrets" and finally saved in the corresponding kustomize folder:
envsub.sh .env.staging templates/secrets/backend-secrets.yml | kubeseal -o yaml > kustomize/overlays/staging
I hope you get the idea.
Example
.env.production file:
FOO=bar
PASSWORD=abc
Content of template-file.yml:
stringData:
foo: $FOO
password: $PASSWORD
Then running this:
envsubst < template-file.yml > file-with-vars.yml
the result is:
stringData:
foo: bar
password: abc
My approach so far does not work because Dotenv also supports different environments like .env, .env.production, .env.staging, and so forth.
What about:
#!/bin/sh
# envsub - substitute environment variables
env=$1
template=$2
sh -c "
. \"$env\"
cat <<EOF
$(cat "$template")
EOF"
Usage:
./envsub .env.production template-file.yaml > result.yaml
A here-doc with an unquoted delimiter (EOF) expands variables, whilst preserving quotes, backslashes, and other shell sequences.
sh -c is used like eval, to expand the command substitution, then run that output through a here-doc.
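A tiny standalone illustration of the unquoted here-doc expansion:

```
NAME=world
cat <<EOF
hello $NAME
literal: \$NAME
EOF
# prints: hello world
#         literal: $NAME
```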
Be aware that this extra level of indirection creates potential for code injection, if someone can modify the yaml file.
For example, adding this:
EOF
echo malicious commands
But it does get the result you want.
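If you would rather avoid that extra level of shell parsing entirely, a sketch of an alternative that keeps envsubst (which performs no command execution, so it sidesteps the injection risk): source the dotenv file in a subshell with set -a, so its variables are exported for envsubst without leaking into the calling shell:

```
# set -a marks every assignment in the sourced file for export:
(set -a; . ./.env.production; envsubst < template-file.yml) > result.yml
```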
I have to introduce some templating over text configuration files (YAML, XML, JSON) that already contain bash-like syntax variables. I need to preserve the existing bash-like variables untouched but substitute my own. The list is dynamic, and the variables should come from the environment. Something like a simple processor taking a "${{MY_VAR}}" pattern but ignoring $MY_VAR. Preferably pure Bash, or with as little extra tooling as possible.
The pattern could be $$(VAR) or anything that can be easily separated from ${VAR} and $VAR. The key limitation: it is intended for a Docker container startup procedure that injects environment variables into provided service configuration templates, building the configuration that way. So something like Java or even Perl processing is not an option.
Does anybody have a simple approach?
I was using the following bash processing for such variable substitution, where the original files had no variables. But now I need something one step smarter.
# process input file ($1), placing output into ($2), with shell variable substitution.
process_file() {
  set -e
  eval "cat <<EOF
$(<"$1")
EOF
" > "$2"
}
An obvious clean solution, which is too heavyweight for a Dockerfile because of the number of packages needed:
perl -pe 's/\$\{\{([^}]+)\}\}/defined $ENV{$1} ? $ENV{$1} : $&/eg' < test.json
This substitutes ${{VAR}} patterns, and even better, only the ones whose variables are actually set.
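If perl is off the table, here is a pure-Bash sketch of the same idea (assuming bash 4.2+ for [[ -v ]], variable names matching [A-Za-z_][A-Za-z_0-9]*, and values that do not themselves contain the ${{ pattern; the \x01/\x02 bytes are arbitrary markers assumed absent from the input):

```
#!/usr/bin/env bash
# render - substitute ${{VAR}} patterns from the environment,
# leaving ${VAR} and $VAR untouched.
render() {
  local text name open='${{' close='}}'
  text=$(<"$1")
  while [[ $text =~ \$\{\{([A-Za-z_][A-Za-z_0-9]*)\}\} ]]; do
    name=${BASH_REMATCH[1]}
    if [[ -v $name ]]; then
      # Replace every occurrence of this exact pattern with the value.
      text=${text//"${BASH_REMATCH[0]}"/${!name}}
    else
      # Park unset patterns behind markers so the loop terminates.
      text=${text//"${BASH_REMATCH[0]}"/$'\x01'$name$'\x02'}
    fi
  done
  # Restore the patterns whose variables were not set.
  text=${text//$'\x01'/$open}
  text=${text//$'\x02'/$close}
  printf '%s\n' "$text"
}

# Usage: FOO=bar render test.json > result.json
```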
I'm trying to source a variable list which is populated into one single variable in bash.
I then want to source this single variable so that its contents (which are other variables) become available to the script.
I want to achieve this without having to spool the sqlplus output to a file and then source that file (that approach already works, as I tried it).
Please find below what I'm trying:
#!/bin/bash
var_list=$(sqlplus -S /@mydatabase << EOF
set pagesize 0
set trimspool on
set headsep off
set echo off
set feedback off
set linesize 1000
set verify off
set termout off
select varlist from table;
EOF
)
#This already works when I echo any variable from the list
#echo "$var_list" > var_list.dat
#. var_list.dat
#echo "$var1, $var2, $var3"
#I'm trying to achieve the following
. $(echo "var_list")
echo "$any_variable_from_var_list"
The contents of var_list from the database are as follows:
var1="Test1"
var2="Test2"
var3="Test3"
I also tried sourcing it other ways such as:
. <<< $(echo "$var_list")
. $(cat "$var_list")
I'm not sure if I need to read in each line using a while loop.
Any advice is appreciated.
You can:
. /dev/stdin <<<"$varlist"
<<< is a here string. It redirects the content of data behind <<< to standard input.
/dev/stdin represents standard input. So reading from the 0 file descriptor is like opening /dev/stdin and calling read() on the resulting file descriptor.
Because the source command needs a filename, we pass it /dev/stdin and redirect the data to be read to standard input. That way source reads the commands from standard input, thinking it's reading from a file, while we supply the data we want on that input.
Using /dev/stdin for tools that expect a file is quite common. For references, see: the bash manual on here strings, POSIX.1 Base Definitions 2.1.1p4 (last bullet point), the Linux kernel documentation on /dev/ directory entries, the bash manual on shell builtins, and maybe C99 7.19.3p7.
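Applied to the question's script (assuming var_list holds the var1=/var2=/var3= lines shown above):

```
. /dev/stdin <<< "$var_list"
echo "$var1, $var2, $var3"   # -> Test1, Test2, Test3
```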
I needed a way to store dotenv values in files locally, and in vars for DevOps pipelines, so I could then source them into the runtime environment on demand (from a file when available, and from vars when not). More than that, I needed to store different dotenv sets in different vars and use them based on the source branch (which I load into $ENV in .gitlab-ci.yml via export ENV=${CI_COMMIT_BRANCH:=develop}). With this I'll have developEnv, qaEnv, and productionEnv, each being a var containing its appropriate dotenv contents (being redundant to be clear).
unset FOO; # Clear FOO so we can confirm it gets loaded below
ENV=develop; # Normally derived from the branch name, as described above
developEnv="VERSION=1.2.3
FOO=bar"; # A simple dotenv in a var, with a linebreak (as the files passed through will have)
envVarName=${ENV}Env # Our dynamic var name
source <(cat <<< "${!envVarName}") # Source the contents via indirect expansion
echo $FOO;
# bar
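A slightly simpler variant of the same line, reusing the /dev/stdin redirection from the previous answer (same effect, one fewer process):

```
source /dev/stdin <<< "${!envVarName}"
echo $FOO   # bar
```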
I would like to set up some files for development, staging and production with environment variables, for example:
application_root/development.env
KEY1=value1
KEY2=value2
There would be similar files staging.env and production.env.
I am looking for a couple different bash scripts which would allow the loading of all these variables in either development or staging/production.
In local development I want to effectively run export KEY1=value1 for each line in the file.
For staging/production I will be deploying to Heroku and would like to effectively run heroku config:set -a herokuappname KEY1=value1 for each line in the staging or production.env files.
I know there are some gems designed for doing this, but it seems like this might be pretty simple. I also like the flexibility of having the .env files as simple lists of keys and values, not specifically tied to any language/framework. If I had to change something about the way these variables are loaded, it would be a matter of changing the script, not the .env files.
In the simplest form, you can load the key-value pairs into a bash array as follows:
IFS=$'\n' read -d '' -ra nameValuePairs < ./development.env
In Bash v4+, it's even simpler:
readarray -t nameValuePairs < ./development.env
You can then pass the resulting "${nameValuePairs[@]}" array to commands such as export or heroku config:set ...; e.g.:
export "${nameValuePairs[@]}"
Note, however, that the above only works as intended if the input *.env file meets all of the following criteria:
the keys are syntactically valid shell variable names and the lines have the form <key>=<value>, with no whitespace around =
the lines contain no quoting and no leading or trailing whitespace
there are no empty/blank lines or comment lines in the file.
the values are confined to a single line each.
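A quick illustration of why the no-quoting rule matters (using a hypothetical bad.env, not from the question):

```
printf 'KEY3="quoted value"\n' > bad.env
readarray -t pairs < bad.env
export "${pairs[@]}"
echo "$KEY3"   # "quoted value"  <- the quotes become part of the value
```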
A different approach is needed with files that do not adhere to this strict format; for instance, this related question deals with files that may contain quoted values.
Below is the source code for a bash script named load_env (the .sh suffix is generally not necessary and ambiguous):
You'd invoke it with the *.env file of interest, and it would perform the appropriate action (running heroku config:set … or export) based on the filename.
However, as stated, you must source the script (using source or its effective bash alias, .) in order to create environment variables (export) visible to the current shell.
To prevent obscure failures, the script complains if you pass a development.env file and have invoked the script without sourcing.
Examples:
./load_env ./staging.env
. ./load_env ./development.env # !! Note the need to source
load_env source code
#!/usr/bin/env bash
# Helper function that keeps its aux. variables localized.
# Note that the function itself remains defined after sourced invocation, however.
configOrExport() {
local envFile=$1 doConfig=0 doExport=0 appName
case "$(basename "$envFile" '.env')" in
staging)
doConfig=1
# Set the desired app name here.
appName=stagingapp
;;
production)
doConfig=1
# Set the desired app name here.
appName=productionapp
;;
development)
doExport=1
;;
*)
echo "ERROR: Invalid or missing *.env file name: $(basename "$envFile" '.env')" >&2; exit 2
esac
# Make sure the file exists and is readable.
[[ -r "$envFile" ]] || { echo "ERROR: *.env file not found or not readable: $envFile" >&2; exit 2; }
# If variables must be exported, make sure the script is being sourced.
[[ $doExport -eq 1 && $0 == "$BASH_SOURCE" ]] && { echo "ERROR: To define environment variables, you must *source* this script." >&2; exit 2; }
# Read all key-value pairs from the *.env file into an array.
# Note: This assumes that:
# - the keys are syntactically valid shell variable names
# - the lines contain no quoting and no leading or trailing whitespace
# - there are no empty/blank lines or comment lines in the file.
IFS=$'\n' read -d '' -ra nameValuePairs < "$envFile"
# Run configuration command.
(( doConfig )) && { heroku config:set -a "$appName" "${nameValuePairs[@]}" || exit; }
# Export variables (define as environment variables).
(( doExport )) && { export "${nameValuePairs[@]}" || exit; }
}
# Invoke the helper function.
configOrExport "$#"