Bash: export .env variables [duplicate]

Let's say I have a .env file that contains lines like the ones below:
USERNAME=ABC
PASSWORD=PASS
Unlike normal scripts, these lines have no export prefix, so I cannot simply source the file to get the variables exported.
What's the easiest way to create a shell script that loads the contents of the .env file and sets them as environment variables?

If your lines are valid, trusted shell except for the missing export
This requires the file to use appropriate shell quoting. It is thus suitable if a line reads foo='bar baz', but not if that same line is written foo=bar baz.
set -a # automatically export all variables
source .env
set +a
If your lines are not valid shell
The loop below reads key/value pairs and does not expect or honor shell quoting.
while IFS='=' read -r key value; do
printf -v "$key" %s "$value" && export "$key"
done <.env
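After running either snippet against the .env from the question, the variables should be both set and exported; a minimal verification sketch:
# assumes one of the snippets above has already processed .env
printf 'USERNAME=%s PASSWORD=%s\n' "$USERNAME" "$PASSWORD"   # -> USERNAME=ABC PASSWORD=PASS
env | grep -E '^(USERNAME|PASSWORD)='                        # both appear, so they are exported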

This will export everything in .env:
export $(xargs <.env)
Edit: this requires the values not to contain whitespace. If that does not match your use case, you can use the solution provided by Charles.
Edit2: I recommend adding a function to your profile for this in any case so that you don't have to remember the details of set -a or how xargs works.
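As a minimal sketch of such a profile function (the name load_env and the default file name are assumptions, and it assumes the file is valid, trusted shell as described above):
# drop this into ~/.bashrc or ~/.bash_profile
load_env() {
    local file="${1:-.env}"     # default to .env in the current directory
    set -a                      # export everything the file assigns
    . "$file"
    set +a
}
Then load_env or load_env path/to/other.env exports that file's assignments into the current shell.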

This is what I use:
function load_dotenv(){
# https://stackoverflow.com/a/66118031/134904
source <(cat "$1" | sed -e '/^#/d;/^\s*$/d' -e "s/'/'\\\''/g" -e "s/=\(.*\)/='\1'/g")
}
set -a
[ -f "test.env" ] && load_dotenv "test.env"
set +a
If you're using direnv, know that it already supports .env files out of the box :)
Add this to your .envrc:
[ -f "test.env" ] && dotenv "test.env"
Docs for direnv's stdlib: https://direnv.net/man/direnv-stdlib.1.html
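For completeness, direnv refuses to load an .envrc until it has been explicitly approved, so the typical workflow looks something like this (directory layout is just an example):
cd my-project
echo 'dotenv test.env' >> .envrc   # or the guarded [ -f ... ] && dotenv line shown above
direnv allow .                     # approve the .envrc; direnv then loads test.env on cd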

Found this:
http://www.commandlinefu.com/commands/view/12020/export-key-value-pairs-list-as-environment-variables
while read line; do export $line; done < <(cat input)
UPDATE: I've got it working as below:
#!/bin/sh
while read line; do export $line; done < .env

Use the command below on Ubuntu:
$ export $(cat .env)
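This has the same whitespace limitation noted above, and it also trips over comment lines; a hedged variant that filters those out first (still only safe for simple KEY=value lines):
export $(grep -v '^#' .env | xargs)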

Related

export environment variable from bash script with argument

I am trying to export an environment variable using a bash script which takes one argument. I tried to run the script using the source command, but it does not work.
source ./script.sh dev
My example script below
#!/bin/bash
# Check 1 argument is passed with the script
if [ $# -ne 1 ]
then
echo "Usage : $0 AWS account: e.g. $0 dev"
exit 0
fi
# convert the input to uppercase
aws_env=$( tr '[:lower:]' '[:upper:]' <<<"$1" )
if ! [[ "$aws_env" =~ ^(DEV|UAT|TRN|PROD)$ ]]; then
# check that correct account is provided
echo "Enter correct AWS account: dev or uat or trn or prod"
exit 0
else
# export environment variables
file="/home/xyz/.aws/key_${aws_env}"
IFS=$'\n' read -d '' -r -a lines < $file
export AWS_ACCESS_KEY_ID=${lines[0]}
export AWS_SECRET_ACCESS_KEY=${lines[1]}
export AWS_DEFAULT_REGION=ap-southeast-2
echo "AWS access keys and secret has been exported as environment variables for $aws_env account"
fi
Content of the file /home/xyz/.aws/key_DEV. It's a sample only, not real keys.
$ cat key_DEV
123
xyz
When I say it does not work: nothing appears in the terminal, and the terminal closes when I run the script.
Further update:
When I run the script as is from the terminal without source (./script.sh dev) it seems to be working fine, with the debug (set -x), I can see all the outputs are correct.
However, the issue is when I run it with source (source ./script.sh dev): it fails (it closes the terminal, and now I know why: because of exit 0), and from the debug output I can see that the $1 argument is not being captured correctly. I get the error message "Enter correct AWS account: dev or uat or trn or prod", and the value of the $aws_env variable is blank.
I don't know why the two behaviors are different and how to fix it.
Final update:
The script seems to be fine. The issue was local to my computer: tr was defined as an alias in the .bashrc file, which was causing the problem. I just used typeset -u aws_env="$1" instead of aws_env=$( tr '[:lower:]' '[:upper:]' <<<"$1" ). Thank you all for helping me get this one resolved, especially @markp-fuso.
Try using mapfile instead of read like so...
#!/usr/bin/env bash
# Check 1 argument is passed with the script
if [ $# -ne 1 ]
then
echo "Usage : $0 AWS account: e.g. $0 dev"
exit 0
fi
# convert the input to uppercase
aws_env=$( tr '[:lower:]' '[:upper:]' <<<"$1" )
if ! [[ "$aws_env" =~ ^(DEV|UAT|TRN|PROD)$ ]]; then
# check that correct account is provided
echo "Enter correct AWS account: dev or uat or trn or prod"
exit 0
else
# export environment variables
file="/home/xyz/.aws/key_${aws_env}"
# IFS=$'\n' read -d '' -r -a lines < $file
mapfile -t lines < "$file"
export AWS_ACCESS_KEY_ID=${lines[0]}
export AWS_SECRET_ACCESS_KEY=${lines[1]}
export AWS_DEFAULT_REGION=ap-southeast-2
echo "AWS access keys and secret has been exported as environment variables for $aws_env account"
fi
Make sure you also put quotation marks around "$file"
Another little tip: Bash supports uppercasing variables directly, like so:
var="upper"
echo "${var^^}"

env -0 dump environment. But how to load it?

The Linux command line tool env can dump the current environment.
Since some values contain special characters, I want to use env -0 (end each output line with a 0 byte rather than a newline).
But how do I load this dump again?
Bash Version: 4.2.53
Don't use env; use declare -px, which outputs the values of exported variables in a form that can be re-executed.
$ declare -px > env.sh
$ source env.sh
This also gives you the possibility of saving non-exported variables as well, which env does not have access to: just use declare -p (dropping the -x option).
For example, if you wrote foo=$'hello\nworld', env produces the output
foo=hello
world
while declare -px produces the output
declare -x foo="hello
world"
If you want to load the dump produced by env, you can use what is described in "Set environment variables from file":
env > env_file
set -o allexport
source env_file
set +o allexport
But if you dump it with -0, then (from man env):
-0, --null
end each output line with 0 byte rather than newline
So you can loop through the file using 0 as the character delimiter to mark the end of the line (more description in What does IFS= do in this bash loop: cat file | while IFS= read -r line; do … done):
env -0 > env_file
while IFS= read -r -d $'\0' var
do
export "$var"
done < env_file
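The same loop works without the temporary file if you feed env -0 through process substitution (and -d '' is equivalent to -d $'\0'):
while IFS= read -r -d '' var
do
  export "$var"
done < <(env -0)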

Bash: echo extract variables

Suppose there's a script called 'test.sh':
#!/bin/bash
while read line; do
APP=/apps echo "$line"
done < ./lines
And the 'lines':
cd $APP && pwd
If I bash test.sh, it prints out 'cd $APP && pwd'.
But when I type APP=/apps echo "cd $APP && pwd" in the terminal, it prints out 'cd /apps && pwd'.
Is it possible, using echo, to expand variables in lines read from a regular file?
Depending on the contents of the file, you may want to use eval:
#!/bin/bash
APP=/apps
while read line; do
eval "echo \"$line\"" # WARNING: dangerous
done < ./lines
However, eval is extremely dangerous. Although the quoting here will work for simple cases, it is quite easy to execute arbitrary commands by manipulating the input.
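If all you need is variable substitution (no command execution), a less risky sketch is envsubst from GNU gettext, assuming it is installed; it only expands references to exported variables and never runs the line:
export APP=/apps
envsubst < ./lines        # prints: cd /apps && pwd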
You should use eval to evaluate the line string read from the file.
If you know the variable(s) you want to substitute, just substitute them.
sed 's%\$APP\>%/apps%g' ./lines

how can I turn config ini file into system environment in bash?

I have config files like below
# this is sample config file like config.ini
APP_HOME=/usr/local/bin
DATABASE_DIR=/usr/local/database
Normally, in order for them to be accessible as environment variables, each line needs an export in front:
# this is sample config file like config.rc
export APP_HOME=/usr/local/bin
export DATABASE_DIR=/usr/local/database
And I can
$ source config.rc
$ echo "APP_HOME is $APP_HOME"
APP_HOME is /usr/local/bin
Now, what is the easiest one-line command to turn the config file config.ini into environment variables? It could be combined with a sed/awk command.
You can tell the shell to automatically export variables, if you really do need them exported.
set -a # turn on automatic export
source config.ini # execute all commands in the file
set +a # turn off automatic export
sed 's/^/export /g' config.ini > config.sh && source config.sh
The sed command adds 'export ' to the beginning of each line of config.ini and redirects the output to config.sh; the source shell builtin then reads and executes the exports in the current shell environment.
To export the variables in that file after you source it:
export $(grep -oP '^\s*\K[_[:alpha:]]\w+(?==)' config.ini)
one line:
cat 1.ini | awk '{print "export "$0}'
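As written, that only prints the export statements; to actually set them in the current shell you still need to evaluate the output, for example via process substitution (the grep is added here to skip the comment line in the sample config.ini):
source <(grep -v '^#' config.ini | awk '{print "export " $0}')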

use external file with variables

The following is an iptables save file, which I modified by inserting some variables, as you can see below.
-A OUTPUT -o $EXTIF -s $UNIVERSE -d $INTNET -j REJECT
I also have a bash script which defines these variables and should call iptables-restore with the save file above.
#!/bin/sh
EXTIF="eth0"
INTIF="eth1"
INTIP="192.168.0.1/32"
EXTIP=$(/sbin/ip addr show dev "$EXTIF" | perl -lne 'if(/inet (\S+)/){print$1;last}');
UNIVERSE="0.0.0.0/0"
INTNET="192.168.0.1/24"
Now I need to run
/sbin/iptables-restore <the content of the iptables save file>
in the bash script and somehow feed it the text file above with the variables already expanded. Is there any way to do that?
UPDATE: I even tried this:
/sbin/iptables-restore -v <<-EOF;
$(</etc/test.txt)
EOF
Something like this:
while read line; do eval "echo ${line}"; done < iptables.save.file | /sbin/iptables-restore -v
or more nicely formatted:
while read line
do eval "echo ${line}"
done < iptables.save.file | /sbin/iptables-restore -v
The eval of the string forces the variable expansion.
Use the . (dot) command to include one shell script in another:
#!/bin/sh
. /path/to/another/script
In your shell script:
. /path/to/variable-definitions
/sbin/iptables-restore < $(eval echo "$(</path/to/template-file)")
or possibly
/sbin/iptables-restore < <(eval echo "$(</path/to/template-file)")
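Both of those forms run eval over the whole file at once, which gets tricky with a multi-line template; the line-by-line loop from the first answer is easier to reason about. Putting that loop together with the dot-include, a hedged end-to-end sketch (file names are placeholders, and eval means the rules file must be trusted):
#!/bin/sh
# variable definitions kept in their own file (placeholder path)
. /etc/firewall/vars.sh
# expand the variables line by line, then hand the result to iptables-restore
while read -r line; do
  eval "echo \"$line\""
done < /etc/firewall/rules.template | /sbin/iptables-restore -v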
