TL;DR: How do I export a set of key/value pairs from a text file into the shell environment?
For the record, below is the original version of the question, with examples.
I'm writing a bash script that parses files with 3 variables in a certain folder; this is one of them:
MINIENTREGA_FECHALIMITE="2011-03-31"
MINIENTREGA_FICHEROS="informe.txt programa.c"
MINIENTREGA_DESTINO="./destino/entrega-prac1"
This file is stored in ./conf/prac1
My script minientrega.sh then parses the file using this code:
cat ./conf/$1 | while read line; do
export $line
done
But when I execute minientrega.sh prac1 on the command line, it doesn't set the environment variables.
I also tried using source ./conf/$1, but the same problem still applies.
Maybe there is some other way to do this; I just need to use the environment variables from the file I pass as the argument of my script.
This might be helpful:
export $(cat .env | xargs) && rails c
The reason I use this is that I want to test .env stuff in my rails console.
gabrielf came up with a good way to keep the variables local. This solves the potential problem when going from project to project.
env $(cat .env | xargs) rails
I've tested this with bash 3.2.51(1)-release
Update:
To ignore lines that start with #, use this (thanks to Pete's comment):
export $(grep -v '^#' .env | xargs)
And if you want to unset all of the variables defined in the file, use this:
unset $(grep -v '^#' .env | sed -E 's/(.*)=.*/\1/' | xargs)
Update:
To also handle values with spaces, use:
export $(grep -v '^#' .env | xargs -d '\n')
on GNU systems -- or:
export $(grep -v '^#' .env | xargs -0)
on BSD systems.
From this answer you can auto-detect the OS with this:
export-env.sh
#!/bin/sh
## Usage:
## . ./export-env.sh ; $COMMAND
## . ./export-env.sh ; echo ${MINIENTREGA_FECHALIMITE}
unamestr=$(uname)
if [ "$unamestr" = 'Linux' ]; then
export $(grep -v '^#' .env | xargs -d '\n')
elif [ "$unamestr" = 'FreeBSD' ] || [ "$unamestr" = 'Darwin' ]; then
export $(grep -v '^#' .env | xargs -0)
fi
-o allexport enables all following variable definitions to be exported. +o allexport disables this feature.
set -o allexport
source conf-file
set +o allexport
The problem with your approach is that the export in the while loop happens in a subshell, and those variables will not be available in the current shell (the parent shell of the while loop).
Add the export command in the file itself:
export MINIENTREGA_FECHALIMITE="2011-03-31"
export MINIENTREGA_FICHEROS="informe.txt programa.c"
export MINIENTREGA_DESTINO="./destino/entrega-prac1"
Then you need to source the file in the current shell using:
. ./conf/prac1
OR
source ./conf/prac1
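An alternative that keeps the original loop but still avoids the subshell (a later answer shows the same idea in full) is to redirect the file into the loop instead of piping from cat:
while read line; do export "$line"; done < "./conf/$1"
Because read and export now run in the current shell, the variables survive after the loop.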
set -a
. ./env.txt
set +a
If env.txt is like:
VAR1=1
VAR2=2
VAR3=3
...
Explanations
-a is equivalent to allexport. In other words, every variable assignment in the shell is exported into the environment (to be used by multiple child processes). More information can be found in the Set builtin documentation:
-a Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands.
Using ‘+’ rather than ‘-’ causes these options to be turned off. The options can also be used upon invocation of the shell. The current set of options may be found in $-.
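A minimal illustration of the effect (my own example, not part of the quoted documentation):
set -a
VAR_EXPORTED=1      # gets the export attribute because allexport is on
set +a
VAR_NOT_EXPORTED=2  # ordinary shell variable, not exported
sh -c 'echo "${VAR_EXPORTED-unset} ${VAR_NOT_EXPORTED-unset}"'   # prints: 1 unset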
I found the most efficient way to be:
export $(xargs < .env)
Explanation
When we have a .env file like this:
key=val
foo=bar
running xargs < .env will produce key=val foo=bar,
so we get export key=val foo=bar, which is exactly what we need!
Limitation
It doesn't handle cases where the values have spaces in them. Commands such as env produce this format. – @Shardj
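To illustrate the limitation with a hypothetical .env line such as GREETING=hello world: xargs produces GREETING=hello world, the unquoted command substitution is word-split, and the shell effectively runs
export GREETING=hello world
so GREETING ends up set to just "hello", and "world" is treated as a separate variable name to export.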
The allexport option is mentioned in a couple of other answers here, for which set -a is the shortcut. Sourcing the .env really is better than looping over lines and exporting because it allows for comments, blank lines, and even environment variables generated by commands. My .bashrc includes the following:
# .env loading in the shell
dotenv () {
set -a
[ -f .env ] && . .env
set +a
}
# Run dotenv on login
dotenv
# Run dotenv on every new directory
cd () {
builtin cd "$@"
dotenv
}
eval $(cat .env | sed 's/^/export /')
The problem with source is that it requires the file to have proper bash syntax, and some special characters will ruin it: =, ", ', <, >, and others. So in some cases you can just
source development.env
and it will work.
This version, however, withstands every special character in values:
set -a
source <(cat development.env | \
sed -e '/^#/d;/^\s*$/d' -e "s/'/'\\\''/g" -e "s/=\(.*\)/='\1'/g")
set +a
Explanation:
-a means that every bash variable would become an environment variable
/^#/d removes comments (strings that start with #)
/^\s*$/d removes empty strings, including whitespace
"s/'/'\\\''/g" replaces every single quote with '\'', which is a trick sequence in bash to produce a quote :)
"s/=\(.*\)/='\1'/g" converts every a=b into a='b'
As a result, you are able to use special characters :)
To debug this code, replace source with cat and you'll see what this command produces.
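For example, assuming a development.env line such as PASSWORD=foo'bar (my own test value), the cat variant prints
PASSWORD='foo'\''bar'
which sources cleanly and sets PASSWORD to foo'bar.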
Here is another sed solution, which does not run eval or require ruby:
source <(sed -E -n 's/[^#]+/export &/ p' ~/.env)
This prepends export, while lines that start with # stay commented out.
.env contents
A=1
#B=2
sample run
$ sed -E -n 's/[^#]+/export &/ p' ~/.env
export A=1
#export B=2
I found this especially useful when constructing such a file for loading in a systemd unit file, with EnvironmentFile.
Not exactly sure why, or what I missed, but after running through most of the answers and failing, I realized that with this .env file:
MY_VAR="hello there!"
MY_OTHER_VAR=123
I could simply do this:
source .env
echo $MY_VAR
Outputs: hello there!
Seems to work just fine in Ubuntu linux.
I have upvoted user4040650's answer because it's both simple and it allows comments in the file (i.e. lines starting with #), which is highly desirable for me, as comments explaining the variables can be added. Here it is rewritten in the context of the original question.
If the script is called as indicated (minientrega.sh prac1), then minientrega.sh could have:
set -a # export all variables created next
source $1
set +a # stop exporting
# test that it works
echo "Ficheros: $MINIENTREGA_FICHEROS"
The following was extracted from the set documentation:
This builtin is so complicated that it deserves its own section. set
allows you to change the values of shell options and set the
positional parameters, or to display the names and values of shell
variables.
set [--abefhkmnptuvxBCEHPT] [-o option-name] [argument …]
set [+abefhkmnptuvxBCEHPT] [+o option-name] [argument …]
If no options or arguments are supplied, set displays the names and values of all shell variables and functions, sorted according to the current locale, in a format that may be reused as input for setting or resetting the currently-set variables. Read-only variables cannot be reset. In POSIX mode, only shell variables are listed.
When options are supplied, they set or unset shell attributes.
Options, if specified, have the following meanings:
-a Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands.
And this as well:
Using ‘+’ rather than ‘-’ causes these options to be turned off. The options can also be used upon invocation of the shell. The current set of options may be found in $-.
Improving on Silas Paul's answer
Exporting the variables in a subshell makes them local to the command.
(export $(cat .env | xargs) && rails c)
The shortest way I found:
Your .env file:
VARIABLE_NAME="A_VALUE"
Then just
. ./.env && echo ${VARIABLE_NAME}
Bonus: Because it's a short one-liner, it's very useful in package.json file
"scripts": {
"echo:variable": ". ./.env && echo ${VARIABLE_NAME}"
}
SAVE=$(set +o | grep allexport) && set -o allexport && . .env; eval "$SAVE"
This will save/restore your original options, whatever they may be.
Using set -o allexport has the advantage of properly skipping comments without a regex.
set +o by itself outputs all your current options in a format that bash can later execute. Also handy: set -o by itself outputs all your current options in a human-friendly format.
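For example, the saved snippet typically looks like this (or set -o allexport if the option was already enabled):
$ set +o | grep allexport
set +o allexport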
Here's my variant:
with_env() {
(set -a && . ./.env && "$#")
}
compared with the previous solutions:
it does not leak variables outside scope (values from .env are not exposed to caller)
does not clobber set options
returns exit code of the executed command
uses posix compatible set -a
uses . instead of source to avoid bashism
command is not invoked if .env loading fails
with_env rails console
If env supports the -S option one may use newlines or escape characters like \n or \t (see env):
env -S "$(cat .env)" command
.env file example:
KEY="value with space\nnewline\ttab\tand
multiple
lines"
Test:
env -S "$(cat .env)" sh -c 'echo "$KEY"'
Simpler:
grab the content of the file
remove any blank lines (just in case you separated some stuff)
remove any comments (just in case you added some...)
add export to all the lines
eval the whole thing
eval $(cat .env | sed -e /^$/d -e /^#/d -e 's/^/export /')
Another option, where you don't have to run eval (thanks to @Jaydeep):
export $(cat .env | sed -e /^$/d -e /^#/d | xargs)
Lastly, if you want to make your life REALLY easy, add this to your ~/.bash_profile:
function source_envfile() { export $(cat $1 | sed -e /^$/d -e /^#/d | xargs); }
(Make sure you reload your bash settings with source ~/.bash_profile, or just open a new tab/window and the problem is solved.) You call it like this: source_envfile .env
Use shdotenv
dotenv support for shell and POSIX-compliant .env syntax specification
https://github.com/ko1nksm/shdotenv
eval "$(shdotenv)"
Usage
Usage: shdotenv [OPTION]... [--] [COMMAND [ARG]...]
-d, --dialect DIALECT Specify the .env dialect [default: posix]
(posix, ruby, node, python, php, go, rust, docker)
-s, --shell SHELL Output in the specified shell format [default: posix]
(posix, fish)
-e, --env ENV_PATH Location of the .env file [default: .env]
Multiple -e options are allowed
-o, --overload Overload predefined environment variables
-n, --noexport Do not export keys without export prefix
-g, --grep PATTERN Output only those that match the regexp pattern
-k, --keyonly Output only variable names
-q, --quiet Suppress all output
-v, --version Show the version and exit
-h, --help Show this message and exit
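A small sketch based on the options above (the file name .env.dev is just an assumed example):
eval "$(shdotenv -e .env.dev)"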
Requirements
shdotenv is a single-file shell script with an embedded awk script.
POSIX shell (dash, bash, ksh, zsh, etc)
awk (gawk, nawk, mawk, busybox awk)
I work with docker-compose and .env files on Mac, and wanted to import the .env into my bash shell (for testing), and the "best" answer here was tripping up on the following variable:
.env
NODE_ARGS=--expose-gc --max_old_space_size=2048
Solution
So I ended up using eval, and wrapping my env var defs in single quotes.
eval $(grep -v -e '^#' .env | xargs -I {} echo export \'{}\')
Bash Version
$ /bin/bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin18)
Copyright (C) 2007 Free Software Foundation, Inc.
t=$(mktemp) && export -p > "$t" && set -a && . ./.env && set +a && . "$t" && rm "$t" && unset t
How it works
Create temp file.
Write all current environment variables values to the temp file.
Enable exporting of all variables declared in the sourced script to the environment.
Read the .env file. All variables will be exported into the current environment.
Disable exporting of all variables declared in the sourced script to the environment.
Read the contents of the temp file. Every line has the form declare -x VAR="val", which re-exports the original variables and restores any values that .env overwrote.
Remove temp file.
Unset the variable holding temp file name.
Features
Preserves values of the variables already set in the environment
.env can have comments
.env can have empty lines
.env does not require special header or footer like in the other answers (set -a and set +a)
.env does not require to have export for every value
one-liner
You can use your original script to set the variables, but you need to call it the following way (with a stand-alone dot):
. ./minientrega.sh
Also, there might be an issue with the cat | while read approach. I would recommend using while read line; do ...; done < "$FILE" instead.
Here is a working example:
> cat test.conf
VARIABLE_TMP1=some_value
> cat run_test.sh
#!/bin/bash
while read line; do export "$line";
done < test.conf
echo "done"
> . ./run_test.sh
done
> echo $VARIABLE_TMP1
some_value
Building on other answers, here is a way to export only a subset of lines in a file, including values with spaces like PREFIX_ONE="a word":
set -a
. <(grep '^[ ]*PREFIX_' conf-file)
set +a
My requirements were:
simple .env file without export prefixes (for compatibility with dotenv)
supporting values in quotes: TEXT="alpha bravo charlie"
supporting comments prefixed with # and empty lines
universal for both mac/BSD and linux/GNU
Full working version compiled from the answers above:
set -o allexport
eval $(grep -v '^#' .env | sed 's/^/export /')
set +o allexport
I have issues with the earlier suggested solutions:
@anubhava's solution makes writing bash-friendly configuration files very annoying very fast, and also, you may not want to always export your configuration.
@Silas Paul's solution breaks when you have variables with spaces or other characters that work well in quoted values, but that $() makes a mess out of.
Here is my solution, which is still pretty terrible IMO - and doesn't solve the "export only to one child" problem addressed by Silas (though you can probably run it in a sub-shell to limit the scope):
source .conf-file
export $(cut -d= -f1 < .conf-file)
My .env:
#!/bin/bash
set -a # export all variables
#comments as usual, this is a bash script
USER=foo
PASS=bar
set +a #stop exporting variables
Invoking:
source .env; echo $USER; echo $PASS
Reference https://unix.stackexchange.com/questions/79068/how-to-export-variables-that-are-set-all-at-once
My version:
I print the file, remove commented lines and empty lines, and split the key/value at the "=" sign. Then I just apply the export command.
The advantage of this solution is that the file can contain special characters in values, like pipes, HTML tags, etc., and the value doesn't have to be surrounded by quotes, like a real properties file.
# Single-line version (the process substitution keeps the exports in the current shell rather than a piped subshell):
while read -r line; do IFS='=' read -r k v <<< "$line"; export "$k=$v"; done < <(cat myenvfile.properties | grep -v '^#' | grep '=')
# Multiline version:
while read -r line; do
IFS='=' read -r k v <<< "$line"
export "$k=$v"
done < <(cat myenvfile.properties | grep -v '^#' | grep '=')
Here's my take on this. I had the following requirements:
Ignore commented lines
Allow spaces in the value
Allow empty lines
Ability to pass a custom env file while defaulting to .env
Allow exporting as well as running commands inline
Exit if env file doesn't exist
source_env() {
env=${1:-.env}
[ ! -f "${env}" ] && { echo "Env file ${env} doesn't exist"; return 1; }
eval $(sed -e '/^\s*$/d' -e '/^\s*#/d' -e 's/=/="/' -e 's/$/"/' -e 's/^/export /' "${env}")
}
Usage after saving the function to your .bash_profile or equivalent:
source_env # load default .env file
source_env .env.dev # load custom .env file
(source_env && COMMAND) # run command without saving vars to environment
Inspired by Javier and some of the other comments.
White spaces in the value
There are many great answers here, but I found them all lacking support for white space in the value:
DATABASE_CLIENT_HOST=host db-name db-user 0.0.0.0/0 md5
I have found 2 solutions that work with such values, with support for empty lines and comments.
One is based on sed and @javier-buzzi's answer:
source <(sed -e /^$/d -e /^#/d -e 's/.*/declare -x "&"/g' .env)
And one with read line in a loop, based on @john1024's answer:
while read -r line; do declare -x "$line"; done < <(egrep -v "(^#|^\s|^$)" .env)
The key here is using declare -x and putting the line in double quotes. I don't know why, but when you reformat the loop code onto multiple lines it won't work. I'm no bash programmer; I just cobbled these together, and it's still magic to me :)
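For what it's worth, declare -x on its own behaves like export with an assignment, which is why the quoted "$line" keeps the spaces intact:
declare -x DATABASE_CLIENT_HOST="host db-name db-user 0.0.0.0/0 md5"
env | grep DATABASE_CLIENT_HOST   # DATABASE_CLIENT_HOST=host db-name db-user 0.0.0.0/0 md5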
First, create an environment file that has all the key-value pairs of the environment, like below, and name it whatever you like; in my case it's env_var.env:
MINIENTREGA_FECHALIMITE="2011-03-31"
MINIENTREGA_FICHEROS="informe.txt programa.c"
MINIENTREGA_DESTINO="./destino/entrega-prac1"
Then create a script that exports all the environment variables for the Python environment, like below, and name it something like export_env.sh:
#!/usr/bin/env bash
ENV_FILE="$1"
CMD=${@:2}
set -o allexport
source $ENV_FILE
set +o allexport
$CMD
This script takes the first argument as the environment file, exports all the environment variables in that file, and then runs the command that follows.
USAGE:
./export_env.sh env_var.env python app.py
Modified from @Dan Kowalczyk
I put this in ~/.bashrc.
set -a
. ./.env >/dev/null 2>&1
set +a
It is very compatible with Oh-my-Zsh's dotenv plugin. (There is Oh-my-bash, but it doesn't have a dotenv plugin.)
I have a .sh script which uses environment variables that I export beforehand. I would like my variables to look like this: VAR1="${libDir}/test", so that in my script ${libDir} will be replaced with some value. I declared my export like this: export VAR1="${libDir}/test", but in my script libDir is not taken into account at all. Can I do it this way?
No, you can't have a variable that contains a variable reference that gets substituted in a "delayed" fashion:
$ libDir=foo
$ VAR1="${libDir}/test"
$ libDir=bar
$ echo "$VAR1"
foo/test
You could get around this using eval, but you shouldn't.
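(For completeness, a minimal sketch of that eval workaround; the single quotes keep ${libDir} unexpanded until the eval:)
$ VAR1='${libDir}/test'
$ libDir=bar
$ eval echo "$VAR1"
bar/test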
However, a function will do exactly what you want, at the price of a touch of extra syntax:
$ var1() {
echo "${libDir}/test"
}
$ libDir=foo
$ echo "$(var1)"
foo/test
$ libDir=bar
$ echo "$(var1)"
bar/test
I am trying to make my scripts more generic and hence trying to pass parameters.
I have a config file which contains variables (which are used in the scripts), and I am sourcing that file (with the source command) in another script (ksh).
Config file contains:
p2020_m23_ORACLE_USERNAME=sanjeeb
The parameter for the script is p2020_m23.
ksh script:
export SOURCE_CD=$1
export CONFIG_FILE=/user/spanda20/dbconfig.txt
source $CONFIG_FILE
USERNAME=${${SOURCE_CD}_ORACLE_USERNAME}   # this throws an error
USERNAME=$p2020_m23_ORACLE_USERNAME        # this gives the correct result
manual test:
[spanda2 config]$ export SOURCE_CD=p2020_m23
[spanda2 config]$ export m23_ORACLE_USERNAME=sanjeeb
[spanda2 config]$ export USERNAME=${${SOURCE_CD}_ORACLE_USERNAME}
-bash: USERNAME=${${SOURCE_CD}_ORACLE_USERNAME}: bad substitution
USERNAME_REF="${SOURCE_CD}_ORACLE_USERNAME"
USERNAME="${!USERNAME_REF}"
${parameter}: the value of parameter is substituted.
So if you want to concatenate the values of two variables and assign the result to another, you should write it like this:
export SOURCE_CD=p2020_m23
export m23_ORACLE_USERNAME=sanjeeb
export USERNAME="${SOURCE_CD}_${ORACLE_USERNAME}"
In ksh you can use variable indirection with typeset -n or nameref.
Simple example:
$ typeset -n that
$ this=word
$ that=this
$ echo $that
word
$ this=nothing
$ echo $that
nothing
The name reference now makes $that return the current value of $this.
Consider an ASCII text file (let's say it contains code of a non-shell scripting language):
Text_File.msh:
spool on to '$LOG_FILE_PATH/logfile.log';
login 'username' 'password';
....
Now if this were a shell script I could run it as $ sh Text_File.msh and the shell would automatically expand the variables.
What I want to do is have shell expand these variables and then create a new file as Text_File_expanded.msh as follows:
Text_File_expanded.msh:
spool on to '/expanded/path/of/the/log/file/../logfile.log';
login 'username' 'password';
....
Consider:
$ a=123
$ echo "$a"
123
So technically this should do the trick:
$ echo "`cat Text_File.msh`" > Text_File_expanded.msh
...but it doesn't work as expected, and the output file is identical to the source.
So I am unsure how to achieve this. My goal is to make it easier to maintain the directory paths embedded within my non-shell scripts. These scripts cannot contain any UNIX code, as they are not run by the UNIX shell.
This question has been asked in another thread, and this is the best answer IMO:
export LOG_FILE_PATH=/expanded/path/of/the/log/file/../logfile.log
cat Text_File.msh | envsubst > Text_File_expanded.msh
if on Mac, install gettext first: brew install gettext
see:
Forcing bash to expand variables in a string loaded from a file
This solution is not elegant, but it works. Create a script called shell_expansion.sh:
echo 'cat <<END_OF_TEXT' > temp.sh
cat "$1" >> temp.sh
echo 'END_OF_TEXT' >> temp.sh
bash temp.sh >> "$2"
rm temp.sh
You can then invoke this script as followed:
bash shell_expansion.sh Text_File.msh Text_File_expanded.msh
If you want it in one line (I'm not a bash expert, so there may be caveats to this, but it works everywhere I've tried it):
when test.txt contains
line1 says ${line1}
line2 says ${line2}
then:
>line1=fark
>line2=fork
>value=$(eval "echo \"$(cat test.txt)\"")
>echo "$value"
line1 says fark
line2 says fork
Obviously if you just want to print it you can take out the extra value=$() and echo "$value".
If a Perl solution is ok for you:
Sample file:
$ cat file.sh
spool on to '$HOME/logfile.log';
login 'username' 'password';
Solution:
$ perl -pe 's/\$(\w+)/$ENV{$1}/g' file.sh
spool on to '/home/user/logfile.log';
login 'username' 'password';
One limitation of the above answers is that they both require the variables to be exported to the environment. Here's what I came up with that allows the variables to be local to the current shell script:
#!/bin/sh
FOO=bar;
FILE=`mktemp`; # Let the shell create a temporary file
trap 'rm -f $FILE' 0 1 2 3 15; # Clean up the temporary file
(
echo 'cat <<END_OF_TEXT'
cat "$#"
echo 'END_OF_TEXT'
) > $FILE
. $FILE
The above example allows the variable $FOO to be substituted in the files named on the command line. I'm sure it can be improved, but this works for me so far.
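For instance, assuming the script above is saved as expand.sh (the name is mine) and a template file contains a $FOO reference, running it prints the expanded text:
$ cat template.msh
spool on to '$FOO/logfile.log';
$ sh expand.sh template.msh
spool on to 'bar/logfile.log';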
Thanks to both previous answers for their ideas!
If the variables you want to translate are known and limited in number, you can always do the translation yourself:
sed "s/\$LOG_FILE_PATH/$LOG_FILE_PATH/g" input > output
This also assumes the variable itself is already known.
This solution allows you to keep the same formatting in the output file.
Copy and paste the following lines in your script
cat $1 | while read line
do
eval $line
echo $line
eval echo $line
done | uniq | grep -v '\$'
This will read the file passed as argument line by line, and then try to print each line twice:
- once without substitution
- once with substitution of the variables
Then it removes the duplicate lines, and finally removes the lines still containing visible variables ($).
Yes, eval should be used carefully, but it provided me with this simple one-liner for my problem. Below is an example using your filename:
eval "echo \"$(<Text_File.msh)\""
I use printf instead of echo for my own purposes, but that should do the trick. Thank you abyss.7 for providing the link that solved my problem. Hope it helps.
Create an ascii file test.txt with the following content:
Try to replace this ${myTestVariable1}
bla bla
....
Now create a file "sub.sed" containing the variable names, e.g.:
's,${myTestVariable1},'"${myTestVariable1}"',g;
s,${myTestVariable2},'"${myTestVariable2}"',g;
s,${myTestVariable3},'"${myTestVariable3}"',g;
s,${myTestVariable4},'"${myTestVariable4}"',g'
Open a terminal and move to the folder containing test.txt and sub.sed.
Define the value of the variable to be replaced:
myTestVariable1=SomeNewText
Now call sed to replace that variable
sed "$(eval echo $(cat sub.sed))" test.txt > test2.txt
The output will be
$cat test2.txt
Try to replace this SomeNewText
bla bla
....
#logfiles.list:
$EAMSROOT/var/log/LinuxOSAgent.log
$EAMSROOT/var/log/PanacesServer.log
$EAMSROOT/var/log/PanacesStrutsGUI.log
#My Program:
cat logfiles.list | while read line
do
eval Eline=$line
echo $Eline
done