How to source a file containing environment variables whose names include a "dash" character, in bash?

I'm using bash and have a file called x.config that contains the following:
MY_VAR=Something1
ANOTHER=Something2
To load these as environment variables I just use source:
$ source x.config
But this doesn't work if MY_VAR is called MY-VAR:
MY-VAR=Something1
ANOTHER=Something2
If I do the same thing I get:
x.config:1: command not found: MY-VAR=Something1
I've tried escaping - and a lot of other things but I'm stuck. Does anyone know a workaround for this?

A pure bash workaround that might work for you is to re-run the script using env to set the environment. Add this to the beginning of your script.
if [[ ! -v myscript_env_set ]]; then
    # mark the environment as prepared so the re-exec skips this block
    export myscript_env_set=1
    # read one NAME=value assignment per line from the config file
    readarray -t newenv < x.config
    # replace this process with a new invocation of the same script,
    # with the assignments injected into its environment by env
    exec env "${newenv[@]}" "$0" "$@"
fi
# rest of the script here
This assumes that x.config doesn't contain anything except variable assignments. If myscript_env_set is not in the current environment, put it there so that the next invocation skips this block. Then read the assignments into an array to pass to env. Using exec replaces the current process with another invocation of the script, but with the desired variables in the environment.
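If you only need the variables for a single command rather than for a whole script, the same env trick works directly from the command line. A minimal sketch, with /path/to/your/app standing in for whatever program actually reads MY-VAR:
# env does not validate names the way bash does, so the dash is accepted;
# the variables exist only in the child process's environment
env MY-VAR=Something1 ANOTHER=Something2 /path/to/your/app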

A dash (-) in an environment variable name is not portable and, as you noticed, will cause a lot of problems. You can't set such names from bash. Fix the application you want to invoke.
That being said, if you can't change the target app, you can do this from Python:
#!/usr/bin/python
import os

with open('x.config') as f:
    for line in f:
        # split on the first '=' only, so values may themselves contain '='
        name, value = line.strip().split('=', 1)
        os.environ[name] = value

os.system('/path/to/your/app')
This is a very simplistic config reader, and for a more complex syntax you might want to use ConfigParser.

Related

Export environment variables from Makefile to userland environment

I'm looking for a way to export environment variables from a Makefile into the userland environment, so that variables exported by the Makefile become accessible from the user's shell.
I have tried make's export, but as I understand it (and as my tests confirm), that does not export anything outside of the Makefile.
The idea is to populate Docker Compose environment variables in an elegant way and to have these variables ready to use in the user shell as well.
This is a fragment of what I've tried with make's export:
include docker.env
export $(shell sed -n '/=/p' docker.env)

SHELL := /bin/bash

run:
	@docker-compose -f my-service.yml up -d
According to the ArchWiki:
Each process stores its environment in the /proc/$PID/environ file.
So once make executes source, export or any other command that sets a new environment variable, the change applies only to that process.
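A minimal transcript sketch of that isolation (hypothetical Makefile; each recipe line runs in its own shell):
$ cat Makefile
demo:
	@export FOO=bar; echo "same line: FOO=$$FOO"
	@echo "next line: FOO=$$FOO"
$ make demo
same line: FOO=bar
next line: FOO=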
As a workaround I've written the variables into the bash startup file, so they will be in the global environment as soon as a new bash shell is loaded:
SHELL := /bin/bash
RC := ~/.bashrc
ENV := $(shell sed -n '/=/p' docker.env)
test:
	@$(foreach e,$(ENV),echo $(e) >> $(RC);)
EDIT: completely reworked the answer after the OP explained in a comment that he wants the environment variables to be defined for any user shell.
If your goal is to have a set of environment variables defined for any user shell (I assume this means any interactive shell), you can simply add these definitions to the shell's startup file (.bashrc for bash). From the GNU make manual:
Variables in make can come from the environment in which make is run. Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the '-e' flag is specified, then values from the environment override assignments in the makefile. See Summary of Options. But this is not recommended practice.)
Example:
$ cat .bashrc
...
export FOOBAR=foobar
export BARFOO="bar foo"
...
$ cat Makefile
all:
	@printf '$$(FOOBAR)=%s\n' '$(FOOBAR)'
	@printf 'FOOBAR='; printenv FOOBAR
	@printf '$$(BARFOO)=%s\n' '$(BARFOO)'
	@printf 'BARFOO='; printenv BARFOO
$ make
$(FOOBAR)=foobar
FOOBAR=foobar
$(BARFOO)=bar foo
BARFOO=bar foo
If you want to keep these definitions separate, you can just source the file from .bashrc:
$ cat docker.env
export FOOBAR=foobar
export BARFOO="bar foo"
$ cat .bashrc
...
source <some-path>/docker.env
...
And finally, if you don't want to add the export bash command to your file, you can parse the file in your .bashrc:
$ cat docker.env
FOOBAR=foobar
BARFOO="bar foo"
$ cat .bashrc
...
while read -r line; do
    eval "export $line"
done < <(sed -n '/=/p' <some-path>/docker.env)
...
Of course, there are some constraints on the syntax of your docker.env file (no unquoted special characters, no spaces in variable names, properly quoted values...). If your syntax is not bash-compatible, it is time to ask another question about parsing that specific syntax and converting it into bash-compatible syntax.
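To make the boundary concrete, a few hypothetical docker.env lines:
# fine: parses as a single assignment
FOOBAR=foobar
# fine: the quotes keep the value as one word through eval
BARFOO="bar foo"
# wrong result: MSG becomes 'hello' and a spurious variable 'world' is exported
MSG=hello world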
Make cannot change the calling shell's environment without its cooperation. Of course, if you are in control, you can make the calling shell cooperate.
In broad terms, you could replace the make command with a shell alias or function which runs the real make and also sets the environment variables from the result. I will proceed to describe in more detail one way to implement this.
Whether you call this alias or function of yours make or e.g. compose is up to you really. To wrap the real make is marginally harder -- inside the function, you need to say command make, because just make would cause an infinite loop with the alias or function calling itself recursively -- so I will demonstrate this. Let's define a function (aliases suck):
make () {
    # run the real make, break out on failure
    command make "$@" || return
    # if there is no env for us to load, we are done
    test -f ./docker.env || return 0
    # still here? load it
    . ./docker.env
}
If you want even stricter control, maybe define a variable in the function and check inside the Makefile that the variable is set.
ifneq (${_composing},function_make)
$(error Need to use the wrapper function to call make)
endif
The error message is rather bewildering if you haven't read this discussion, so maybe it needs to be improved, and/or documented in a README or something. You would change the make line in the function above into
    _composing='function_make' \
    command make "$@" || return
The syntax var=value cmd args sets the variable var to the string value just for the duration of running the command line cmd args; it then returns to its previous state (unset, or set to its previous value).
For this particular construction, the name of the variable just needs to be reasonably unique and transparent to a curious human reader; and the value is also just a reasonably unique and reasonably transparent string which the function and the Makefile need to agree on.
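A quick transcript sketch of that scoping:
$ _composing=function_make bash -c 'echo "inside: $_composing"'
inside: function_make
$ echo "after: ${_composing-<unset>}"
after: <unset>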
Depending on what you end up storing in the environment, this could introduce complications if you need this mechanism for multiple Makefiles. Running it in directory a and then switching to a similar directory b will appear to work, but uses the a things where the poor puny human would expect the b things. (If the variables you set contain paths, relative paths fix this scenario, but complicate others.)
Extending this to a model similar to Ruby's rvm or Python's virtualenv might be worth exploring; they typically add an indicator to the shell prompt to remind you which environment is currently active, and have some (very modest) safeguards in place to warn you when your current directory and the environment disagree.
Another wart: Hardcoding make to always load docker.env is likely to produce unwelcome surprises one day. Perhaps hardcode a different file name which is specific to this hook - say, .compose_post_make_hook? It can then in turn contain something like
. ./docker.env
in this particular directory.

Is there a good way to preload or include a script prior to executing another script?

I am looking to execute a script, but have it include another script before it executes. The problem is that the included script would be generated and the executed script would be unmodifiable. One solution I came up with was to actually reverse the include: the include script acts as a wrapper, calling set to set the arguments for the executed script and then dotting/sourcing it. E.g.
#!/bin/bash
# Generated wrapper or include script.
: Performing some setup...
target_script=$1 ; shift
set -- "$#"
. "$target_script"
Where target_script is the script I actually want to run, importing settings from the wrapper.
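Invocation would then look something like this (hypothetical name wrapper.sh for the generated script):
$ ./wrapper.sh /path/to/target_script.sh arg1 arg2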
However, the potential problem I face is that callers of the target script, or even the target script itself, may expect $0 to be set to the path of its location on the file system. Because this wrapper approach overrides $0, the value of $0 may be unexpected and could produce undefined behaviour.
Is there another way to perform what is, in effect, an LD_PRELOAD but in scripted form, through bash, without interfering with its runtime parameters?
I have looked at --init-file or --rcfile, but these only seem to be included for interactive shells.
Forcing interactive mode does seem to allow me to specify --rcfile:
$ bash --rcfile /tmp/x-include.sh -i /tmp/xx.sh
include_script: $0=bash, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh
Content of the x-include.sh script:
#!/bin/bash
echo "include_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
Content of the xx.sh script:
#!/bin/bash
echo "target_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
From the bash documentation:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
So that settles it then:
BASH_ENV=/tmp/x-include.sh /bin/bash /tmp/xx.sh
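For comparison with the --rcfile transcript above, this is the output I would expect with the same two test scripts (note that $0 now points at the target script throughout):
include_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh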

Defining common variables across multiple scripts?

I have a number of Bash and Perl scripts which are unrelated in functionality, but are related in that they work within the same project.
The fact that they work in the same project means that I commonly specify the same directories, the same project-specific commands, and the same keywords at the top of every script.
Currently, this has not bitten me, but I understand that it would be easier to have all of these values in one place, then if something changes I can change a value once and have the various scripts pick up on those changes.
The question is: what is the best way to declare these values? A single Perl script that is 'required' in each Perl script would require fewer changes to the Perl scripts, but doesn't provide a solution for the Bash scripts. A configuration file using a "key=value" format would perhaps be more universal, but requires each script to parse the configuration and has the potential to introduce issues. Is there a better alternative? Using environment variables? Or a Bash-specific way that Perl can easily execute and interpret?
When you run a shell script, it runs in a sub-shell, so it cannot affect the parent shell's environment. When you declare a variable as key=value, its scope is limited to that sub-shell. Instead, you want to source the script:
. ./myscript.sh
This executes it in the context of the current shell, not as a sub-shell.
From the bash man page:
. filename [arguments]
source filename [arguments]
Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename.
If filename does not contain a slash, file names in PATH are used to find the directory containing filename.
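A quick transcript sketch of the difference (assuming myscript.sh contains the line KEY=value and is executable):
$ ./myscript.sh; echo "after running:  KEY=$KEY"
after running:  KEY=
$ . ./myscript.sh; echo "after sourcing: KEY=$KEY"
after sourcing: KEY=value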
Also, you can use the export command. export governs which variables will be available to new processes, so if you say
FOO=1
export BAR=2
./myscript2.sh
then $BAR will be available in the environment of myscript2.sh, but $FOO will not.
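A sketch of what a hypothetical myscript2.sh would observe:
$ cat myscript2.sh
#!/bin/bash
echo "FOO=${FOO-<unset>} BAR=${BAR-<unset>}"
$ FOO=1; export BAR=2; ./myscript2.sh
FOO=<unset> BAR=2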
Define environment variables:
user level: in your ~/.profile or ~/.bash_profile or ~/.bash_login or ~/.bashrc
system level: in /etc/profile or /etc/bash.bashrc or /etc/environment
For example, add two lines for each variable:
FOO=myvalue
export FOO
To read this variable in a bash script:
#! /bin/bash
echo $FOO
In a Perl script:
#!/usr/bin/perl
print $ENV{'FOO'};
You could also source another file, so you do not create extra environment variables, which may lead to unexpected behaviour.
source_of_truth.sh:
FOO="bar"
script1.sh
#!/usr/bin/env bash
source source_of_truth.sh
echo ${FOO}
# ... doing something
script2.sh
#!/usr/bin/env bash
source source_of_truth.sh
echo ${FOO}
# ... doing something else

How to export dot separated environment variables

Execution of
user@EWD-MacBook-Pro:~$ export property.name=property.value
Gives me
-bash: export: `property.name=property.value': not a valid identifier
Is it possible to have system properties with dot inside? If so how do that?
As others have said, bash doesn't allow it so you'll have to use your favourite scripting language to do it. For example, in Perl:
perl -e '$ENV{"property.name"} = "property.value"; system "bash"'
This will fire up a subshell bash with the property.name environment variable set, but you still can't access that environment variable from bash (although your program will be able to see it).
Edit: #MarkEdgar commented that the env command will work too:
env 'property.name=property.value' bash # start a subshell, or
env 'property.name=property.value' command arg1 arg2 ... # Run your command
As usual, you only require quotes if you need to protect special characters from the shell or want to include spaces in the property name or value.
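Reading such a variable back from bash requires a similar detour, since the shell cannot expand it directly. A sketch using env and sed, assuming the value contains no newlines:
value=$(env | sed -n 's/^property\.name=//p')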
I spent the better part of this afternoon trying to figure out how to access a property set by Jenkins (to pass job parameters, Jenkins uses a property format with a dot). This was a good hint from Adrian, and yes, it works for reading properties in a script too. I was at a loss as to what to do, but then I tried:
var=`perl -e 'print $ENV{"property.name"};print "\n";'`
This worked quite well. Of course, it only works in a shell that was started with the property already set in its environment, i.e. in Adrian's example this would work in a script started from the bash instance invoked in his perl one-liner. It would not work if this perl snippet were put in the same shell directly after his perl example.
At least I learnt something this afternoon, so not all this time was a waste.
If you are exporting those properties in order to run an application, note that some programs support setting system properties via command-line options and allow . in the property name.
In the Java world, most tools support setting system properties with the -D option, e.g. you can set a system property with a dot like this: -Dproperty.name=property.value.
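For example, with a hypothetical app.jar:
java -Dproperty.name=property.value -jar app.jar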
Bash only permits '_' and alphanumeric characters in variable names; '.' isn't permitted.
http://tldp.org/LDP/abs/html/gotchas.html

When to use brackets when exporting environment variables in bash?

I've been trying to figure out the purpose of brackets around bash environment variables. For example, in the actual code example below, why do some of the definitions use {} around PATH, as in export ...=.../${PATH}? Note also that some of the definitions differ: some use {$ECLIPSE_DIR} with the $ inside the brackets; some use ${PATH} with the $ outside of the brackets; and some omit brackets altogether. This code generally works, although it sometimes produces errors like the one shown at the bottom (they appear to be transient), and I'm not sure why such errors show up only sometimes.
What are the common practices concerning ways to include bash environment variables, when should brackets be used, and what is the difference between putting the $ inside and outside of brackets? Also, why do some lines have an "export" before the variable name, and some do not? What is the difference here?
# ECLIPSE
ECLIPSE_DIR=$HOME/eclipse
PATH=${PATH}:{$ECLIPSE_DIR}
# ANT
ANT_HOME=/usr/bin/ant
PATH=${ANT_HOME}/bin:${PATH}
export ANT_HOME PATH
# GRADLE
export GRADLE_HOME=/usr/local/gradle
export PATH=$GRADLE_HOME/bin:$PATH
-bash: export: `/usr/bin/ant/bin:/usr/local/bin:{/Users/me/eclipse}:/usr/bin/scala-2.9.0.1/bin:/usr/local/mysql/bin:/usr/local/bin:{/Users/me/eclipse}': not a valid identifier
The braces are usually used for clarity, but a practical use is separating variable names from adjacent text. Say I had the following:
$ word="ello"
$ echo "h$word"
hello
$ echo "y$wordw" # bash tries to find the var wordw, and replaces with a blank
y
$ echo "y${word}w"
yellow
Variable names are automatically separated by most punctuation (notably . or /).
echo "$word/$word.$word"
ello/ello.ello
Looking at that error you presented, {$ECLIPSE_DIR} gets the variable expanded and then surrounded with literal open and close braces. I think the solution should be changing it to ${ECLIPSE_DIR}.
In response to the export question: export marks a variable to be copied into the environment of child processes. A variable that is set but not exported is visible only in the shell that set it, so programs started from that shell won't see it. Note that an executed script can never alter its caller's environment, exported or not; for your PATH changes to take effect in your current shell, the file has to be sourced, which is exactly what happens with startup files like .bashrc.
Braces are used with bash variables to disambiguate between variables. For example, consider this:
VAR=this
echo $VAR_and_that
echo ${VAR}_and_that
The first echo prints nothing, since bash thinks you are trying to echo out the variable VAR_and_that, which of course doesn't exist. The second echo doesn't have this problem and outputs this_and_that, since bash knows to expand the VAR variable due to the braces.
