Exporting environment variables for local development and Heroku deployment - bash

I would like to set up some files for development, staging and production with environment variables, for example:
application_root/development.env
KEY1=value1
KEY2=value2
There would be similar files staging.env and production.env.
I am looking for a couple of bash scripts that would allow loading all of these variables in either development or staging/production.
In local development I want to effectively run export KEY1=value1 for each line in the file.
For staging/production I will be deploying to Heroku and would like to effectively run heroku config:set -a herokuappname KEY1=value1 for each line in the staging or production.env files.
I know there are some gems designed for doing this but it seems like this might be pretty simple. I also like the flexibility of having the .env files as simple lists of keys and values and not specifically being tied to any language/framework. If I would have to change something about the way these variables need to be loaded it would be a matter of changing the script but not the .env files.

In the simplest form, you can load the key-value pairs into a bash array as follows:
IFS=$'\n' read -d '' -ra nameValuePairs < ./development.env
In Bash v4+, it's even simpler:
readarray -t nameValuePairs < ./development.env
You can then pass the resulting "${nameValuePairs[@]}" array to commands such as export or heroku config:set ...; e.g.:
export "${nameValuePairs[@]}"
Note, however, that the above only works as intended if the input *.env file meets all of the following criteria:
the keys are syntactically valid shell variable names and the lines have the form <key>=<value>, with no whitespace around =
the lines contain no quoting and no leading or trailing whitespace
there are no empty/blank lines or comment lines in the file.
the values are confined to a single line each.
A different approach is needed with files that do not adhere to this strict format; for instance, this related question deals with files that may contain quoted values.
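If your files might only deviate by containing blank lines or #-comments, a hedged workaround is to filter them out line by line; a minimal sketch (still assuming unquoted KEY=value pairs):
while IFS= read -r line; do
  [[ -z $line || $line == \#* ]] && continue   # skip blank and comment lines
  export "$line"
done < ./development.env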
Below is the source code for a bash script named load_env (a .sh suffix is generally unnecessary and ambiguous):
You'd invoke it with the *.env file of interest, and it would perform the appropriate action (running heroku config:set … or export) based on the filename.
However, as stated, you must source the script (using source or its effective bash alias, .) in order to create environment variables (export) visible to the current shell.
To prevent obscure failures, the script complains if you pass a development.env file and have invoked the script without sourcing.
Examples:
./load_env ./staging.env
. ./load_env ./development.env # !! Note the need to source
load_env source code
#!/usr/bin/env bash

# Helper function that keeps its aux. variables localized.
# Note that the function itself remains defined after sourced invocation, however.
configOrExport() {

  local envFile=$1 doConfig=0 doExport=0 appName

  case "$(basename "$envFile" '.env')" in
    staging)
      doConfig=1
      # Set the desired app name here.
      appName=stagingapp
      ;;
    production)
      doConfig=1
      # Set the desired app name here.
      appName=productionapp
      ;;
    development)
      doExport=1
      ;;
    *)
      echo "ERROR: Invalid or missing *.env file name: $(basename "$envFile" '.env')" >&2; exit 2
  esac

  # Make sure the file exists and is readable.
  [[ -r "$envFile" ]] || { echo "ERROR: *.env file not found or not readable: $envFile" >&2; exit 2; }

  # If variables must be exported, make sure the script is being sourced.
  [[ $doExport -eq 1 && $0 == "$BASH_SOURCE" ]] && { echo "ERROR: To define environment variables, you must *source* this script." >&2; exit 2; }

  # Read all key-value pairs from the *.env file into an array.
  # Note: This assumes that:
  #  - the keys are syntactically valid shell variable names
  #  - the lines contain no quoting and no leading or trailing whitespace
  #  - there are no empty/blank lines or comment lines in the file.
  IFS=$'\n' read -d '' -ra nameValuePairs < "$envFile"

  # Run configuration command.
  (( doConfig )) && { heroku config:set -a "$appName" "${nameValuePairs[@]}" || exit; }

  # Export variables (define as environment variables).
  (( doExport )) && { export "${nameValuePairs[@]}" || exit; }

}

# Invoke the helper function.
configOrExport "$@"

Related

How to change a bash variable in a configuration file? [duplicate]

I've made a bash script which I run every hour with crontab, and I need to store one variable so that I can access it the next time I run it. The script changes the variable every time it runs, so I can't hardcode it in. Right now I am writing it to a txt file and then reading it back. Is there a better way to do it than this? And the way I am reading the txt file is something I found on here, I don't understand it, and it's kinda clunky. Is there not a built in command for this? Anyway, here's the applicable code, with some of the variables changed to make it easier to read.
while read x; do
var=$x
done < var.txt
# Do some stuff, change var to a new value
echo $var > var.txt
The variable is only a single integer, so the text file feels overkill.
There's no need for the intermediate var; x remains in scope in the current shell after the loop. Alternately,
read var < var.txt
# do stuff with var
echo $var > var.txt
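For instance, a cron-friendly counter built on this pattern might look like this (a sketch; the state-file path is hypothetical):
#!/bin/bash
state=/var/tmp/counter.txt                    # hypothetical state file
read -r var < "$state" 2>/dev/null || var=0   # default to 0 on the first run
# Do some stuff, change var to a new value
var=$((var + 1))
echo "$var" > "$state"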
I recommend using a simple text file to store the variable. However, there is the (highly questionable) option of a self-modifying script. FOR ENTERTAINMENT PURPOSES ONLY!
#!/bin/bash
read val < <( tail -n 1 "$0" )
(( val++ ))
echo "$val"
tmp=$(mktemp /tmp/XXXXXXX)
sed '$s/.*/'$val'/' "$0" > "$tmp"
mv "$tmp" "$0"
exit
0
The key is to have the next-to-last line be the exit command, so nothing after it will execute. The last line is the variable value you want to persist. When the script runs, it reads from its own last line. Before it exits, it uses sed to write a copy of itself to a temp file, with the last line modified to the current value of the persistent variable. Then we overwrite the current script with the temp file (assuming we have permission to do so).
But seriously? Don't do this.
I know this is an old question, but I still decided to post my solution here in the hope that it might be helpful to others who come here in search of a way to serialize env vars between sessions.
The simple way is just to write "var_name=var_value" into a file, say "./environ", and then "source ./environ" in following sessions. For example:
echo "var1=$var1" > ./environ
A more comprehensive (and elegant?) way, which persists all attributes of the variables, is to make use of "declare -p":
declare -p var1 var2 > ./environ
# NOTE: no '$' before var1, var2
Later on, after "source ./environ", you get var1 and var2 back with all attributes restored in addition to their values. This means it can handle arrays, integers etc.
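A quick round-trip sketch:
declare -ai counts=(1 2 3)      # an integer array
declare -p counts > ./environ   # persist it, attributes included
# ...in a later session:
source ./environ
declare -p counts               # prints: declare -ai counts=([0]="1" [1]="2" [2]="3")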
One caveat for "declare -p", though: if you wrap the "source ./environ" into a function, then all sourced variables are visible within that function only, because "declare" by default declares variables as local ones. To circumvent this, either "source" outside of any function (or in your "main" function), or modify ./environ to add "-g" after declare (which makes the corresponding variables global). For instance:
sed -i 's/^declare\( -g\)*/declare -g/' ./environ
# "\( -g\)*" ensures no duplication of "-g"
1- You can simplify your script, as you only have one variable
var=`cat var.txt`
# Do some stuff, change var to a new value
echo $var > var.txt
2- You can store your variable in the environment:
export var
# Do some stuff, change var to a new value
But you'll need to invoke it as ". script.ksh" (note the dot at the beginning). The script shouldn't contain an 'exit' in that case, and I'm not sure this would work in cron...
Depending on your use case this might be overkill, but if you need to store and keep track of multiple variables (or variables from multiple scripts), then consider using sqlite, which has a command-line interface (sqlite3) and is usually preinstalled out of the box on Linux/macOS systems.
DB='storage.db'
KEY1='eurusd'
VAL1=1.19011
KEY2='gbpeur'
VAL2=1.16829
# create table if not present (ONLY NEEDS TO BE RUN ONCE)
QUERY_CREATE="CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, name TEXT NOT NULL, value NUMERIC NOT NULL);"
sqlite3 "$DB" "$QUERY_CREATE"
# write a key-value pair to database (creates a new row each time)
QUERY_INSERT="INSERT INTO records(name, value) VALUES ('${KEY1}', '${VAL1}');"
sqlite3 "$DB" "$QUERY_INSERT"
# write a key-value pair to database (REPLACE previous value!)
# using 42 as a hard-coded row ID
QUERY_REPLACE="REPLACE INTO records(id, name, value) VALUES (42, '${KEY2}', '${VAL2}');"
sqlite3 "$DB" "$QUERY_REPLACE"
# read value from database
QUERY_SELECT1="SELECT value FROM records WHERE name='${KEY1}';"
QUERY_SELECT2="SELECT value FROM records WHERE name='${KEY2}';"
echo "***** $KEY1 *****"
# store db value in a variable
db_value1=$(sqlite3 "$DB" "$QUERY_SELECT1")
echo $db_value1
## OUTPUT: 1.19011
echo "***** $KEY2 *****"
db_value2=$(sqlite3 "$DB" "$QUERY_SELECT2")
echo $db_value2
## OUTPUT: 1.16829
NOTE: If you do not explicitly pass the row ID, a new row is added on each script invocation. To always update the same row, use REPLACE INTO with an explicit ID (e.g. 42, as in the REPLACE INTO ... statement above). Run the script multiple times to see how the output differs for KEY1 and KEY2.
NOTE 2: In this example the values are numeric; if you need to store strings, use TEXT instead of NUMERIC in CREATE TABLE.
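If you'd rather key rows by name than by a hard-coded ID, a sketch using an upsert (assumes SQLite 3.24+ and a UNIQUE constraint on name):
QUERY_CREATE="CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE, value NUMERIC NOT NULL);"
sqlite3 "$DB" "$QUERY_CREATE"
# insert a new row, or update the existing row with the same name
QUERY_UPSERT="INSERT INTO records(name, value) VALUES ('${KEY1}', '${VAL1}') ON CONFLICT(name) DO UPDATE SET value=excluded.value;"
sqlite3 "$DB" "$QUERY_UPSERT"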
And if you want an open-source GUI for visualising the database then DB Browser for SQLite is available for mac/linux/windows (there are dozens more).
To store multiple variables between runs, a solution I considered is to save them in the format my_var=my_value in a separate file.
Then I include two functions to set and retrieve the variables.
In the file storing the variables and their values:
Let's call this file context.dat
# Here I store the variables and their values
my_var_x=1
my_var_y=boo
my_var_z=0
In the actual script:
Let's call the file multiple_run.sh
context=./context.dat
function update_variables(){
    # reload the variables from the context file
    source "$context"
}

function set_variable(){
    # store a variable
    variable=$1  # variable to be set
    value=$2     # value to give to the variable
    # modify the line storing the value in the context file
    sed -i 's/^'"${variable}"'=.*/'"${variable}"'='"${value}"'/' "$context"
}
##################
# Test code
echo "$my_var_x"
update_variables
echo "$my_var_x"
# do something
set_variable my_var_x 2
echo "$my_var_x"
This is one approach among others. With this method you need to create the storage file beforehand, with one line per variable. Also, context.dat is a priori accessible by any other script.
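A variant of set_variable that appends the variable when it is not yet present in the file (a sketch):
function set_variable(){
    variable=$1
    value=$2
    if grep -q "^${variable}=" "$context"; then
        # replace the existing line
        sed -i 's/^'"${variable}"'=.*/'"${variable}"'='"${value}"'/' "$context"
    else
        # first time: append a new line
        echo "${variable}=${value}" >> "$context"
    fi
}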
Just discovered this great simple project (a rewritten fork). A simple yet powerful key/value pair store for bash. Looks perfect. Behind the scenes each database is a directory, each key is a file, and the values are in the file.
https://github.com/imyller/kv-sh
Tiny key-value database
Configurable database directory (default: ~/.kv-sh)
Used by importing functions via $ . ./kv-sh
Full database dump/restore
Support for secondary read-only defaults database
. ./kv-sh            # import kv-sh functions (use default database directory; see configuration environment variables for available options)
kvset <key> <value> # assign value to key
kvget <key> # get value of key
kvdel <key> # delete key
kvexists <key> # check if key exists
kvkeys {-l|-d|-a} # list all keys (-l local only, -d default only, -a all (default))
kvlist {-a} # list all key/value pairs (-a all keys, including default)
kvdump {-a} # database dump (-a all keys, including default)
kvimport # database import (overwrite)
kvrestore # database restore (clear and restore)
kvclear # clear database
Defaults database
kv-sh supports a secondary read-only defaults database. If enabled, key-value pairs from the defaults database are returned when no local value is set.
Enable defaults database by setting DB_DEFAULTS_DIR:
DB_DIR="/tmp/.kv" DB_DEFAULTS_DIR="/tmp/.kv-default" . ./kv-sh
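A quick usage sketch (assuming kv-sh has been sourced as shown above):
kvset last_run "2021-06-01"   # persist a value
kvget last_run                # prints 2021-06-01, also in later sessions
kvdel last_run                # remove it again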
I ended up doing the following. I would prefer the variables in one file, but this bloats the code slightly. You can store multiple variables in a separate file, say variables.txt, and then have your main program in, say, main.sh. It might be better to write separate scripts for loading and saving variables, though.
For variables.txt:
A=0
B=0
C=0
For main.sh:
#!/bin/bash
#reload variables
A=$(grep "^A=" ./variables.txt | cut -d"=" -f2)
B=$(grep "^B=" ./variables.txt | cut -d"=" -f2)
C=$(grep "^C=" ./variables.txt | cut -d"=" -f2)
#print variables
printf '%s\n' "$A"
printf '%s\n' "$B"
printf '%s\n' "$C"
#update variables
A=$((A+1))
B=$((B+2))
C=$((C+3))
#save variables to file
#for A: remove the old entry, append the new one, replace the file
grep -v "^A=" ./variables.txt >> ./tmp.txt
printf 'A=%s\n' "$A" >> ./tmp.txt
mv ./tmp.txt ./variables.txt
#for B
grep -v "^B=" ./variables.txt >> ./tmp.txt
printf 'B=%s\n' "$B" >> ./tmp.txt
mv ./tmp.txt ./variables.txt
#for C
grep -v "^C=" ./variables.txt >> ./tmp.txt
printf 'C=%s\n' "$C" >> ./tmp.txt
mv ./tmp.txt ./variables.txt
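The three save blocks could be factored into a small helper function; a sketch:
save_var() {  # usage: save_var NAME VALUE FILE
    local name=$1 value=$2 file=$3
    grep -v "^${name}=" "$file" > "${file}.tmp"    # drop the old entry
    printf '%s=%s\n' "$name" "$value" >> "${file}.tmp"
    mv "${file}.tmp" "$file"
}
save_var A "$A" ./variables.txt
save_var B "$B" ./variables.txt
save_var C "$C" ./variables.txt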

Difficulty using readlink in bash with mix of variables and string to return absolute paths

I have a config script where users can specify paths as variables in the header section. I want them to be able to use absolute paths, relative paths and variables (because this is actually called from another shell script from where they get the values for the variables). At the end of the script all the paths are written to a text file.
The challenge I have is that variables used within some of the paths can change in the middle of the script. I am having difficulties in re-evaluating the path to get the correct output.
### HEADER SECTION ###
DIR_PATH="$VAR1/STRING1"
InputDir_DEFAULT="$DIR_PATH/Input"
### END HEADER ###
...some code
if [[ some condition ]]; then DIR_PATH="$VAR2/STRING2"; fi
...more code
# $InputDir_DEFAULT needs re-evaluating here
InputDir=$(readlink -m $InputDir_DEFAULT)
echo $InputDir >> $FILE
When I do as above and 'some condition' is met, the return of 'echo' is the absolute path for $VAR1/STRING1/Input, whereas what I want it the abs path for $VAR2/STRING2/Input.
Below is an alternative, where I try to stop InputDir_DEFAULT being evaluated until the end by storing itself as a string.
### HEADER SECTION ###
DIR_PATH="$VAR1/STRING1"
InputDir_DEFAULT='$DIR_PATH/Input' #NOTE: "" marks have changed to ''
### END HEADER ###
if [[ some condition ]]; then DIR_PATH="$VAR2/STRING2"; fi
STRING_TMP=$InputDir_DEFAULT
InputDir=$(readlink -m $STRING_TMP)
echo $InputDir >> $FILE
This time 'echo' returns a mix of the evaluated variables and un-evaluated string: $VAR2/STRING2/$DIR_PATH/Input which (for me) looks like /home/ubuntu/STRING2/$DIR_PATH/Input. It's just the $DIR_PATH/ that shouldn't be there.
This feels like it should be relatively straightforward. I'm hoping I'm on the right path and that it's my use of "" and '' that's at fault. But I've tried lots of variations with no success.
When you initially set InputDir_DEFAULT, it is taking the currently set value for ${DIR_PATH}; even if you update ${DIR_PATH} later on, InputDir_DEFAULT will remain what it was set to earlier. To resolve this in your current script, you could set InputDir_DEFAULT again inside the if statement:
InputDir_DEFAULT=${DIR_PATH}/Input
Additionally, in your second attempt the single quoted value setting translates to the literal string value and does not expand to the variable's value:
InputDir_DEFAULT='$DIR_PATH/Input'
I would recommend referring to the "Quoting" section in the GNU Bash manual.
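If you want the default to be re-evaluated at the point of use rather than fixed at assignment time, one alternative is to express it as a function, so the expansion happens each time the function is called; a sketch (input_dir_default is a hypothetical helper):
input_dir_default() { printf '%s/Input' "$DIR_PATH"; }

DIR_PATH="$VAR1/STRING1"
# ...later, possibly:
DIR_PATH="$VAR2/STRING2"
InputDir=$(readlink -m "$(input_dir_default)")   # uses the current DIR_PATH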
This is the solution I came to in the end. It's a mix of what was suggested by @ThatsWhatSheCoded and some other stuff to ensure that the user doesn't have to redefine variables anywhere other than in the header.
I expect there's a more elegant way of doing this, but this does work.
### HEADER SECTION ###
DIR_PATH_DEFAULT="$VAR1/STRING1"
InputDir_DEFAULT="$DIR_PATH_DEFAULT/Input"
### END HEADER ###
...some code
if [[ some condition ]]; then DIR_PATH="$VAR2/STRING2"; fi
### Checks whether $DIR_PATH_DEFAULT is used in any variables.
### If so and $DIR_PATH is different, will replace string.
### This will be done for all variables in a list.
if [[ ! "$DIR_PATH_DEFAULT" =~ "$DIR_PATH" ]]; then
for i in ${!var[#]}; do
var_def_val=${var[i]}_DEFAULT
STRING_TMP=${!var_def_val}
var_def=${var[$i]}_DEFAULT
if [[ $STRING_TMP == *"$DIR_PATH_DEFAULT"* ]] && [[ ! $var_def == "DIR_PATH_DEFAULT" ]]; then
STRING_TMP="${STRING_TMP/$DIR_PATH_DEFAULT/$DIR_PATH}"
eval "${var_def}=$STRING_TMP"
fi
done
fi
...more code
InputDir=$(readlink -m $InputDir_DEFAULT)

Setting multiple variables to call in a conditional statement

I want to create an environment variable depending on the location and append that environment variable name to log file. For example, I have multiple AWS accounts and depending on the account I want the variable name to identify that account. I would like to run this in a conditional as well instead I writing multiple scripts.
if account is this echo or printf account and set the account name to that variable before executing the code.
aws_bsd='lab'
aws_csd='engineering'
aws_efg='chemistry'
You can get the absolute path of the currently running script using realpath and take the decision based on that.
#!/usr/bin/env bash

_this_script_path="$(realpath "$0")"              # script path
_this_dir_path="$(dirname "$_this_script_path")"  # script's parent directory path

_account_1_path="/home/acc1"
_account_2_path="/home/acc2"

if [[ "$_this_dir_path" == "$_account_1_path" ]]; then
    echo "- setting up account 1"
    # do account 1 specific stuff
    # export VAR1="var1_value"
    # ...
elif [[ "$_this_dir_path" == "$_account_2_path" ]]; then
    echo "- setting up account 2"
    # do account 2 specific stuff
    # ...
fi
Note:
I'm not sure whether realpath is in the POSIX standard, but it is widely available. If you don't have it, there are other ways to do what realpath does, but I believe this is the most concise approach.
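If realpath is not available, a common fallback for getting the script's directory is (a sketch; pwd -P resolves symlinked directories, though not a symlinked script file):
_this_dir_path="$(cd "$(dirname "$0")" && pwd -P)"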

How Can I access bash variables in tcl(expect) script

How Can I access bash variables in tcl(expect) script.
I have bash file say f1.sh which set some variables like
export var1=a1
export var2=a2
I need to use these variables in my expect script.
I tried using this in my script, which does not work:
system "./f1.sh"
puts "var1 is $::env(var1)"
puts "var2 is $::env(var2)"
But this does not seems to work.
I see that none of the variables from f1.sh are getting set as environment variables.
system "./f1.sh"  # Is this command in my script right?
How do I access these bash variables from the tcl file?
I would say that this problem is rather general. I first met it when I wanted to initialize the Microsoft Visual Studio environment (which is done using a .cmd script) in PowerShell. Later I faced this problem with other scripting languages in various combinations (Bash, Tcl, Python etc.).
The solution provided by Hai Vu is good. It works well if you know from the beginning which variables you need. However, if you are going to use a script to initialize some environment, it may contain dozens of variables (which you don't even need to know about, but which are needed for normal operation of the environment).
In general, the solution for the problem is following:
Execute script and at the end print ALL environment variables and capture the output.
Match the lines of output against a pattern like "variable=value" to extract the pairs you want.
Set environment variables using facilities of your language.
I do not have a ready-made solution, but something similar to this should work (note that the snippets below were not tested; they are aimed only at giving an idea of the solution):
Execute the script; print the vars and capture the output (argument expansion with {*} requires Tcl 8.5; we could do without it here, but I prefer to use it):
set bashCommand [list bash -c {myScriptName arg1 arg2 >/dev/null 2>&1 && export -p}]
if {[catch {exec {*}$bashCommand} output]} {
    set errMsg "ERROR: Failed to run script."
    append errMsg "\n" $output
    error $errMsg
}
;# If we get here, output contains the output of "export -p" command
Parse the output of the command:
set vars [dict create]
foreach line [split $output "\n"] {
    if {[regexp -- {^declare -x ([[:alpha:]_]*)="(.*)"$} $line dummy var val]} {
        dict set vars $var $val  ;# 3. Store var-val pair or set env var.
    }
}
Store var-val pair or set env var. Here several approaches can be used:
3.1. Set Tcl variables and use them like this (depending on context):
set $var $val
or
variable $var $val
3.2. Set environment variables (actually, a sub-case of 3.1; the :: qualifier makes a global declaration unnecessary):
set ::env($var) $val
3.3 Set dict or array and use it within your application (or script) without modification of global environment:
set myEnv($var) $val ;# set array
dict set myEnvDict $var $val ;# set dict
I'd like to repeat that this is only the idea of the recipe. More importantly, since most modern scripting languages support regexes, this recipe can provide a bridge between almost any pair of languages, not only Bash<->Tcl.
You can use a here-document, like this:
#!/bin/bash
process=ssh
expect <<EOF
spawn $process
...
EOF
Exported variables are only passed from a parent process to its children, not the other way around. The script f1.sh (actually the bash instance that's running the script) gets its own copies of var1 and var2, and it doesn't matter if it changes them; the changes are lost when it exits. For variable exporting to work, you would need to start the expect script from the bash script.
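For example, a wrapper that sources f1.sh into the current shell and then starts the expect script as a child process, which inherits the exported variables (a sketch; wrapper.sh and f2.exp are hypothetical names):
#!/bin/bash
# wrapper.sh: pull the variable definitions into this shell...
. ./f1.sh
# ...then run the expect script, which inherits var1 and var2
exec ./f2.exp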
In f1.sh, printf what you want to return...
printf '%s\n%s\n' "$var1" "$var2"
...and read it with exec in Tcl:
lassign [split [exec ./f1.sh] \n] var1 var2
Perhaps I did not look hard enough, but I don't see any way to do this. When you execute the bash script, you create a different process. What happens in that process does not propagate back to the current process.
We can work-around this issue by doing the following (thanks to potrzebie for the idea):
Duplicate the bash script to a temp script
Append to the temp script some commands at the end to echo a marker, and a list of variables and their values
Execute the temp script and parse the output
The result is a list of alternating names and values. We use this list to set the environment variables for our process.
#!/usr/bin/env tclsh

package require fileutil

# Execute a bash script and extract some environment variables
proc getBashVar {bashScript varsList} {
    # Duplicate the bash script to a temp script
    set tempScriptName [fileutil::tempfile getBashVar]
    file copy -force $bashScript $tempScriptName

    # Append a marker to the end of the script. We need this marker to
    # identify where in the output to begin extracting the variables.
    # After that, append the list of specified variables and their values.
    set f [open $tempScriptName a]
    set marker "#XXX-MARKER"
    puts $f "\necho \\$marker"
    foreach var $varsList {
        puts $f "echo $var \\\"$$var\\\" "
    }
    close $f

    # Execute the temp script and parse the output
    set scriptOutput [exec bash $tempScriptName]
    append pattern $marker {\s*(.*)}
    regexp $pattern $scriptOutput all vars

    # Set the environment
    array set ::env $vars

    # Finally, delete the temp script to clean up
    file delete $tempScriptName
}

# Test
getBashVar f1.sh {var1 var2}
puts "var1 = $::env(var1)"
puts "var2 = $::env(var2)"
