bash: Better way to store a variable between runs?

I've made a bash script which I run every hour with crontab, and I need to store one variable so that I can access it the next time I run it. The script changes the variable every time it runs, so I can't hardcode it in. Right now I am writing it to a txt file and then reading it back. Is there a better way to do it than this? Also, the way I am reading the txt file is something I found on here; I don't understand it, and it's kinda clunky. Is there not a built-in command for this? Anyway, here's the applicable code, with some variable names changed for readability.
while read x; do
var=$x
done < var.txt
# Do some stuff, change var to a new value
echo $var > var.txt
The variable is only a single integer, so the text file feels overkill.

There's no need to copy x into var; because the loop's input comes from a redirection rather than a pipe, x stays in scope in the current shell after the loop. Alternatively, skip the loop entirely:
read var < var.txt
# do stuff with var
echo $var > var.txt
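Put together for the cron use case, a minimal sketch (the only extra wrinkle is the very first run, before var.txt exists; note that cron doesn't start jobs in the script's directory, so an absolute path to var.txt is safer in practice):
#!/bin/bash
# Load the saved value; default to 0 on the very first run, before var.txt exists.
if [ -f var.txt ]; then
    read var < var.txt
else
    var=0
fi
# Do some stuff, change var to a new value
var=$((var + 1))
# Save it for the next run.
echo "$var" > var.txt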
I recommend using a simple text file to store the variable. However, there is the (highly questionable) option of a self-modifying script. FOR ENTERTAINMENT PURPOSES ONLY!
#!/bin/bash
read val < <( tail -n 1 "$0" )
(( val++ ))
echo "$val"
tmp=$(mktemp /tmp/XXXXXXX)
sed '$s/.*/'$val'/' "$0" > "$tmp"
mv "$tmp" "$0"
exit
0
The key is to have the next-to-last line be the exit command, so nothing after it will execute. The last line is the variable value you want to persist. When the script runs, it reads from its own last line. Before it exits, it uses sed to write a copy of itself to a temp file, with the last line changed to the current value of the persistent variable. Then we overwrite the current script with the temp file (assuming we have permission to do so).
But seriously? Don't do this.

I know this is an old question, but I still decided to post my solution here in the hope that it might help others who come here looking for a way to serialize variables between sessions.
The simple way is to just write "var_name=var_value" into a file, say ./environ, and then "source ./environ" in subsequent sessions. For example:
echo "var1=$var1" > ./environ
A more comprehensive (and elegant?) way, which persists all attributes of the variables, is to use "declare -p":
declare -p var1 var2 > ./environ
# NOTE: no '$' before var1, var2
Later on, after "source ./environ", var1 and var2 are restored with all their attributes, not just their values. This means it can handle arrays, integers, etc.
One caveat with "declare -p xx", though: if you wrap the "source ./environ" in a function, the sourced variables are visible within that function only, because "declare" declares variables as local by default. To circumvent this, either "source" outside of any function (or in your "main" function), or modify ./environ to add "-g" after declare (which makes the corresponding variables global). For instance:
sed -i 's/^declare\( -g\)*/declare -g/' ./environ
# the \( -g\)* ensures "-g" is not duplicated
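A quick round-trip sketch of the declare -p approach (variable names here are just illustrative):
arr=(one "two words" three)
declare -i count=7
declare -p arr count > ./environ
# ...later, in a new session:
source ./environ
echo "${arr[1]}" "$count"   # two words 7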

1- You can simplify your script, as you only have one variable:
var=$(cat var.txt)
# Do some stuff, change var to a new value
echo "$var" > var.txt
2- You can store your variable in the environment:
export var
# Do some stuff, change var to a new value
But then you'll need to invoke it as . script.ksh (note the dot at the beginning) so it runs in the current shell, the script must not call exit, and I'm not sure this would work from cron...

Depending on your use case this might be overkill, but if you need to store and keep track of multiple variables (or variables from multiple scripts), consider using SQLite, which has a command-line interface (sqlite3) and is usually preinstalled out of the box on Linux/macOS systems.
DB='storage.db'
KEY1='eurusd'
VAL1=1.19011
KEY2='gbpeur'
VAL2=1.16829
# create table if not present (ONLY NEEDS TO BE RUN ONCE)
QUERY_CREATE="CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, name TEXT NOT NULL, value NUMERIC NOT NULL);"
sqlite3 "$DB" "$QUERY_CREATE"
# write a key-value pair to database (creates a new row each time)
QUERY_INSERT="INSERT INTO records(name, value) VALUES ('${KEY1}', '${VAL1}');"
sqlite3 "$DB" "$QUERY_INSERT"
# write a key-value pair to database (REPLACE previous value!)
# using 42 as a hard-coded row ID
QUERY_REPLACE="REPLACE INTO records(id, name, value) VALUES (42, '${KEY2}', '${VAL2}');"
sqlite3 "$DB" "$QUERY_REPLACE"
# read value from database
QUERY_SELECT1="SELECT value FROM records WHERE name='${KEY1}';"
QUERY_SELECT2="SELECT value FROM records WHERE name='${KEY2}';"
echo "***** $KEY1 *****"
# store db value in a variable
db_value1=$(sqlite3 "$DB" "$QUERY_SELECT1")
echo $db_value1
## OUTPUT: 1.19011
echo "***** $KEY2 *****"
db_value2=$(sqlite3 "$DB" "$QUERY_SELECT2")
echo $db_value2
## OUTPUT: 1.16829
NOTE: If you do not explicitly pass the row ID, a new row is added on each script invocation. To always update the same row, use REPLACE INTO with an explicit ID (e.g. 42, as in the REPLACE INTO statement above). Run the script multiple times to see how the output differs for KEY1 and KEY2.
NOTE 2: In this example the values are numeric; if you need to store strings, use TEXT instead of NUMERIC in the CREATE TABLE statement.
And if you want an open-source GUI for visualising the database, DB Browser for SQLite is available for macOS/Linux/Windows (there are dozens more).
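As an aside, if you want exactly one row per key without hard-coding row IDs, one option is to make the key column the primary key and use SQLite's INSERT OR REPLACE. A sketch (the table name kv and the helper functions are my own, not from the answer above; note the naive quoting, so keys and values must not contain single quotes):
sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS kv (name TEXT PRIMARY KEY, value TEXT);"
kv_set() { sqlite3 "$DB" "INSERT OR REPLACE INTO kv(name, value) VALUES ('$1', '$2');"; }
kv_get() { sqlite3 "$DB" "SELECT value FROM kv WHERE name='$1';"; }
kv_set eurusd 1.19011
kv_get eurusd    # prints: 1.19011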

To store multiple variables between runs, a solution I considered is to save them in the format my_var=my_value in a separate file.
Then I include two functions to set and retrieve the variables.
In the file storing the variables and their values:
Let's call this file context.dat
# Here I store the variables and their values
my_var_x=1
my_var_y=boo
my_var_z=0
In the actual script:
Let's call the file multiple_run.sh
context=./context.dat
function update_variables(){
# update the variable context
source "$context"
}
function set_variable(){
# store variable
variable=$1 # variable to be set
value=$2    # value to give to the variable
# modify the file storing the value (anchored, so only that variable's own line matches)
sed -i "s/^${variable}=.*/${variable}=${value}/" "$context"
}
##################
# Test code
echo "$my_var_x"        # empty: nothing loaded yet
update_variables
echo "$my_var_x"        # 1: value loaded from context.dat
# do something
set_variable my_var_x 2
update_variables
echo "$my_var_x"        # 2: updated value reloaded
This is one approach among others. With this method, you need to create the storage file beforehand, with one line per variable. Also, context.dat is a priori accessible by any other script.
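To remove the need to pre-create a line for each variable, set_variable could append the entry when it is missing. A sketch in the same style (assuming the anchored sed shown above):
function set_variable(){
    variable=$1   # variable to be set
    value=$2      # value to give to the variable
    if grep -q "^${variable}=" "$context"; then
        # entry exists: update it in place
        sed -i "s/^${variable}=.*/${variable}=${value}/" "$context"
    else
        # entry missing: append it
        echo "${variable}=${value}" >> "$context"
    fi
}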

Just discovered this great simple project (a rewritten fork): a simple yet powerful key/value pair store for bash. Looks perfect. Behind the scenes each database is a directory, each key is a file, and the value is the file's contents.
https://github.com/imyller/kv-sh
Tiny key-value database
Configurable database directory (default: ~/.kv-sh)
Used by importing functions via $ . ./kv-sh
Full database dump/restore
Support for secondary read-only defaults database
. ./kv-sh # import kv-sh functions (uses the default database directory; see the configuration environment variables for available options)
kvset <key> <value> # assign value to key
kvget <key> # get value of key
kvdel <key> # delete key
kvexists <key> # check if key exists
kvkeys {-l|-d|-a} # list all keys (-l local only, -d default only, -a all (default))
kvlist {-a} # list all key/value pairs (-a all keys, including default)
kvdump {-a} # database dump (-a all keys, including default)
kvimport # database import (overwrite)
kvrestore # database restore (clear and restore)
kvclear # clear database
Defaults database
kv-sh supports a secondary read-only defaults database. If enabled, key-value pairs from the defaults database are returned when no local value is set.
Enable defaults database by setting DB_DEFAULTS_DIR:
DB_DIR="/tmp/.kv" DB_DEFAULTS_DIR="/tmp/.kv-default" . ./kv-sh
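A minimal usage sketch based on the commands listed above (assuming kvexists reports existence via its exit status):
. ./kv-sh                  # load the functions
kvset counter 42
kvget counter              # prints: 42
kvexists counter && echo "counter is set"
kvdel counter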

I ended up doing the following. I would prefer the variables in one file, but this bloats the code slightly. You can store multiple variables in a separate file, say variables.txt, and then have your main program in, say, main.sh. It might be better to write separate scripts for loading and saving variables, though.
For variables.txt:
A=0
B=0
C=0
For main.sh:
#!/bin/bash
#reload variables
A=$(grep "^A=" ./variables.txt | cut -d"=" -f2)
B=$(grep "^B=" ./variables.txt | cut -d"=" -f2)
C=$(grep "^C=" ./variables.txt | cut -d"=" -f2)
#print variables
printf "%s\n" "$A"
printf "%s\n" "$B"
printf "%s\n" "$C"
#update variables
A=$((A+1))
B=$((B+2))
C=$((C+3))
#save variables to file
#for A: drop the old entry, append the new one, then replace the file
grep -v "^A=" ./variables.txt > ./tmp.txt
printf "A=%s\n" "$A" >> ./tmp.txt
mv ./tmp.txt ./variables.txt
#for B
grep -v "^B=" ./variables.txt > ./tmp.txt
printf "B=%s\n" "$B" >> ./tmp.txt
mv ./tmp.txt ./variables.txt
#for C
grep -v "^C=" ./variables.txt > ./tmp.txt
printf "C=%s\n" "$C" >> ./tmp.txt
mv ./tmp.txt ./variables.txt
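The three near-identical save blocks above can be collapsed with a loop and bash's indirect expansion ${!name}; a sketch:
for name in A B C; do
    grep -v "^${name}=" ./variables.txt > ./tmp.txt
    printf '%s=%s\n' "$name" "${!name}" >> ./tmp.txt
    mv ./tmp.txt ./variables.txt
done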

Related

Use a set of variables that start with the same string in bash

I know something like this is possible with DOS but I am not sure how to do it within bash.
I am writing a script that takes some configuration data: source, name, and destination. There will be a variable number of these in the configuration. I need to iterate over each set.
So, for example:
#!/bin/bash
FOLDER_1_SOURCE="/path/one"
FOLDER_1_NAME="one"
FOLDER_1_DESTINATION="one"
FOLDER_2_SOURCE="/path/two two"
FOLDER_2_NAME="two"
FOLDER_2_DESTINATION="here"
FOLDER_3_SOURCE="/something/random"
FOLDER_3_NAME="bravo"
FOLDER_3_DESTINATION="there"
FOLDER_..._SOURCE="/something/random"
FOLDER_..._NAME="bravo"
FOLDER_..._DESTINATION=""
FOLDER_X_SOURCE="/something/random"
FOLDER_X_NAME="bravo"
FOLDER_X_DESTINATION=""
Then I want to iterate over each set and get the SOURCE and NAME values for each set.
I am not stuck on this format. I just don't know how else to do this. The end goal is that I have 1 or more set of variables with source, name, and destination and then I need to iterate over them.
The answer to this type of question is nearly always "use arrays".
declare -a folder_source folder_name folder_dest
folder_source[1]="/path/one"
folder_name[1]="one"
folder_dest[1]="one"
folder_source[2]="/path/two two"
folder_name[2]="two"
folder_dest[2]="here"
folder_source[3]="/something/random"
folder_name[3]="bravo"
folder_dest[3]="there"
folder_source[4]="/something/random"
folder_name[4]="bravo"
folder_dest[4]=""
for ((i=1; i<=${#folder_source[@]}; ++i)); do
echo "$i source:" "${folder_source[$i]}"
echo "$i name:" "${folder_name[$i]}"
echo "$i destination:" "${folder_dest[$i]}"
done
Demo: https://ideone.com/gZn0wH
Bash array indices are zero-based, but we just leave the zeroth slot unused here for convenience.
Tangentially, avoid upper case for your private variables.
AFAIK bash does not have a built-in facility for listing all variables. A workaround, which also mimics what is going on in DOS, is to use environment variables and restrict your search to those. In this case, you could do:
printenv | grep '^FOLDER' | cut -d= -f1
This is the same as running, in the Windows CMD shell:
SET FOLDER
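Worth noting: bash does have an expansion for listing variable names by prefix, ${!prefix@}, which works on shell variables without exporting them. For example:
for name in "${!FOLDER_@}"; do
    echo "$name = ${!name}"
done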

How to change a bash variable in a configuration file? [duplicate]


How to source a variable list pulled using sqlplus in bash without creating a file

I'm trying to source a variable list which is populated into one single variable in bash.
I then want to source this single variable so that its contents (which are other variables) become available to the script.
I want to achieve this without having to spool the sqlplus output to a file and then source that file (that approach already works; I tried it).
Please find below what I'm trying:
#!/bin/bash
var_list=$(sqlplus -S /@mydatabase << EOF
set pagesize 0
set trimspool on
set headsep off
set echo off
set feedback off
set linesize 1000
set verify off
set termout off
select varlist from table;
EOF
)
#This already works when I echo any variable from the list
#echo "$var_list" > var_list.dat
#. var_list.dat
#echo "$var1, $var2, $var3"
#Im trying to achieve the following
. $(echo "var_list")
echo "$any_variable_from_var_list"
The contents of var_list from the database are as follows:
var1="Test1"
var2="Test2"
var3="Test3"
I also tried sourcing it other ways such as:
. <<< $(echo "$var_list")
. $(cat "$var_list")
I'm not sure if I need to read in each line now using a while loop.
Any advice is appreciated.
You can:
. /dev/stdin <<<"$var_list"
<<< is a here-string. It redirects the data after <<< to standard input.
/dev/stdin represents standard input, so reading from file descriptor 0 is like opening /dev/stdin and calling read() on the resulting file descriptor.
Because the source command needs a filename, we pass it /dev/stdin and redirect the data to be read onto standard input. That way source reads the commands from standard input while thinking it's reading from a file, and we get to feed it whatever data we want.
Using /dev/stdin for tools that expect a file is quite common. For references, see: the bash manual on here strings, POSIX.1 base definitions 2.1.1p4 (last bullet point), the Linux kernel documentation on /dev/ directory entries, the bash manual on shell builtins, and maybe C99 7.19.3p7.
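Putting it together with the var_list contents from the question, a short demo:
var_list='var1="Test1"
var2="Test2"
var3="Test3"'
. /dev/stdin <<<"$var_list"
echo "$var1 $var2 $var3"   # Test1 Test2 Test3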
I needed a way to store dotenv values in files locally and in vars for DevOps pipelines, so I could then source them into the runtime environment on demand (from a file when available, and from vars when not). Moreover, I needed to store different dotenv sets in different vars and use them based on the source branch (which I load into $ENV in .gitlab-ci.yml via export ENV=${CI_COMMIT_BRANCH:=develop}). With this I'll have developEnv, qaEnv, and productionEnv, each being a var containing its appropriate dotenv contents (being redundant to be clear).
unset FOO                          # clear, so we can confirm loading afterwards
ENV=develop
developEnv="VERSION=1.2.3
FOO=bar"                           # a simple dotenv in a var, with a line break (as the files passed through will have)
envVarName=${ENV}Env               # our dynamic var name
source <(cat <<< "${!envVarName}") # source its contents via indirect expansion
echo "$FOO"
# bar

Exporting environment variables for local development and heroku deployment

I would like to setup some files for development, staging and production with environment variables, for example:
application_root/development.env
KEY1=value1
KEY2=value2
There would be similar files staging.env and production.env.
I am looking for a couple different bash scripts which would allow the loading of all these variables in either development or staging/production.
In local development I want to effectively run export KEY1=value1 for each line in the file.
For staging/production I will be deploying to Heroku and would like to effectively run heroku config:set -a herokuappname KEY1=value1 for each line in the staging or production.env files.
I know there are some gems designed for doing this, but it seems like this might be pretty simple. I also like the flexibility of having the .env files as simple lists of keys and values, not specifically tied to any language/framework. If I ever need to change how these variables are loaded, it's a matter of changing the script, not the .env files.
In the simplest form, you can load the key-value pairs into a bash array as follows:
IFS=$'\n' read -d '' -ra nameValuePairs < ./development.env
In Bash v4+, it's even simpler:
readarray -t nameValuePairs < ./development.env
You can then pass the resulting "${nameValuePairs[@]}" array to commands such as export or heroku config:set ...; e.g.:
export "${nameValuePairs[@]}"
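A quick end-to-end sketch of that flow (file contents as in the question):
printf '%s\n' 'KEY1=value1' 'KEY2=value2' > ./development.env
readarray -t nameValuePairs < ./development.env
export "${nameValuePairs[@]}"
echo "$KEY1 $KEY2"   # value1 value2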
Note, however, that the above only works as intended if the input *.env file meets all of the following criteria:
the keys are syntactically valid shell variable names and the lines have the form <key>=<value>, with no whitespace around =
the lines contain no quoting and no leading or trailing whitespace
there are no empty/blank lines or comment lines in the file.
the values are confined to a single line each.
A different approach is needed with files that do not adhere to this strict format; for instance, this related question deals with files that may contain quoted values.
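If your files deviate only by containing blank lines and # comments, a small pre-filter keeps the simple approach workable; a sketch (still assuming unquoted, single-line values):
readarray -t nameValuePairs < <(grep -Ev '^[[:space:]]*(#|$)' ./development.env)
export "${nameValuePairs[@]}"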
Below is the source code for a bash script named load_env (the .sh suffix is generally not necessary and ambiguous):
You'd invoke it with the *.env file of interest, and it would perform the appropriate action (running heroku config:set … or export) based on the filename.
However, as stated, you must source the script (using source or its effective bash alias, .) in order to create environment variables (export) visible to the current shell.
To prevent obscure failures, the script complains if you pass a development.env file and have invoked the script without sourcing.
Examples:
./load_env ./staging.env
. ./load_env ./development.env # !! Note the need to source
load_env source code
#!/usr/bin/env bash

# Helper function that keeps its aux. variables localized.
# Note that the function itself remains defined after sourced invocation, however.
configOrExport() {
  local envFile=$1 doConfig=0 doExport=0 appName
  case "$(basename "$envFile" '.env')" in
    staging)
      doConfig=1
      # Set the desired app name here.
      appName=stagingapp
      ;;
    production)
      doConfig=1
      # Set the desired app name here.
      appName=productionapp
      ;;
    development)
      doExport=1
      ;;
    *)
      echo "ERROR: Invalid or missing *.env file name: $(basename "$envFile" '.env')" >&2; exit 2
  esac
  # Make sure the file exists and is readable.
  [[ -r "$envFile" ]] || { echo "ERROR: *.env file not found or not readable: $envFile" >&2; exit 2; }
  # If variables must be exported, make sure the script is being sourced.
  [[ $doExport -eq 1 && $0 == "$BASH_SOURCE" ]] && { echo "ERROR: To define environment variables, you must *source* this script." >&2; exit 2; }
  # Read all key-value pairs from the *.env file into an array.
  # Note: This assumes that:
  #  - the keys are syntactically valid shell variable names
  #  - the lines contain no quoting and no leading or trailing whitespace
  #  - there are no empty/blank lines or comment lines in the file.
  IFS=$'\n' read -d '' -ra nameValuePairs < "$envFile"
  # Run the configuration command.
  (( doConfig )) && { heroku config:set -a "$appName" "${nameValuePairs[@]}" || exit; }
  # Export the variables (define them as environment variables).
  (( doExport )) && { export "${nameValuePairs[@]}" || exit; }
}

# Invoke the helper function.
configOrExport "$@"

Bash store variable between multiples runs [duplicate]

