I wrote these two simple functions to back up and restore the content of a bash dictionary:
declare -A dikv
declare -A dict
backup_dikv()
{
    FILE=$1
    rm -f $FILE
    for k in "${!dikv[@]}"
    do
        echo "$k,${dikv[$k]}" >> $FILE
    done
}
restore_dict()
{
    FILE=$1
    for i in $(cat $FILE)
    do
        key=$(echo $i | cut -f 1 -d ",")
        val=$(echo $i | cut -f 2 -d ",")
        dict[$key]=$val
    done
}
# Initial values
dikv=( ["k1"]="v1" ["k2"]="v2" ["k3"]="v3" ["k4"]="v4")
backup_dikv /tmp/backup
restore_dict /tmp/backup
echo "${!dict[@]}"
echo "${dict[@]}"
My questions:
As you can see, these two functions are very limited, because the names of the backed-up (dikv) and restored (dict) dictionaries are hardcoded. I would like to pass the dictionary as an input ($2) argument, but I don't know how to pass dictionaries as function arguments in bash.
Is this method of writing keys and values into a file in a string format ("key","value"), and parsing that format to restore the dictionary, the only / most efficient way to do it? Do you know of a better mechanism to back up and restore a dictionary?
Thanks!
Use declare -p to reliably serialize variables regardless of their type
#!/usr/bin/env bash
if [ -f saved_vars.sh ]; then
    # Restore saved variables
    . saved_vars.sh
else
    # No saved variables, so let's populate them
    declare -A dikv=([foo]="foo bar from dikv" [bar]="bar baz from dikv")
    declare -A dict=([baz]="baz qux from dict" [qux]="qux corge from dict")
fi

# Serialise (back up) dikv and dict into the saved_vars.sh file
declare -p dikv dict >'saved_vars.sh'

printf %s\\n "${!dict[@]}"
printf %s\\n "${dict[@]}"
printf %s\\n "${!dikv[@]}"
printf %s\\n "${dikv[@]}"
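As a quick sanity check (the array name `orig` and the dump path are my own examples), the `declare -p` dump is itself valid bash and can be sourced back to recreate the array exactly, including values containing spaces or commas:

```shell
#!/usr/bin/env bash
# Round-trip sketch: serialize with declare -p, restore by sourcing the dump.
declare -A orig=([k1]="v 1" [k2]="v,2")
declare -p orig > /tmp/orig.dump   # the dump is an executable declare statement
unset orig
. /tmp/orig.dump                   # recreates orig with its type and contents intact
echo "${orig[k1]}"
echo "${orig[k2]}"
```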
Found a way to pass arrays to functions, using local -n in this way:
declare -A dikv
declare -A dict
backup_dictionary()
{
    local -n dict_ref=$1
    FILE=/tmp/backup
    for k in "${!dict_ref[@]}"
    do
        echo "$k,${dict_ref[$k]}" >> $FILE
    done
}
restore_dictionary()
{
    local -n dict_ref=$1
    FILE=/tmp/backup
    for i in $(cat $FILE)
    do
        key=$(echo $i | cut -f 1 -d ",")
        val=$(echo $i | cut -f 2 -d ",")
        dict_ref[$key]=$val
    done
}
dikv=( ["k1"]="v1" ["k2"]="v2" ["k3"]="v3" ["k4"]="v4")
backup_dictionary dikv
restore_dictionary dict
echo "${!dict[@]}"
echo "${dict[@]}"
Still trying to find the most convenient way to backup and restore the content.
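Combining the two ideas above: a nameref is only needed for element-wise access, while for whole-array backup it is enough to pass the array name straight to `declare -p`. A minimal sketch (the function name `backup_assoc` and the dump path are made up, not from the answers above):

```shell
#!/usr/bin/env bash
# Back up any associative array by name; restore by sourcing at top level.
backup_assoc()                       # usage: backup_assoc arrayname file
{
    declare -p "$1" > "$2"
}

declare -A conf=([host]="example.com" [port]="8080")
backup_assoc conf /tmp/conf.dump
unset conf
# Source at the top level: sourcing inside a function would make the
# restored array local to that function (declare without -g).
. /tmp/conf.dump
echo "${conf[host]}:${conf[port]}"
```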
Related
I want to read the content of one YAML file, which is a simple key-value mapping, into a bash associative array.
Example map.yaml
---
a: "2"
b: "3"
api_key: "somekey:thatcancontainany#chara$$ter"
The key can contain any characters excluding space.
The value can contain any characters without limitations ($!:=# etc.).
What will always be constant is that the separator between key and value is :
the script proceses.sh
#!/usr/bin/env bash
declare -A map
# how to read here into map variable, from map.yml file
#map=populatesomehowfrommap.yaml
for key in "${!map[@]}"
do
    echo "key : $key"
    echo "value: ${map[$key]}"
done
I tried to play around with the yq tool (similar to the JSON tool jq) but have not had success yet.
With the following limitations:
simple YAML key: "value" in single lines
keys cannot contain :
values are always wrapped in "
#!/usr/bin/env bash
declare -A map
regex='^([^:]+):[[:space:]]+"(.*)"[[:space:]]*$'
while IFS='' read -r line
do
    if [[ $line =~ $regex ]]
    then
        printf -v map["${BASH_REMATCH[1]}"] '%b' "${BASH_REMATCH[2]}"
    else
        echo "skipping: $line" 1>&2
    fi
done < map.yaml
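To sanity-check that regex parser, here's a self-contained variant (the here-document stands in for map.yaml):

```shell
#!/usr/bin/env bash
# Same regex parser as above, fed from a here-document instead of map.yaml.
declare -A map
regex='^([^:]+):[[:space:]]+"(.*)"[[:space:]]*$'
while IFS='' read -r line
do
    [[ $line =~ $regex ]] && printf -v map["${BASH_REMATCH[1]}"] '%b' "${BASH_REMATCH[2]}"
done <<'EOF'
a: "2"
b: "3"
EOF
echo "${map[a]} ${map[b]}"
```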
Update
Here's a robust solution using yq, which would be simpler if the builtin @tsv filter implemented the lossless TSV escaping rules instead of the CSV ones.
#!/usr/bin/env bash
declare -A map
while IFS=$'\t' read -r key value
do
    printf -v map["$key"] '%b' "$value"
done < <(
    yq e '
        to_entries | .[] |
        [
            (.key   | sub("\\\\","\\\\") | sub("\n","\\n") | sub("\r","\\r") | sub("\t","\\t")),
            (.value | sub("\\\\","\\\\") | sub("\n","\\n") | sub("\r","\\r") | sub("\t","\\t"))
        ] |
        join("	")
    ' map.yaml
)
note: the join needs a literal Tab
One way, is by letting yq output each key/value pair on a single line, in the following syntax:
key#value
Then we can use bash's IFS to split those values.
The # is just an example and can be replaced with any single char
This works, but please note the following limitations:
It does not expect nested values, only a flat list.
The field separator (# in the example) must not occur in the YAML keys or values.
#!/bin/bash
declare -A arr
while IFS="#" read -r key value
do
    arr[$key]="$value"
done < <(yq e 'to_entries | .[] | (.key + "#" + .value)' input.yaml)

for key in "${!arr[@]}"
do
    echo "key : $key"
    echo "value: ${arr[$key]}"
done
$ cat input.yaml
---
a: "bar"
b: "foo"
$
$
$ ./script.sh
key : a
value: bar
key : b
value: foo
$
I used @Fravadona's answer, so I will mark it as the answer.
After some modification for my use case, what worked for me looks like:
DEFS_PATH="definitions"
declare -A ssmMap
for file in ${DEFS_PATH}/*
do
    filename=$(basename -- "$file")
    projectName="${filename%.*}"
    regex='^([^:]+):[[:space:]]*"(.*)"[[:space:]]*$'
    while IFS='' read -r line
    do
        if [[ $line =~ $regex ]]
        then
            value="${BASH_REMATCH[2]}"
            value=${value//"{{ ssm_env }}"/$INFRA_ENV}
            value=${value//"{{ ssm_reg }}"/$SSM_REGION}
            value=${value//"{{ projectName }}"/$projectName}
            printf -v ssmMap["${BASH_REMATCH[1]}"] '%b' "$value"
        else
            echo "skipping: $line" 1>&2
        fi
    done < "$file"
done
Basically, in the real use case I have one folder where the YAML definitions are located. I iterate over all of them to build the associative array ssmMap.
My shell script looks like this:
i="10 ID:794 A:TX-SPN S:0"
A=`echo $i | cut -d" " -f 3| cut -d":" -f2` # gives TX-SPN
ID=`echo $i | cut -d" " -f 2|cut -d":" -f2` # gives 794
sZeroCount=`echo $i | cut -d" " -f 1` # gives 10
With the above commands I am able to get the values for the A, ID, and sZeroCount variables. Here the value of i contains only one entry, but it is not limited to one; it may go up to 1000. Is there any better approach by which I can obtain those values?
With an array. Split string i with separator space and : to array a:
i="10 ID:794 A:TX-SPN S:0"
IFS=" :" a=($i)
echo "${a[4]}" # TX-SPN
echo "${a[2]}" # 794
echo "${a[0]}" # 10
With chepner's bugfix:
i="10 ID:794 A:TX-SPN S:0"
IFS=": " read -a a <<< "$i"
echo "${a[4]}" # TX-SPN
echo "${a[2]}" # 794
echo "${a[0]}" # 10
With this piece of code you can convert your line into a proper associative array:
declare -A dict
for token in START:$i # choose a value for START that is not a key
do
IFS=: read key value <<< "$token"
dict["$key"]=$value
done
You can dump the result using declare -p dict:
declare -A dict='([A]="TX-SPN" [S]="0" [ID]="794" [START]="10" )'
And you can access the contents e. g. using this: echo "${dict[A]}"
TX-SPN
The start value (the 10 in your example) can be accessed as "${dict[START]}". Choose a value for START that doesn't appear as key in your input.
If you want to iterate over a lot of lines like your $i, you can do it like this:
while read -r i
do
    declare -A dict
    # ... add code from above ...
done < input_file
The advantage of using associative arrays is that this way you can access your values in a much more understandable way, i. e. by using the keys instead of some arbitrary indexes which can easily be mixed up and which need constant maintenance when changing your code.
I'm trying to split key value pairs (around an = sign) which I then use to edit a config file, using bash. But I need an alternative to the <<< syntax for IFS.
The below works on my host system, but when I log in to my Ubuntu virtual machine through ssh I have the wrong bash version. Whatever I try, <<< fails. (I am definitely calling the right version of bash at the top of the file, using #!/bin/bash, and I've tried #!/bin/sh etc. too.)
I know I can use IFS as follows on my host mac os x system:
var="word=hello"
IFS='=' read -a array <<<"$var"
echo ${array[0]} ${array[1]}
#alternative - for calling through e.g. sh file.sh param=value
for var in "$@"
do
    IFS='=' read -a array <<<"$var"
    echo ${array[0]} ${array[1]}
done
#alternative
IFS='=' read -ra array <<< "a=b"
declare -p array
echo ${array[0]} ${array[1]}
But this doesn't work on my vm.
I also know that I should be able to switch the <<< syntax to backticks, $(), or echo "$var" | ..., but I can't get it to work, as follows:
#Fails
IFS='=' read -ra myarray -d '' <"$var"
echo ${array[0]} ${array[1]}

#Fails
echo "$var" | IFS='=' read -a array
echo ${array[0]} ${array[1]}

#Fails
echo "a=b" | IFS='=' read -a array
declare -p array
echo ${array[0]} ${array[1]}
Grateful for any pointers as I'm really new to bash.
Your first failed attempt is because < and <<< are different operators. < opens the named file.
The second fails because read only sets the value of array in the subshell started by the pipe; that shell exits after the completion of the pipe, and array disappears with it.
The third fails for the same reason as the second; the declare that follows doesn't make any difference.
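If your bash is at least 4.2, another sketch that sidesteps the subshell problem is `shopt -s lastpipe`, which runs the last stage of a pipeline in the current shell (this only takes effect in non-interactive scripts, where job control is off):

```shell
#!/usr/bin/env bash
# lastpipe keeps the final pipeline command in the current shell,
# so the array assigned by read survives the pipe (bash >= 4.2, scripts only).
shopt -s lastpipe
echo 'foo=bar' | IFS='=' read -r -a array
echo "${array[0]} ${array[1]}"
```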
Your attempts have been confounded because you have to use the variable in the same sub-shell as read.
$ echo 'foo=bar' | { IFS='=' read -a array; echo ${array[0]}; }
foo
And if you want your variable durable (ie, outside the sub-shell scope):
$ var=$(echo 'foo=bar' | { IFS='=' read -a array; echo ${array[0]}; })
$ echo $var
foo
Clearly, it isn't pretty.
Update: If -a is missing, that suggests you're out of the land of arrays. You can try parameter substitution:
str='foo=bar'
var=${str%=*}
val=${str#*=}
And if that doesn't work, fall back to good ole cut:
str='foo=bar'
var=$(echo $str | cut -f 1 -d =)
val=$(echo $str | cut -f 2 -d =)
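The parameter-substitution fallback can be checked quickly; this sketch uses the same `${str%=*}` / `${str#*=}` forms as above and needs no external process:

```shell
#!/usr/bin/env bash
# Split key=value on the '=' using only parameter substitution.
str='foo=bar'
var=${str%=*}    # strip the shortest suffix matching '=*'  -> key
val=${str#*=}    # strip the shortest prefix matching '*='  -> value
echo "$var $val"
```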
I have a shell script and a common configuration file where all the generic path, username and other values are stored. I want to get the value from this configuration file while I am running the sh script.
example:
sample.conf
pt_user_name=>xxxx
pt_passwd=>Junly#2014
jrnl_source_folder=>x/y/v
pt_source_folder=>/x/y/r/g
css_source_folder=>/home/d/g/h
Now I want to get something like this in my sh script:
cd $css_source_folder
This command inside the shell script should take me to the location /home/d/g/h while the script is running.
Is there any way to achieve this other than with grep and awk?
Thanks
Rinu
If you want to read from the conf file every time, then grep and cut might help you. Suppose you need the value for the css_source_folder property (I am assuming you know the property name whose value you want):
prop1="css_source_folder"
value_of_prop1=`grep $prop1 sample.conf | cut -f2 -d "=" | cut -f2 -d ">"`
like,
[db2inst2#pegdb2 ~]$ vi con.conf
[db2inst2#pegdb2 ~]$ grep css_source_folder con.conf
css_source_folder=>/home/d/g/h
[db2inst2#pegdb2 ~]$ value=`grep css_source_folder con.conf | cut -f2 -d "="`
[db2inst2#pegdb2 ~]$ echo $value
>/home/d/g/h
[db2inst2#pegdb2 ~]$ value=`grep css_source_folder con.conf | cut -f2 -d "=" | cut -f2 -d ">"`
[db2inst2#pegdb2 ~]$ echo $value
/home/d/g/h
If you want to read all properties at once, then apply loop and this will solve the purpose
Yes, you can get the configuration names and values relatively simply and associate them through array indexes. Reading your config can be done like this:
#!/bin/bash
test -r "$1" || { echo "error: unable to read conf file [$1]"; exit 1; }

declare -a tag
declare -a data
let index=0

while read line || test -n "$line"; do
    tag[index]="${line%%\=*}"
    data[index]="${line##*\>}"
    ((index++))
done < "$1"

for ((i=0; i<${#tag[@]}; i++)); do
    printf " %18s %s\n" "${tag[$i]}" "${data[$i]}"
done
After reading the config file, you have the config name tags and config values stored in the arrays tag and data, respectively:
pt_user_name xxxx
pt_passwd Junly#2014
jrnl_source_folder x/y/v
pt_source_folder /x/y/r/g
css_source_folder /home/d/g/h
At that point, it is a matter of determining how you will use them, whether as a password or as a directory. You may have to write a couple of functions, but the basic function of given a tag, get the correct data can be done like this:
function getvalue {
    test -n "$1" || { echo "error in getvalue, no data supplied"; return 1; }
    for ((i=0; i<${#tag[@]}; i++)); do
        if test "$1" = "${tag[$i]}"; then
            echo " eval cmd ${data[$i]}"
            return $i
        fi
    done
    return 255    # not found (bash return codes must be 0-255)
}
echo -e "\nget value for 'jrnl_source_folder'\n"
getvalue "jrnl_source_folder"
The function will return the index of the data value and can execute any command needed. You seem to have directory paths and passwords, so you may need a function for each. To illustrate, the output of the example is:
get value for jrnl_source_folder
eval cmd x/y/v
You can also use an associative array in later versions of BASH to store the tag and data in a single associative array. You may also be able to use indirect references on the tag and data values to process them. I simply took the straight forward approach in the example.
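The associative-array variant mentioned above could look like this sketch (bash 4+; the here-document stands in for sample.conf, and the splitting on the => separator is mine):

```shell
#!/usr/bin/env bash
# One associative array instead of parallel tag/data arrays.
declare -A conf
while read -r line || test -n "$line"; do
    conf["${line%%=>*}"]="${line#*=>}"   # key before '=>', value after it
done <<'EOF'
pt_user_name=>xxxx
css_source_folder=>/home/d/g/h
EOF
echo "${conf[css_source_folder]}"
```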
Try this eval $(awk -F'=>' '{print $1"=\""$2"\";"}' sample.conf):
EX:
eval $(awk -F'=>' '{print $1"=\""$2"\";"}' sample.conf); echo $pt_user_name
xxxx
Using sed :
eval $(sed -re 's/=>/="/g' -e 's/$/";/g' sample.conf); echo $pt_passwd
Junly#2014
Using perl :
eval $(perl -F'=>' -alne 'print "$F[0]=\"$F[1]\";"' sample.conf); echo $pt_source_folder
/x/y/r/g
Using tr :
eval $(tr -d '>' <sample.conf); echo "$css_source_folder"
/home/d/g/h
PS. Using tr blindly to remove > may cause undesirable results depending on the content of sample.conf, but it works fine for the one provided.
I'm writing a bash script to modify a config file which contains a bunch of key/value pairs. How can I read the key and find the value and possibly modify it?
A wild stab in the dark for modifying a single value:
sed -c -i "s/\($TARGET_KEY *= *\).*/\1$REPLACEMENT_VALUE/" $CONFIG_FILE
assuming that the target key and replacement value don't contain any special regex characters, and that your key-value separator is "=". Note, the -c option is system dependent and you may need to omit it for sed to execute.
For other tips on how to do similar replacements (e.g., when the REPLACEMENT_VALUE has '/' characters in it), there are some great examples here.
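For example (GNU sed is assumed for -i; the file path, key, and values are made up), replacing one value in place with the substitution above:

```shell
#!/usr/bin/env bash
# Concrete run of the sed replacement above (the system-dependent -c omitted).
CONFIG_FILE=/tmp/app.conf
printf 'port = 8080\nhost = localhost\n' > "$CONFIG_FILE"
TARGET_KEY=port
REPLACEMENT_VALUE=9090
sed -i "s/\($TARGET_KEY *= *\).*/\1$REPLACEMENT_VALUE/" "$CONFIG_FILE"
cat "$CONFIG_FILE"   # only the port line changes
```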
Hope this helps someone. I created a self contained script, which required config processing of sorts.
#!/bin/bash
CONFIG="/tmp/test.cfg"
# Use this to set the new config value, needs 2 parameters.
# You could check that $1 and $2 is set, but I am lazy
function set_config(){
sudo sed -i "s/^\($1\s*=\s*\).*\$/\1$2/" $CONFIG
}
# INITIALIZE CONFIG IF IT'S MISSING
if [ ! -e "${CONFIG}" ] ; then
# Set default variable value
sudo touch $CONFIG
echo "myname=\"Test\"" | sudo tee --append $CONFIG
fi
# LOAD THE CONFIG FILE
source $CONFIG
echo "${myname}" # SHOULD OUTPUT DEFAULT (test) ON FIRST RUN
myname="Erl"
echo "${myname}" # SHOULD OUTPUT Erl
set_config myname $myname # SETS THE NEW VALUE
Assuming that you have a file of key=value pairs, potentially with spaces around the =, you can delete, modify in-place or append key-value pairs at will using awk even if the keys or values contain special regex sequences:
# Using awk to delete, modify or append keys
# In case of an error the original configuration file is left intact
# Also leaves a timestamped backup copy (omit the cp -p if none is required)
CONFIG_FILE=file.conf
cp -p "$CONFIG_FILE" "$CONFIG_FILE.orig.`date \"+%Y%m%d_%H%M%S\"`" &&
awk -F '[ \t]*=[ \t]*' '$1=="keytodelete" { next } $1=="keytomodify" { print "keytomodify=newvalue" ; next } { print } END { print "keytoappend=value" }' "$CONFIG_FILE" >"$CONFIG_FILE~" &&
mv "$CONFIG_FILE~" "$CONFIG_FILE" ||
echo "an error has occurred (permissions? disk space?)"
sed "/^$old/s/\(.[^=]*\)\([ \t]*=[ \t]*\)\(.[^=]*\)/\1\2$replace/" configfile
So I can not take any credit for this as it is a combination of stackoverflow answers and help from irc.freenode.net #bash channel but here are bash functions now to both set and read config file values:
# https://stackoverflow.com/a/2464883
# Usage: config_set filename key value
function config_set() {
    local file=$1
    local key=$2
    local val="${@:3}"
    ensureConfigFileExists "${file}"
    # create key if not exists
    if ! grep -q "^${key}=" ${file}; then
        # insert a newline just in case the file does not end with one
        printf "\n${key}=" >> ${file}
    fi
    chc "$file" "$key" "$val"
}
function ensureConfigFileExists() {
    if [ ! -e "$1" ] ; then
        if [ -e "$1.example" ]; then
            cp "$1.example" "$1";
        else
            touch "$1"
        fi
    fi
}
# thanks to ixz in #bash on irc.freenode.net
function chc() { gawk -v OFS== -v FS== -e 'BEGIN { ARGC = 1 } $1 == ARGV[2] { print ARGV[4] ? ARGV[4] : $1, ARGV[3]; next } 1' "$@" <"$1" >"$1.1"; mv "$1"{.1,}; }
# https://unix.stackexchange.com/a/331965/312709
# Usage: local myvar="$(config_get myvar)"
function config_get() {
    val="$(config_read_file ${CONFIG_FILE} "${1}")";
    if [ "${val}" = "__UNDEFINED__" ]; then
        val="$(config_read_file ${CONFIG_FILE}.example "${1}")";
    fi
    printf -- "%s" "${val}";
}

function config_read_file() {
    (grep -E "^${2}=" -m 1 "${1}" 2>/dev/null || echo "VAR=__UNDEFINED__") | head -n 1 | cut -d '=' -f 2-;
}
At first I was using the accepted answer's sed solution: https://stackoverflow.com/a/2464883/2683059
However, if the value has a / char, it breaks.
In general it's easy to extract the info with grep and cut:
cat "$FILE" | grep "^${KEY}${DELIMITER}" | cut -f2- -d"$DELIMITER"
to update you could do something like this:
mv "$FILE" "$FILE.bak"
cat "$FILE.bak" | grep -v "^${KEY}${DELIMITER}" > "$FILE"
echo "${KEY}${DELIMITER}${NEWVALUE}" >> "$FILE"
This would not maintain the order of the key-value pairs, obviously. Add error checking to make sure you don't lose your data.
I have done this:
new_port=$1
sed "s/^port=.*/port=$new_port/" "$CONFIG_FILE" > /yourPath/temp.x
mv /yourPath/temp.x "$CONFIG_FILE"
This will change port= to port=8888 in your config file if you choose 8888 as $1 for example.
Suppose your config file is in below format:
CONFIG_NUM=4
CONFIG_NUM2=5
CONFIG_DEBUG=n
In your bash script, you can use:
CONFIG_FILE=your_config_file
. $CONFIG_FILE
if [ "$CONFIG_DEBUG" == "y" ]; then
    ......
else
    ......
fi
$CONFIG_NUM, $CONFIG_NUM2, and $CONFIG_DEBUG are what you need.
After you read the values, writing them back is easy:
echo "CONFIG_DEBUG=y" >> $CONFIG_FILE