I have the properties file below and would like to parse it as described. Please help me do this.
The .ini file which I created:
[Machine1]
app=version1
[Machine2]
app=version1
app=version2
[Machine3]
app=version1
app=version3
I am looking for a solution in which the .ini file is parsed like this:
[Machine1]app = version1
[Machine2]app = version1
[Machine2]app = version2
[Machine3]app = version1
[Machine3]app = version3
Thanks.
Try:
$ awk '/\[/{prefix=$0; next} $1{print prefix $0}' file.ini
[Machine1]app=version1
[Machine2]app=version1
[Machine2]app=version2
[Machine3]app=version1
[Machine3]app=version3
How it works
/\[/{prefix=$0; next}
If a line contains [, we save that line in the variable prefix and then skip the rest of the commands and jump to the next line.
$1{print prefix $0}
If the current line is not empty, we print the prefix followed by the current line.
Adding spaces
To add spaces around any occurrence of =:
$ awk -F= '/\[/{prefix=$0; next} $1{$1=$1; print prefix $0}' OFS=' = ' file.ini
[Machine1]app = version1
[Machine2]app = version1
[Machine2]app = version2
[Machine3]app = version1
[Machine3]app = version3
This works by using = as the field separator on input and ' = ' (an equals sign surrounded by spaces) as the field separator on output.
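If your file may also contain ; or # comment lines, a slight variant (an untested sketch, not part of the original answer) skips those before applying the same prefix logic:
awk -F= '/^[;#]/{next} /\[/{prefix=$0; next} $1{$1=$1; print prefix $0}' OFS=' = ' file.ini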
I love John1024's answer. I was looking for exactly that. I have created a bash function that allows me to look up sections or specific keys based on his idea:
function iniget() {
if [[ $# -lt 2 || ! -f $1 ]]; then
echo "usage: iniget <file> [--list|<section> [key]]"
return 1
fi
local inifile=$1
if [ "$2" == "--list" ]; then
for section in $(cat $inifile | grep "\[" | sed -e "s#\[##g" | sed -e "s#\]##g"); do
echo $section
done
return 0
fi
local section=$2
local key
[ $# -eq 3 ] && key=$3
# https://stackoverflow.com/questions/49399984/parsing-ini-file-in-bash
# This awk line turns ini sections => [section-name]key=value
local lines=$(awk '/\[/{prefix=$0; next} $1{print prefix $0}' $inifile)
for line in $lines; do
if [[ "$line" = \[$section\]* ]]; then
local keyval=$(echo $line | sed -e "s/^\[$section\]//")
if [[ -z "$key" ]]; then
echo $keyval
else
if [[ "$keyval" = $key=* ]]; then
echo $(echo $keyval | sed -e "s/^$key=//")
fi
fi
fi
done
}
So given this as file.ini
[Machine1]
app=version1
[Machine2]
app=version1
app=version2
[Machine3]
app=version1
app=version3
then the following results are produced
$ iniget file.ini --list
Machine1
Machine2
Machine3
$ iniget file.ini Machine3
app=version1
app=version3
$ iniget file.ini Machine1 app
version1
$ iniget file.ini Machine2 app
version1
version2
Again, thanks to @John1024 for his answer; I was pulling my hair out trying to create a simple bash ini parser that supported sections.
Tested on Mac using GNU bash, version 5.0.0(1)-release (x86_64-apple-darwin18.2.0)
You can try using awk:
awk '/\[[^]]*\]/{ # Match pattern like [...]
a=$1;next # store the pattern in a
}
NF{ # Match non empty line
gsub("=", " = ") # Add space around the = character
print a $0 # print the line
}' file
Excellent answers here. I made some modifications to @davfive's function to fit my use case better. This version is largely the same, except that it allows whitespace before and after = characters and allows values to contain spaces.
# Get values from a .ini file
function iniget() {
if [[ $# -lt 2 || ! -f $1 ]]; then
echo "usage: iniget <file> [--list|<section> [key]]"
return 1
fi
local inifile=$1
if [ "$2" == "--list" ]; then
for section in $(cat $inifile | grep "^\\s*\[" | sed -e "s#\[##g" | sed -e "s#\]##g"); do
echo $section
done
return 0
fi
local section=$2
local key
[ $# -eq 3 ] && key=$3
# This awk line turns ini sections => [section-name]key=value
local lines=$(awk '/\[/{prefix=$0; next} $1{print prefix $0}' $inifile)
lines=$(echo "$lines" | sed -e 's/[[:blank:]]*=[[:blank:]]*/=/g')
while read -r line ; do
if [[ "$line" = \[$section\]* ]]; then
local keyval=$(echo "$line" | sed -e "s/^\[$section\]//")
if [[ -z "$key" ]]; then
echo $keyval
else
if [[ "$keyval" = $key=* ]]; then
echo $(echo $keyval | sed -e "s/^$key=//")
fi
fi
fi
done <<<"$lines"
}
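For example, given a hypothetical test.ini (not from the original post), values containing spaces now come back intact:
$ cat test.ini
[server]
greeting = hello world
$ iniget test.ini server greeting
hello world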
To take disparate sections and tack the section name (including the 'no-section'/Default case) onto each of their related keywords (along with the = and its keyvalue), this AWK one-liner will do the trick, coupled with a few clean-up regexes.
ini_buffer="$(echo "$raw_buffer" | awk '/^\[.*\]$/{obj=$0}/=/{print obj $0}')"
It will take your lines and output them the way you wanted:
+++ awk '/^\[.*\]$/{obj=$0}/=/{print obj $0}'
++ ini_buffer='[Machine1]app=version1
[Machine2]app=version1
[Machine2]app=version2
[Machine3]app=version1
[Machine3]app=version3'
A complete solution for the INI-format file
As Cloanto, maintainer of the INI-format specification, notes for the latest INI version 1.4 (2009-10-23), there are several other tricky aspects to the INI file:
character set constraint for section name
character set constraint for keyword
And lastly, the keyvalue must be able to handle pretty much anything that is not used in the section or keyword name; that includes nesting of quotes inside a pair of matching single/double quotes.
Except for the nesting of quotes, here is a complete solution (also on GitHub) for parsing an INI-format file with a default section:
# syntax: ini_file_read <raw_buffer>
# outputs: formatted bracket-nested "[section]keyword=keyvalue"
ini_file_read()
{
local ini_buffer raw_buffer hidden_default
raw_buffer="$1"
# somebody has to remove the 'inline' comment
# there is a most complex SED solution to nested
# quotes inline comment coming ... TBA
raw_buffer="$(echo "$raw_buffer" | sed '
s|[[:blank:]]*//.*||; # remove //comments
s|[[:blank:]]*#.*||; # remove #comments
t prune
b
:prune
/./!d; # remove empty lines, but only those that
# become empty as a result of comment stripping'
)"
# awk does the removal of leading and trailing spaces
ini_buffer="$(echo "$raw_buffer" | awk '/^\[.*\]$/{obj=$0}/=/{print obj $0}')" # original
ini_buffer="$(echo "$ini_buffer" | sed 's/^\s*\[\s*/\[/')"
ini_buffer="$(echo "$ini_buffer" | sed 's/\s*\]\s*/\]/')"
# finds all 'no-section' and inserts '[Default]'
hidden_default="$(echo "$ini_buffer" \
| egrep '^[-0-9A-Za-z_\$\.]+=' | sed 's/^/[Default]/')"
if [ -n "$hidden_default" ]; then
echo "$hidden_default"
fi
# finds sectional and outputs as-is
echo "$(echo "$ini_buffer" | egrep '^\[\s*[-0-9A-Za-z_\$\.]+\s*\]')"
}
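A hedged usage sketch (the function takes the raw file contents as its single argument; file.ini is the sample from the top of this page):
raw_buffer="$(cat file.ini)"
ini_file_read "$raw_buffer"
# expected output:
# [Machine1]app=version1
# [Machine2]app=version1
# [Machine2]app=version2
# [Machine3]app=version1
# [Machine3]app=version3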
The unit test for this StackOverflow post is included in this file:
https://github.com/egberts/bash-ini-file
Source:
https://github.com/egberts/easy-admin/blob/main/test/section-regex.sh
https://cloanto.com/specs/ini/#escapesequences
Related
I have a file with the following entries:
foop07_bar2_20190423152612.zip
foop07_bar1_20190423153115.zip
foop08_bar2_20190423152612.zip
foop08_bar1_20190423153115.zip
where
foop0* = host
bar* = fp
I would like to read the file and create 3 variables: the whole file name, host, and fp (which stands for file_path_differentiator).
I am using read to take the first line and get my whole-file-name variable. I thought I could then feed this into awk to grab the next two variables; however, the first method of variable insertion creates an error and the second gives me all the variables.
I would like to loop over each line, as I wish to use these variables to ssh to the host and grab the file.
#!/bin/bash
while read -r FILE
do
echo ${FILE}
host=`awk 'BEGIN { FS = "_" } ; { print $1 }'<<<<"$FILE"`
echo ${host}
path=`awk -v var="${FILE}" 'BEGIN { FS = "_" } ; { print $2 }'`
echo ${path}
done <zips_not_received.csv
Expected Result
foop07_bar2_20190423152612.zip
foop07
bar2
foop07_bar1_20190423153115.zip
foop07
bar1
Actual Result
foop07_bar2_20190423152612.zip
/ : No such file or directoryfoop07_bar2_20190423152612.zip
bar2 bar1 bar2 bar1
You can do this with bash alone, without using any external tool.
while read -r file; do
[[ $file =~ (.*)_(.*)_.*\.zip ]] || { echo "invalid file name"; exit 1; }
host="${BASH_REMATCH[1]}"
path="${BASH_REMATCH[2]}"
echo "$file"
echo "$host"
echo "$path"
done < zips_not_received.csv
Typical...
Managed to work out a solution after posting...
#!/bin/bash
while read -r FILE
do
echo ${FILE}
host=`echo "$FILE" | awk -F"_" '{print $1}'`
echo $host
path=`echo "$FILE" | awk -F"_" '{print $2}'`
echo ${path}
done <zips_not_received.csv
Not sure about the elegance or correctness, as I am using echo to create the variables... but I have it working.
Assuming there are no spaces or underscores in your file name that are part of the host or path:
Just split the line beforehand with sed, awk, ... using the default space separator (or use _ as the field separator in the read instead). I added removal of empty lines as a basic safety check, given your sample.
sed 's/_/ /g;/^[[:blank:]]*$/d' zips_not_received.csv \
| while read host path Ignored
do
echo "${host}"
echo "${path}"
done
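For the sample file this should print something like the following (my expected output, untested):
foop07
bar2
foop07
bar1
foop08
bar2
foop08
bar1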
Please find below a function (keyvalue.sh) that parses a configuration file of key-value pairs and returns the value for the key passed as an argument.
It works fine if the value doesn't contain an = (equals) operator, but if the value does contain an =, it returns an incorrect value.
function getValueForKey(){
while read -r line
do
#echo $line
key=`echo $line | cut -d = -f1`
value=`echo $line | cut -d = -f2`
if [ "$2" == "$key" ]; then
echo $value
fi;
done < "$1"
}
Please find below a sample key-value configuration file (keys.txt):
Scala_Url="http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz"
Zookeeper_Url="http://www-eu.apache.org/dist/zookeeper/stable/zookeeper-3.4.10.tar.gz"
Eclipse_Url="http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/neon/3/eclipse-jee-neon-3-win32-x86_64.zip&mirror_id=1135"
Also, find below a sample execution:
$ls
keys.txt keyvalue.sh
$cat keys.txt
Scala_Url="http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz"
Zookeeper_Url="http://www-eu.apache.org/dist/zookeeper/stable/zookeeper-3.4.10.tar.gz"
Eclipse_Url="http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/neon/3/eclipse-jee-neon-3-win32-x86_64.zip&mirror_id=1135"
$. keyvalue.sh
$getValueForKey keys.txt "Scala_Url"
"http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz"
$getValueForKey keys.txt "Zookeeper_Url"
"http://www-eu.apache.org/dist/zookeeper/stable/zookeeper-3.4.10.tar.gz"
$getValueForKey keys.txt "Eclipse_Url"
"http://www.eclipse.org/downloads/download.php?file
You shouldn't use cut at all for this:
getValueForKey(){
while IFS== read -r key value;
do
if [ "$2" = "$key" ]; then
echo "$value"
fi;
done < "$1"
}
read will split the line on the input separator =, and if there are more fields than named variables, it assigns all of the remaining line to the final variable named (in this case, value).
But really you should change your format. At the very least, sort the input and use look to find the values.
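For instance, once keys.txt is sorted, look does a fast binary-search prefix lookup (a sketch; the sorted copy and filenames here are my own):
sort keys.txt -o keys.sorted.txt
look Scala_Url= keys.sorted.txt | cut -d '=' -f 2-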
William Pursell's helpful answer is an effective pure shell solution, but such solutions are inevitably slow, which is why William recommends a key-sorted configuration file combined with look.
An alternative that doesn't require sorting is to use awk:
getValueForKey() {
awk -F= -v key="$2" '$1 == key { sub(/^[^=]+=/, ""); print }' "$1"
}
-F= splits each line into fields by all occurrences of =
We can still use $1, the 1st field (key field), to compare it to the key value of interest.
To output the corresponding value, however, a string substitution is used to ensure that the value is output as-is, even if it contains = instances:
sub(/^[^=]+=/, "") replaces (sub()) everything from the start of the line (^) up to the first = instance ([^=]+ matches a nonempty sequence (+) of characters other than (^) = followed by =) with the empty string, leaving just the value.
Sample call:
$ getValueForKey keys.txt 'Eclipse_Url'
"http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/neon/3/eclipse-jee-neon-3-win32-x86_64.zip&mirror_id=1135"
I have a file that contains 10 lines with this sort of content:
aaaa,bbb,132,a.g.n.
I want to walk through every line, character by character, and put the data that appears before each ',' into an output file.
if [ $# -eq 2 ] && [ -f $1 ]
then
echo "Read nr of fields to be saved or nr of commas."
read n
nrLines=$(wc -l < $1)
while $nrLines!="1" read -r line || [[ -n "$line" ]]; do
do
for (( i=1; i<=$n; ++i ))
do
while [ read -r -n1 temp ]
do
if [ temp != "," ]
then
echo $temp > $(result$i)
else
fi
done
paste -d"\n" $2 $(result$i)
done
nrLines=$($nrLines-1)
done
else
echo "File not found!"
fi
}
In parameter $2 I have an empty file in which I will store the data from file $1 after I extract it without the " , " and add a couple of comments.
Example:
My input_file contains:
a.b.c.d,aabb,comp,dddd
My output_file is empty.
I call my script: ./script.sh input_file output_file
After execution the output_file contains:
First line info: a.b.c.d
Second line info: aabb
Third line info: comp
(yes, without the 4th line info)
You can do what you want very simply with parameter-expansion and substring-removal using bash alone. For example, take an example file:
$ cat dat/10lines.txt
aaaa,bbb,132,a.g.n.
aaaa,bbb,133,a.g.n.
aaaa,bbb,134,a.g.n.
aaaa,bbb,135,a.g.n.
aaaa,bbb,136,a.g.n.
aaaa,bbb,137,a.g.n.
aaaa,bbb,138,a.g.n.
aaaa,bbb,139,a.g.n.
aaaa,bbb,140,a.g.n.
aaaa,bbb,141,a.g.n.
A simple one-liner using native bash string handling could simply be the following and give the following results:
$ while read -r line; do echo ${line%,*}; done <dat/10lines.txt
aaaa,bbb,132
aaaa,bbb,133
aaaa,bbb,134
aaaa,bbb,135
aaaa,bbb,136
aaaa,bbb,137
aaaa,bbb,138
aaaa,bbb,139
aaaa,bbb,140
aaaa,bbb,141
Parameter expansion with substring removal works as follows:
var=aaaa,bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the first ',' is:
${var#*,} # bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the last ',' is:
${var##*,} # a.g.n.
Beginning at the right and removing up to, and including, the first ',' is:
${var%,*} # aaaa,bbb,132
Beginning at the right and removing up to, and including, the last ',' is:
${var%%,*} # aaaa
Note: the text to remove above is represented with the wildcard '*', but wildcard use is not required. It can be any allowable text. For example, to remove the trailing ,a.g.n. only where the preceding number is 136, you can do the following:
${var%,136*},136 # aaaa,bbb,136 (the expansion leaves lines without ,136 unchanged)
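Building on the expansions above, you can peel the fields off one at a time (a small sketch using the same sample value):
var=aaaa,bbb,132,a.g.n.
first=${var%%,*}     # aaaa
rest=${var#*,}       # bbb,132,a.g.n.
second=${rest%%,*}   # bbb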
To print the 2016th line from a file named file.txt you have to run a command like this:
sed -n '2016p' < file.txt
More-
sed -n '2p' < file.txt
will print 2nd line
sed -n '2011p' < file.txt
2011th line
sed -n '10,33p' < file.txt
line 10 up to line 33
sed -n '1p;3p' < file.txt
the 1st and 3rd lines
and so on...
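If you prefer awk, the equivalents are just as short (a quick sketch):
awk 'NR==2016' file.txt          # 2016th line
awk 'NR>=10 && NR<=33' file.txt  # lines 10 through 33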
For more detail, please have a look at this tutorial and this answer.
In native bash the following should do what you want, assuming you replace the contents of your script.sh with the below:
#!/bin/bash
IN_FILE=${1}
OUT_FILE=${2}
IFS=\,
while read line; do
set -- ${line}
for ((i=1; i<=${#}; i++)); do
((${i}==4)) && continue
((n+=1))
printf '%s\n' "Line ${n} info: ${!i}"
done
done < ${IN_FILE} > ${OUT_FILE}
This will print every field of each line except the 4th, each on a new line in the output file (I assume this is your requirement, as per your comment?).
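For the sample input line a.b.c.d,aabb,comp,dddd, the output file should then contain (my expectation, untested):
Line 1 info: a.b.c.d
Line 2 info: aabb
Line 3 info: comp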
[wspace#wspace sandbox]$ awk -F"," 'BEGIN{OFS="\n"}{for(i=1; i<=NF-1; i++){print "line Info: "$i}}' data.txt
line Info: a.b.c.d
line Info: aabb
line Info: comp
This little snippet can ignore the last field.
updated:
#!/usr/bin/env bash
if [ ! -f "$1" -o $# -ne 2 ];then
echo "Usage: $(basename $0) input_file out_file"
exit 127
fi
input_file=$1
output_file=$2
: > $output_file
if [ "$(wc -l < $1)" -ne 0 ];then
while true
do
read -r -n1 char
if [ "$char" == "" ];then
break
elif [ "$char" != "," ];then
temp=$temp$char
else
echo "line info: $temp" >> $output_file
temp=""
fi
done < $input_file
else
echo "file $1 is empty"
fi
Maybe this is what you want
Did you try
sed "s|,|\n|g" $1 | head -n -1 > $2
I assume that only the last word does not have a comma to its right.
Try this (tested with your sample line):
#!/bin/bash
# script.sh
echo "Number of fields to save ?"
read nf
while IFS=$',' read -r -a arr; do
newarr=${arr[@]:0:${nf}}
done < "$1"
for i in ${newarr[@]};do
printf "%s\n" $i
done > "$2"
Execute script with :
$ ./script.sh inputfile outputfile
Number of fields ?
3
$ cat outputfile
a.b.c.d
aabb
comp
All the comma-separated words are stored in the array $arr.
A temporary array, $newarr, keeps only the first $nf elements ($nf comes from the read command).
It loops over the new array and prints the result to $2, the output file.
I have a log file which I need to parse into multiple files.
############################################################################################
6610
############################################################################################
GTI02152 I gtirreqi 20130906 000034 TC SJ014825 GTT_E_REQ_INF テーブル挿入件数 16件
############################################################################################
Z5000
############################################################################################
GTP10000 I NIPS gtgZ5000 20130906 000054 TC SJ014825 シェル開始
############################################################################################
I need to create files like 6610.txt, which will have all the values under 6610 (like GTI02152...), and likewise for Z5000 (GTP10000). Any help will be greatly appreciated!
The script below will help you get the information. You can modify it to create the data you require.
#!/bin/sh
# join every 5 input lines with commas, then keep fields 2 and 4
paste -d, - - - - - < data.dat | cut -d ',' -f 2,4 > file.out
while read p; do
fileName=`echo $p | cut -d ',' -f 1`
echo $fileName
dataInfo=`echo $p | cut -d ',' -f 2`
echo $dataInfo
done< file.out
Here's an awk styled answer:
I put the following into a file named awko and chmod +x it to use it:
#!/usr/bin/awk -f
BEGIN { p = 0 } # look for filename flag - start at zero
/^\#/ { p = !p } # turn it on to find the filename
# either make a filename or write to the last filename based on the flag
$0 !~ /^\#/ {
if( p == 1 ) filename = $1 ".txt"
else print $0 > filename
}
Running awko data.txt produced two files, 6610.txt and Z5000.txt from your example data. It's capable of sending more data lines to the output files as well.
You can do it with Ruby as well:
ruby -e 'File.read(ARGV.shift).scan(/^[^#].*?(?=^[#])/m).each{|e| name = e.split[0]; File.write("#{name}.txt", e)}' file
Example output:
> for A in *.txt; do echo "---- $A ----"; cat "$A"; done
---- 6610.txt ----
6610
---- GTI02152.txt ----
GTI02152 I gtirreqi 20130906 000034 TC SJ014825 GTT_E_REQ_INF テーブル挿入件数 16件
---- GTP10000.txt ----
GTP10000 I NIPS gtgZ5000 20130906 000054 TC SJ014825 シェル開始
---- Z5000.txt ----
Z5000
This script makes the following assumptions:
Each record is separated by an empty line
#### lines are purely comment/space filler and can be ignored during parsing
The first line of each record (ignoring ####) contains the basename for the filename
The name of the logfile is passed as the first argument to this script.
#!/bin/bash
# write records to this temporary file, rename later
tempfile=$(mktemp)
while read -r line; do
if [[ $line == "" ]] ; then
# line is empty - separator - save existing record and start a new one
[[ -n $filename ]] && mv "$tempfile" "$filename"
filename=""
tempfile=$(mktemp)
else
# output non-empty line to record file
echo "$line" >> "$tempfile"
if [[ $filename == "" ]] ; then
# we haven't yet figured out the filename for this record
if [[ $line =~ ^#+$ ]] ; then
# ignore #### comment lines
:
else
# 1st non-comment line in record is filename
filename=${line}.txt
fi
fi
fi
done < $1
# end of input file might not have explicit empty line separator -
# make sure last record file is moved correctly
if [[ -e $tempfile && -n $filename ]] ; then
mv "$tempfile" "$filename"
fi
I'm writing a bash script to modify a config file which contains a bunch of key/value pairs. How can I read the key and find the value and possibly modify it?
A wild stab in the dark for modifying a single value:
sed -c -i "s/\($TARGET_KEY *= *\).*/\1$REPLACEMENT_VALUE/" $CONFIG_FILE
assuming that the target key and replacement value don't contain any special regex characters, and that your key-value separator is "=". Note, the -c option is system dependent and you may need to omit it for sed to execute.
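For example (a sketch with made-up key, value and file names, and without the system-dependent -c):
TARGET_KEY=timeout
REPLACEMENT_VALUE=30
CONFIG_FILE=settings.conf
sed -i "s/\($TARGET_KEY *= *\).*/\1$REPLACEMENT_VALUE/" "$CONFIG_FILE"
# turns a line like "timeout = 15" into "timeout = 30"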
For other tips on how to do similar replacements (e.g., when the REPLACEMENT_VALUE has '/' characters in it), there are some great examples here.
Hope this helps someone. I created a self contained script, which required config processing of sorts.
#!/bin/bash
CONFIG="/tmp/test.cfg"
# Use this to set the new config value, needs 2 parameters.
# You could check that $1 and $2 is set, but I am lazy
function set_config(){
sudo sed -i "s/^\($1\s*=\s*\).*\$/\1$2/" $CONFIG
}
# INITIALIZE CONFIG IF IT'S MISSING
if [ ! -e "${CONFIG}" ] ; then
# Set default variable value
sudo touch $CONFIG
echo "myname=\"Test\"" | sudo tee --append $CONFIG
fi
# LOAD THE CONFIG FILE
source $CONFIG
echo "${myname}" # SHOULD OUTPUT DEFAULT (test) ON FIRST RUN
myname="Erl"
echo "${myname}" # SHOULD OUTPUT Erl
set_config myname $myname # SETS THE NEW VALUE
Assuming that you have a file of key=value pairs, potentially with spaces around the =, you can delete, modify in-place or append key-value pairs at will using awk even if the keys or values contain special regex sequences:
# Using awk to delete, modify or append keys
# In case of an error the original configuration file is left intact
# Also leaves a timestamped backup copy (omit the cp -p if none is required)
CONFIG_FILE=file.conf
cp -p "$CONFIG_FILE" "$CONFIG_FILE.orig.`date \"+%Y%m%d_%H%M%S\"`" &&
awk -F '[ \t]*=[ \t]*' '$1=="keytodelete" { next } $1=="keytomodify" { print "keytomodify=newvalue" ; next } { print } END { print "keytoappend=value" }' "$CONFIG_FILE" >"$CONFIG_FILE~" &&
mv "$CONFIG_FILE~" "$CONFIG_FILE" ||
echo "an error has occurred (permissions? disk space?)"
sed "/^$old/s/\(.[^=]*\)\([ \t]*=[ \t]*\)\(.[^=]*\)/\1\2$replace/" configfile
So I can not take any credit for this as it is a combination of stackoverflow answers and help from irc.freenode.net #bash channel but here are bash functions now to both set and read config file values:
# https://stackoverflow.com/a/2464883
# Usage: config_set filename key value
function config_set() {
local file=$1
local key=$2
local val=${@:3}
ensureConfigFileExists "${file}"
# create key if not exists
if ! grep -q "^${key}=" ${file}; then
# insert a newline just in case the file does not end with one
printf "\n${key}=" >> ${file}
fi
chc "$file" "$key" "$val"
}
function ensureConfigFileExists() {
if [ ! -e "$1" ] ; then
if [ -e "$1.example" ]; then
cp "$1.example" "$1";
else
touch "$1"
fi
fi
}
# thanks to ixz in #bash on irc.freenode.net
function chc() { gawk -v OFS== -v FS== -e 'BEGIN { ARGC = 1 } $1 == ARGV[2] { print ARGV[4] ? ARGV[4] : $1, ARGV[3]; next } 1' "$@" <"$1" >"$1.1"; mv "$1"{.1,}; }
# https://unix.stackexchange.com/a/331965/312709
# Usage: local myvar="$(config_get myvar)"
function config_get() {
val="$(config_read_file ${CONFIG_FILE} "${1}")";
if [ "${val}" = "__UNDEFINED__" ]; then
val="$(config_read_file ${CONFIG_FILE}.example "${1}")";
fi
printf -- "%s" "${val}";
}
function config_read_file() {
(grep -E "^${2}=" -m 1 "${1}" 2>/dev/null || echo "VAR=__UNDEFINED__") | head -n 1 | cut -d '=' -f 2-;
}
At first I was using the accepted answer's sed solution: https://stackoverflow.com/a/2464883/2683059
However, if the value has a / character, it breaks.
In general it's easy to extract the info with grep and cut:
cat "$FILE" | grep "^${KEY}${DELIMITER}" | cut -f2- -d"$DELIMITER"
To update, you could do something like this:
mv "$FILE" "$FILE.bak"
cat "$FILE.bak" | grep -v "^${KEY}${DELIMITER}" > "$FILE"
echo "${KEY}${DELIMITER}${NEWVALUE}" >> "$FILE"
This would not maintain the order of the key-value pairs, obviously. Add error checking to make sure you don't lose your data.
I have done this:
new_port=$1
sed "s/^port=.*/port=$new_port/" "$CONFIG_FILE" > /yourPath/temp.x
mv /yourPath/temp.x "$CONFIG_FILE"
This will change the port=... line to port=8888 in your config file if you pass 8888 as $1, for example.
Suppose your config file is in the format below:
CONFIG_NUM=4
CONFIG_NUM2=5
CONFIG_DEBUG=n
In your bash script, you can use:
CONFIG_FILE=your_config_file
. $CONFIG_FILE
if [ $CONFIG_DEBUG == "y" ]; then
......
else
......
fi
$CONFIG_NUM, $CONFIG_NUM2, and $CONFIG_DEBUG are what you need.
After you read the values, writing them back is easy:
echo "CONFIG_DEBUG=y" >> $CONFIG_FILE