I need to add double quotes to the csv file. My sample data is like this..
378478,COMPLETED,Tracfone,,,"2020/03/29 09:39:22",,2787,,356074101197544,89148000005748235454,75176540
378328,COMPLETED,"Total Wireless","Unlimited Talk, Text, & Data (First 25GB High Speed, then unlimited 2GB)",50,"2020/03/29 06:10:01",200890899011202395,0899,0279395,356058102052972,89148000005117597971,67756296
I have tried some code available online with awk and sed, but it produces the result below. Error - **The first digit of each number is being trimmed, e.g. '378478' is displayed as only '78478'.
It is also adding double quotes around the already existing double quotes!** Nothing seems to work perfectly. Please guide me!
"78478","COMPLETED","Tracfone","","",""2020/03/29 09:39:22"","","2787","","356074101197544","89148000005748235454","75176540"
"78328","COMPLETED",""Total Wireless"",""Unlimited Talk"," Text"," & Data (First 25GB High Speed"," then unlimited 2GB)"","50",""2020/03/29 06:10:01"","200890899011202395","0899","0279395","356058102052972","89148000005117597971","67756296"
"78329","COMPLETED",""Cricket Wireless"",""Unlimited Talk"," Text"," & 4G LTE Data w/ 15GB Hotspot"","60",""2020/03/29""
This is the code I am using:
awk -F"'?,'?" -v OFS='","' '{$1=$1; gsub(/^.|$/,"\"")} 1' file # or
sed -E 's/([^,]*) , (.*)/"\1" , "\2"/' file
My full code is below. My intention was to first convert all .xlsx files to .csv, then add double quotes to the same csv and save it back to the same file. I know the $file.csv part is wrong, hence I need some help.
find "$Src_Dir" -type f -iname "*.xlsx" -print>path/temp
cat path/temp | while IFS="" read -r -d $'\0' file;
do
echo $file
ssconvert "${file}" --export-type=Gnumeric_stf:stf_csv
awk -F"'?,'?" -v OFS='","' '{$1=$1; gsub(/^.|$/,"\"")} 1' $file > $file.csv
done
If you want to handle anything other than the simplest CSV files, you should probably move away from sed and awk. There are much better tools available.
For example, if you sudo apt install csvtool (or equivalent) on your favourite distro, you can use its call-per-line functionality to process each line in the input file. See the following script for an example:
#!/bin/bash
function quotify {
# Start empty line, process every field.
line=""
while [[ $# -ne 0 ]] ; do
# Append comma for all but first field, then quoted field.
[[ -n "${line}" ]] && line="${line},"
line="${line}\"$1\""
shift
done
# Output the fully quoted line.
echo "${line}"
}
# Needed to call functions. Also, ensure link: /bin/sh -> /bin/bash.
export -f quotify
# Pretty-print input and output.
echo "Input file:"
sed 's/^/ /' inputFile.csv
echo "Output file:"
csvtool call quotify inputFile.csv | sed 's/^/ /'
Note the quotify function which is called for each line in the CSV file, with the arguments set to each field within that line (sans quotes, whether the original fields had quotes or not).
It basically constructs a string of all the fields in the line, with quotes around them, then writes that to standard output, as shown below in the output from that script:
Input file:
378478,COMPLETED,Tracfone,,,"2020/03/29 09:39:22",,2787,,356074101197544,89148000005748235454,75176540
378328,COMPLETED,"Total Wireless","Unlimited Talk, Text, & Data (First 25GB High Speed, then unlimited 2GB)",50,"2020/03/29"
Output file:
"378478","COMPLETED","Tracfone","","","2020/03/29 09:39:22","","2787","","356074101197544","89148000005748235454","75176540"
"378328","COMPLETED","Total Wireless","Unlimited Talk, Text, & Data (First 25GB High Speed, then unlimited 2GB)","50","2020/03/29"
Even though using a separate tool is probably the easiest way to go, if you absolutely cannot install other packages, then you're going to have to code up something in a package you already have. The following bash script is a good place to start, as it uses no other tools to achieve its goal.
At the moment, it's tied to a very specific set of rules, as follows:
White space matters. Anything between the commas is considered part of the field. This especially matters when detecting a quoted field: it must have the quote as the first character; no abc, "d,e,f",ghi stuff, since the "d,e,f" won't be handled correctly.
Quoted fields are allowed to contain commas, and "" sequences within them are turned into ".
It's probably not a good idea to supply ill-formatted CSV files :-)
But, with that in mind, here we go. I'll offer a brief textual description of each section but hopefully the comments in the code will be enough to figure out what's going on.
First, a function for finding the position of some string within another string, useful for working out the field bounds:
function findPos {
haystack="$1"
needle="$2"
# Remove everything past the needle.
prefix="${haystack%%${needle}*}"
# If nothing was removed, it wasn't found, so supply massive number.
# Otherwise, it was found at the length of the string with removed stuff.
position=999999
[[ ${#prefix} -ne ${#haystack} ]] && position=${#prefix}
echo ${position}
}
Then we can use that in the function that works out the length of the next field. This basically just looks for the next comma for unquoted fields, and does special handling for quoted fields by building up the field from segments (it has to handle quotes within quotes and commas):
function getNextFieldLen {
line="$1"
# Empty line means all work done.
[[ -z "${line}" ]] && echo -1 && return
# Handle unquoted first, this is easy.
[[ "${line:0:1}" != '"' ]] && { echo $(findPos "${line}" ","); return; }
# Now handle quoted. Loop over all segments where a segment is defined as
# the text up to the next <"">, assuming it's before the next <",>.
field=""
nextQuoteComma=$(findPos "${line}" '",')
nextDoubleQuote=$(findPos "${line}" '""')
while [[ ${nextDoubleQuote} -lt ${nextQuoteComma} ]]; do
# Append segment to the field and go back for next segment.
field="${field}${line:0:${nextDoubleQuote}}\"\""
line="${line:${nextDoubleQuote}}"
line="${line:2}"
nextQuoteComma=$(findPos "${line}" '",')
nextDoubleQuote=$(findPos "${line}" '""')
done
# Add final segment (up to the comma) and output entire field.
field="${field}${line:0:${nextQuoteComma}}\""
echo "${#field}"
}
Finally, there's the top-level function which will quotify whatever comes in via standard input:
function quotifyStdIn {
# Process file line by line.
while read -r line; do
# Start with empty output line and non-comma separator.
outLine="" ; sep=""
# Place terminator to make processing easier, start field loop.
line="${line},"
fieldLen=$(getNextFieldLen "${line}")
while [[ ${fieldLen} -ge 0 ]]; do
# Get field and quotify if needed, adjust line (remove field and comma).
field="${line:0:${fieldLen}}"
[[ "${field:0:1}" = '"' ]] || field="\"${field}\""
line="${line:$((fieldLen+1))}"
#line="${line:${fieldLen}}"
#line="${line:1}"
# Append to output line and prepare for next field.
outLine="${outLine}${sep}${field}"; sep=","
fieldLen=$(getNextFieldLen "${line}")
done
# Output built line.
echo "${outLine}"
done
}
And, on the off-chance you want to read directly from a file (though providing a file name that's empty or "-" will use standard input so you can probably just use the file-based function for everything):
function quotifyFile {
file="$1"
# Empty file or "-" means standard input, otherwise take input from real file.
[[ ${#file} -eq 0 ]] && { quotifyStdIn; return; }
[[ "${file}" = "-" ]] && { quotifyStdIn; return; }
quotifyStdIn < "${file}"
}
And, finally, because every program that's not a "Hello, world" one deserves some form of test harness, this is what you can use to test the various capabilities:
(
echo 'paxdiablo,was here'
echo 'and,"then, strangely,",he,was,not'
echo '50,"My name is ""Pax"", and yours is ""Bob""",42'
echo '17,"""Love"" is grand",19'
) > harness.csv
echo "Before:"
sed "s/^/ /" harness.csv
echo "After:"
quotifyFile harness.csv | sed "s/^/ /"
rm -rf harness.csv
And, since a test harness is of little use unless you run the tests, here's the results of the first run:
Before:
paxdiablo,was here
and,"then, strangely,",he,was,not
50,"My name is ""Pax"", and yours is ""Bob""",42
17,"""Love"" is grand",19
After:
"paxdiablo","was here"
"and","then, strangely,","he","was","not"
"50","My name is ""Pax"", and yours is ""Bob""","42"
"17","""Love"" is grand","19"
Hopefully, that will be enough to get you going in the absence of being able to install packages. Of course, if one of the packages you can't install is bash itself, then you have problems that I can't help you with :-)
Your starting CSV is not a good CSV: the 2 rows have a different number of columns
+--------+-----------+----------------+--------------------------------------------------------------------------+----+---------------------+---+------+---+-----------------+----------------------+----------+
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
+--------+-----------+----------------+--------------------------------------------------------------------------+----+---------------------+---+------+---+-----------------+----------------------+----------+
| 378478 | COMPLETED | Tracfone | - | - | 2020/03/29 09:39:22 | - | 2787 | - | 356074101197544 | 89148000005748235454 | 75176540 |
| 378328 | COMPLETED | Total Wireless | Unlimited Talk, Text, & Data (First 25GB High Speed, then unlimited 2GB) | 50 | 2020/03/29 | - | - | - | - | - | - |
+--------+-----------+----------------+--------------------------------------------------------------------------+----+---------------------+---+------+---+-----------------+----------------------+----------+
Using Miller (https://github.com/johnkerl/miller) you could run
mlr --csv --quote-all -N unsparsify input >output
to have
"378478","COMPLETED","Tracfone","","","2020/03/29 09:39:22","","2787","","356074101197544","89148000005748235454","75176540"
"378328","COMPLETED","Total Wireless","Unlimited Talk, Text, & Data (First 25GB High Speed, then unlimited 2GB)","50","2020/03/29","","","","","",""
You can use it by downloading the executable from https://github.com/johnkerl/miller/releases/tag/v5.7.0
I have a parameters.ini file, such as:
[parameters.ini]
database_user = user
database_version = 20110611142248
I want to read in and use the database version specified in the parameters.ini file from within a bash shell script so I can process it.
#!/bin/sh
# Need to get database version from parameters.ini file to use in script
php app/console doctrine:migrations:migrate $DATABASE_VERSION
How would I do this?
How about grepping for that line then using awk
version=$(awk -F "=" '/database_version/ {print $2}' parameters.ini)
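For example, plugged back into the script from the question (a minimal sketch):
#!/bin/sh
DATABASE_VERSION=$(awk -F "=" '/database_version/ {print $2}' parameters.ini)
php app/console doctrine:migrations:migrate "$DATABASE_VERSION"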
You can use bash native parser to interpret ini values, by:
$ source <(grep = file.ini)
Sample file:
[section-a]
var1=value1
var2=value2
IPS=( "1.2.3.4" "1.2.3.5" )
To access variables, you simply print them: echo $var1. You may also use arrays as shown above (echo ${IPS[@]}).
If you only want a single value just grep for it:
source <(grep var1 file.ini)
For the demo, check this recording at asciinema.
It is simple, as you don't need any external library to parse the data, but it comes with some disadvantages. For example:
If you have spaces around = (between variable name and value), then you have to trim the spaces first, e.g.
$ source <(grep = file.ini | sed 's/ *= */=/g')
Or if you don't care about the spaces (including in the middle), use:
$ source <(grep = file.ini | tr -d ' ')
To support ; comments, replace them with #:
$ sed "s/;/#/g" foo.ini | source /dev/stdin
Sections aren't supported (e.g. if you have [section-name], you have to filter it out as shown above, e.g. with grep =); the same goes for other unexpected content.
If you need to read a specific value under a specific section, use grep -A, sed, awk or ex.
E.g.
source <(grep = <(grep -A5 '\[section-b\]' file.ini))
Note: Where -A5 is the number of rows to read in the section. Replace source with cat to debug.
If you've got any parsing errors, ignore them by adding: 2>/dev/null
See also:
How to parse and convert ini file into bash array variables? at serverfault SE
Are there any tools for modifying INI style files from shell script
Sed one-liner, that takes sections into account. Example file:
[section1]
param1=123
param2=345
param3=678
[section2]
param1=abc
param2=def
param3=ghi
[section3]
param1=000
param2=111
param3=222
Say you want param2 from section2. Run the following:
sed -nr "/^\[section2\]/ { :l /^param2[ ]*=/ { s/[^=]*=[ ]*//; p; q;}; n; b l;}" ./file.ini
will give you
def
Bash does not provide a parser for these files. Obviously you can use an awk command or a couple of sed calls, but if you are a Bash purist and don't want to use anything else, then you can try the following obscure code:
#!/usr/bin/env bash
cfg_parser ()
{
ini="$(<$1)" # read the file
ini="${ini//[/\[}" # escape [
ini="${ini//]/\]}" # escape ]
IFS=$'\n' && ini=( ${ini} ) # convert to line-array
ini=( ${ini[*]//;*/} ) # remove comments with ;
ini=( ${ini[*]/\ =/=} ) # remove tabs before =
ini=( ${ini[*]/=\ /=} ) # remove tabs after =
ini=( ${ini[*]/\ =\ /=} ) # remove anything with a space around =
ini=( ${ini[*]/#\\[/\}$'\n'cfg.section.} ) # set section prefix
ini=( ${ini[*]/%\\]/ \(} ) # convert text2function (1)
ini=( ${ini[*]/=/=\( } ) # convert item to array
ini=( ${ini[*]/%/ \)} ) # close array parenthesis
ini=( ${ini[*]/%\\ \)/ \\} ) # the multiline trick
ini=( ${ini[*]/%\( \)/\(\) \{} ) # convert text2function (2)
ini=( ${ini[*]/%\} \)/\}} ) # remove extra parenthesis
ini[0]="" # remove first element
ini[${#ini[*]} + 1]='}' # add the last brace
eval "$(echo "${ini[*]}")" # eval the result
}
cfg_writer ()
{
IFS=' '$'\n'
fun="$(declare -F)"
fun="${fun//declare -f/}"
for f in $fun; do
[ "${f#cfg.section}" == "${f}" ] && continue
item="$(declare -f ${f})"
item="${item##*\{}"
item="${item%\}}"
item="${item//=*;/}"
vars="${item//=*/}"
eval $f
echo "[${f#cfg.section.}]"
for var in $vars; do
echo $var=\"${!var}\"
done
done
}
Usage:
# parse the config file called 'myfile.ini', with the following
# contents::
# [sec2]
# var2='something'
cfg_parser 'myfile.ini'
# enable section called 'sec2' (in the file [sec2]) for reading
cfg.section.sec2
# read the content of the variable called 'var2' (in the file
# var2=XXX). If your var2 is an array, then you can use
# ${var[index]}
echo "$var2"
Bash ini-parser can be found at The Old School DevOps blog site.
Just include your .ini file into bash body:
File example.ini:
DBNAME=test
DBUSER=scott
DBPASSWORD=tiger
File example.sh
#!/bin/bash
#Including .ini file
. example.ini
#Test
echo "${DBNAME} ${DBUSER} ${DBPASSWORD}"
All of the solutions I've seen so far also hit on commented-out lines. This one doesn't, as long as the comment marker is ;:
awk -F '=' '{if (! ($0 ~ /^;/) && $0 ~ /database_version/) print $2}' file.ini
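As with the other awk-based answers, the result can be captured into a shell variable (a minimal sketch):
version=$(awk -F '=' '{if (! ($0 ~ /^;/) && $0 ~ /database_version/) print $2}' file.ini)
echo "$version"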
You may use crudini tool to get ini values, e.g.:
DATABASE_VERSION=$(crudini --get parameters.ini '' database_version)
One of many possible solutions:
dbver=$(sed -n 's/.*database_version *= *\([^ ]*.*\)/\1/p' < parameters.ini)
echo $dbver
Display the value of my_key in an ini-style my_file:
sed -n -e 's/^\s*my_key\s*=\s*//p' my_file
-n -- do not print anything by default
-e -- execute the expression
s/PATTERN//p -- display anything following this pattern
In the pattern:
^ -- pattern begins at the beginning of the line
\s -- whitespace character
* -- zero or many (whitespace characters)
Example:
$ cat my_file
# Example INI file
something = foo
my_key = bar
not_my_key = baz
my_key_2 = bing
$ sed -n -e 's/^\s*my_key\s*=\s*//p' my_file
bar
So:
Find a pattern where the line begins with zero or many whitespace characters,
followed by the string my_key, followed by zero or many whitespace characters, an equal sign, then zero or many whitespace characters again. Display the rest of the content on that line following that pattern.
Similar to the other Python answers, you can do this using the -c flag to execute a sequence of Python statements given on the command line:
$ python3 -c "import configparser; c = configparser.ConfigParser(); c.read('parameters.ini'); print(c['parameters.ini']['database_version'])"
20110611142248
This has the advantage of requiring only the Python standard library and the advantage of not writing a separate script file.
Or use a here document for better readability, thusly:
#!/bin/bash
python3 << EOI
import configparser
c = configparser.ConfigParser()
c.read('params.txt')
print(c['chassis']['serialNumber'])
EOI
serialNumber=$(python3 << EOI
import configparser
c = configparser.ConfigParser()
c.read('params.txt')
print(c['chassis']['serialNumber'])
EOI
)
echo $serialNumber
sed
You can use sed to parse the ini configuration file, especially when you have section names like:
# last modified 1 April 2001 by John Doe
[owner]
name=John Doe
organization=Acme Widgets Inc.
[database]
# use IP address in case network name resolution is not working
server=192.0.2.62
port=143
file=payroll.dat
so you can use the following sed script to parse the above data:
# Configuration bindings found outside any section are given to
# to the default section.
1 {
x
s/^/default/
x
}
# Lines starting with a #-character are comments.
/#/n
# Sections are unpacked and stored in the hold space.
/\[/ {
s/\[\(.*\)\]/\1/
x
b
}
# Bindings are unpacked and decorated with the section
# they belong to, before being printed.
/=/ {
s/^[[:space:]]*//
s/[[:space:]]*=[[:space:]]*/|/
G
s/\(.*\)\n\(.*\)/\2|\1/
p
}
this will convert the ini data into this flat format:
owner|name|John Doe
owner|organization|Acme Widgets Inc.
database|server|192.0.2.62
database|port|143
database|file|payroll.dat
so it'll be easier to parse using sed, awk or read by having section names in every line.
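For example, if the sed script above is saved as parse_ini.sed (the file names here are just illustrative), it can be run with sed -n -f, and the flat output filtered with awk or cut:
sed -n -f parse_ini.sed config.ini
# e.g. pick out the database port from the flattened output
sed -n -f parse_ini.sed config.ini | awk -F'|' '$1 == "database" && $2 == "port" { print $3 }'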
Credits & source: Configuration files for shell scripts, Michael Grünewald
Alternatively, you can use this project: chilladx/config-parser, a configuration parser using sed.
For people (like me) looking to read INI files from shell scripts (read shell, not bash) - I've knocked up a little helper library which tries to do exactly that:
https://github.com/wallyhall/shini (MIT license, do with it as you please. I've linked it above rather than including it inline, as the code is quite lengthy.)
It's somewhat more "complicated" than the simple sed lines suggested above - but works on a very similar basis.
Function reads in a file line-by-line - looking for section markers ([section]) and key/value declarations (key=value).
Ultimately you get a callback to your own function - section, key and value.
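A rough, self-contained sketch of that callback pattern (this is not shini itself - see the linked repository for the real implementation and API):
# called for every key/value found, with: section, key, value
handle_entry() {
    printf '[%s] %s = %s\n' "$1" "$2" "$3"
}
parse_with_callback() {
    section=""
    while IFS= read -r line; do
        case "$line" in
            \[*\]) section=${line#?}; section=${section%?} ;;   # strip the surrounding [ ]
            *=*)   handle_entry "$section" "${line%%=*}" "${line#*=}" ;;
        esac
    done < "$1"
}
parse_with_callback my_file.ini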
Here is my version, which parses sections and populates a global associative array g_iniProperties with it.
Note that this works only with bash v4.2 and higher.
function parseIniFile() { #accepts the name of the file to parse as argument ($1)
#declare syntax below (-gA) only works with bash 4.2 and higher
unset g_iniProperties
declare -gA g_iniProperties
currentSection=""
while read -r line
do
if [[ $line = [* ]] ; then
if [[ $line = [* ]] ; then
currentSection=$(echo $line | sed -e 's/\r//g' | tr -d "[]")
fi
else
if [[ $line = *=* ]] ; then
cleanLine=$(echo $line | sed -e 's/\r//g')
key=$currentSection.$(echo $cleanLine | awk -F: '{ st = index($0,"=");print substr($0,0,st-1)}')
value=$(echo $cleanLine | awk -F: '{ st = index($0,"=");print substr($0,st+1)}')
g_iniProperties[$key]=$value
fi
fi;
done < $1
}
And here is a sample code using the function above:
parseIniFile "/path/to/myFile.ini"
for key in "${!g_iniProperties[@]}"; do
echo "Found key/value $key = ${g_iniProperties[$key]}"
done
Yet another implementation using awk with a little more flexibility.
function parse_ini() {
cat /dev/stdin | awk -v section="$1" -v key="$2" '
BEGIN {
if (length(key) > 0) { params=2 }
else if (length(section) > 0) { params=1 }
else { params=0 }
}
match($0,/#/) { next }
match($0,/^\[(.+)\]$/){
current=substr($0, RSTART+1, RLENGTH-2)
found=current==section
if (params==0) { print current }
}
match($0,/(.+)=(.+)/) {
if (found) {
if (params==2 && key==$1) { print $3 }
if (params==1) { printf "%s=%s\n",$1,$3 }
}
}'
}
You can call it passing between 0 and 2 params:
cat myfile1.ini myfile2.ini | parse_ini # List section names
cat myfile1.ini myfile2.ini | parse_ini 'my-section' # Prints keys and values from a section
cat myfile1.ini myfile2.ini | parse_ini 'my-section' 'my-key' # Print a single value
complex simplicity
ini file
test.ini
[section1]
name1=value1
name2=value2
[section2]
name1=value_1
name2 = value_2
bash script with read and execute
/bin/parseini
#!/bin/bash
set +a
while read p; do
reSec='^\[(.*)\]$'
#reNV='[ ]*([^ ]*)+[ ]*=(.*)' #Remove only spaces around name
reNV='[ ]*([^ ]*)+[ ]*=[ ]*(.*)' #Remove spaces around name and spaces before value
if [[ $p =~ $reSec ]]; then
section=${BASH_REMATCH[1]}
elif [[ $p =~ $reNV ]]; then
sNm=${section}_${BASH_REMATCH[1]}
sVa=${BASH_REMATCH[2]}
set -a
eval "$(echo "$sNm"=\""$sVa"\")"
set +a
fi
done < $1
then in another script I source the results of the command and can use any variables within
test.sh
#!/bin/bash
source parseini test.ini
echo $section2_name2
finally from command line the output is thus
# ./test.sh
value_2
Some of the answers don't respect comments. Some don't respect sections. Some recognize only one syntax (only ":" or only "="). Some Python answers fail on my machine because of differing capitalization or failing to import the sys module. All are a bit too terse for me.
So I wrote my own, and if you have a modern Python, you can probably call this from your Bash shell. It has the advantage of adhering to some of the common Python coding conventions, and even provides sensible error messages and help. To use it, name it something like myconfig.py (do NOT call it configparser.py or it may try to import itself), make it executable, and call it like
value=$(myconfig.py something.ini sectionname value)
Here's my code for Python 3.5 on Linux:
#!/usr/bin/env python3
# Last Modified: Thu Aug 3 13:58:50 PDT 2017
"""A program that Bash can call to parse an .ini file"""
import sys
import configparser
import argparse
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="A program that Bash can call to parse an .ini file")
parser.add_argument("inifile", help="name of the .ini file")
parser.add_argument("section", help="name of the section in the .ini file")
parser.add_argument("itemname", help="name of the desired value")
args = parser.parse_args()
config = configparser.ConfigParser()
config.read(args.inifile)
print(config.get(args.section, args.itemname))
I wrote a quick and easy python script to include in my bash script.
For example, your ini file is called food.ini
and in the file you can have some sections and some lines:
[FRUIT]
Oranges = 14
Apples = 6
Copy this small 6 line Python script and save it as, say, read_ini.py (don't name it configparser.py, or it will shadow the standard library module it imports)
#!/usr/bin/python
import configparser
import sys
config = configparser.ConfigParser()
config.read(sys.argv[1])
print(config.get(sys.argv[2], sys.argv[3]))
Now, in your bash script you could do this for example.
OrangeQty=$(python read_ini.py food.ini FRUIT Oranges)
or
ApplesQty=$(python read_ini.py food.ini FRUIT Apples)
echo $ApplesQty
This presupposes:
you have Python installed
you have the configparser library installed (this should come with a std python installation)
Hope it helps
:¬)
The explanation of the sed one-liner answer above.
[section1]
param1=123
param2=345
param3=678
[section2]
param1=abc
param2=def
param3=ghi
[section3]
param1=000
param2=111
param3=222
sed -nr "/^\[section2\]/ { :l /^\s*[^#].*/ p; n; /^\[/ q; b l; }" ./file.ini
To understand, it will be easier to format the line like this:
sed -nr "
# start processing when we found the word \"section2\"
/^\[section2\]/ { #the set of commands inside { } will be executed
#create a label \"l\" (https://www.grymoire.com/Unix/Sed.html#uh-58)
:l /^\s*[^#].*/ p;
# move on to the next line. For the first run it is the \"param1=abc\"
n;
# check if this line is beginning of new section. If yes - then exit.
/^\[/ q
#otherwise jump to the label \"l\"
b l
}
" file.ini
This script takes parameters as follows:
pars_ini.ksh <path to ini file> <name of section in ini file> <the name in name=value to return>
For example, if your ini has:
[ environment ]
a=x
[ DataBase_Sector ]
DSN = something
Then calling :
pars_ini.ksh /users/bubu_user/parameters.ini DataBase_Sector DSN
this will retrieve the value "something"
the script "pars_ini.ksh" :
#!/bin/ksh
#INI_FILE=path/to/file.ini
#INI_SECTION=TheSection
# BEGIN parse-ini-file.sh
# SET UP THE MINIMUM VARS FIRST
alias sed=/usr/local/bin/sed
INI_FILE=$1
INI_SECTION=$2
INI_NAME=$3
INI_VALUE=""
eval `sed -e 's/[[:space:]]*\=[[:space:]]*/=/g' \
-e 's/;.*$//' \
-e 's/[[:space:]]*$//' \
-e 's/^[[:space:]]*//' \
-e "s/^\(.*\)=\([^\"']*\)$/\1=\"\2\"/" \
< $INI_FILE \
| sed -n -e "/^\[$INI_SECTION\]/,/^\s*\[/{/^[^;].*\=.*/p;}"`
TEMP_VALUE=`echo "$"$INI_NAME`
echo `eval echo $TEMP_VALUE`
This implementation uses awk and has the following advantages:
Will only return the first matching entry
Ignores lines that start with a ;
Trims leading and trailing whitespace, but not internal whitespace
Formatted version:
awk -F '=' '/^\s*database_version\s*=/ {
sub(/^ +/, "", $2);
sub(/ +$/, "", $2);
print $2;
exit;
}' parameters.ini
One-liner:
awk -F '=' '/^\s*database_version\s*=/ { sub(/^ +/, "", $2); sub(/ +$/, "", $2); print $2; exit; }' parameters.ini
You can use the CSV parser xsv to parse INI data.
cargo install xsv
$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
$ xsv select -d "=" - <<< "$( cat /etc/*release )" | xsv search --no-headers --select 1 "DISTRIB_CODENAME" | xsv select 2
xenial
or from a file.
$ xsv select -d "=" - file.ini | xsv search --no-headers --select 1 "DISTRIB_CODENAME" | xsv select 2
My version of the one-liner
#!/bin/bash
#Reader for MS Windows 3.1 Ini-files
#Usage: inireader.sh
# e.g.: inireader.sh win.ini ERRORS DISABLE
# would return value "no" from the section of win.ini
#[ERRORS]
#DISABLE=no
INIFILE=$1
SECTION=$2
ITEM=$3
cat $INIFILE | sed -n /^\[$SECTION\]/,/^\[.*\]/p | grep "^[[:space:]]*$ITEM[[:space:]]*=" | sed "s/.*=[[:space:]]*//"
Just finished writing my own parser. I tried to use the various parsers found here; none seems to work with both ksh93 (AIX) and bash (Linux).
It's old programming style - parsing line by line. Pretty fast since it uses few external commands. A bit slower because of all the eval calls required for the dynamic name of the array.
The ini supports 3 special syntaxes:
includefile=ini file -->
Load an additional ini file. Useful for splitting an ini into multiple files, or re-using some piece of configuration
includedir=directory -->
Same as includefile, but include a complete directory
includesection=section -->
Copy an existing section to the current section.
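For illustration, a made-up main.ini using these three syntaxes might look like this (all file, section and key names here are invented):
[common]
log_dir=/var/log/myapp
[webserver]
includesection=common
port=8080
includefile=database.ini
includedir=conf.d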
I used all those syntaxes to build pretty complex, re-usable ini files. Useful for installing products when setting up a new OS - we do that a lot.
Values can be accessed with ${ini[$section.$item]}. The array MUST be defined before calling this.
Have fun. Hope it's useful for someone else!
function Show_Debug {
[[ $DEBUG = YES ]] && echo "DEBUG $@"
}
function Fatal {
echo "$@. Script aborted"
exit 2
}
#-------------------------------------------------------------------------------
# This function load an ini file in the array "ini"
# The "ini" array must be defined in the calling program (typeset -A ini)
#
# It could be any array name, the default array name is "ini".
#
# There is heavy usage of "eval" since ksh and bash do not support
# reference variable. The name of the ini is passed as variable, and must
# be "eval" at run-time to work. Very specific syntax was used and must be
# understood before making any modifications.
#
# It greatly complexifies the program, but adds flexibility.
#-------------------------------------------------------------------------------
function Load_Ini {
Show_Debug "$0($@)"
typeset ini_file="$1"
# Name of the array to fill. By default, it's "ini"
typeset ini_array_name="${2:-ini}"
typeset section variable value line my_section file subsection value_array include_directory all_index index sections pre_parse
typeset LF="
"
if [[ ! -s $ini_file ]]; then
Fatal "The ini file is empty or absent in $0 [$ini_file]"
fi
include_directory=$(dirname $ini_file)
include_directory=${include_directory:-$(pwd)}
Show_Debug "include_directory=$include_directory"
section=""
# Since this code support both bash and ksh93, you cannot use
# the syntax "echo xyz|while read line". bash doesn't work like
# that.
# It forces the use of "<<<", introduced in bash and ksh93.
Show_Debug "Reading file $ini_file and putting the results in array $ini_array_name"
pre_parse="$(sed 's/^ *//g;s/#.*//g;s/ *$//g' <$ini_file | egrep -v '^$')"
while read line; do
if [[ ${line:0:1} = "[" ]]; then # Is the line starting with "["?
# Replace [section_name] to section_name by removing the first and last character
section="${line:1}"
section="${section%\]}"
eval "sections=\${$ini_array_name[sections_list]}"
sections="$sections${sections:+ }$section"
eval "$ini_array_name[sections_list]=\"$sections\""
Show_Debug "$ini_array_name[sections_list]=\"$sections\""
eval "$ini_array_name[$section.exist]=YES"
Show_Debug "$ini_array_name[$section.exist]='YES'"
else
variable=${line%%=*} # content before the =
value=${line#*=} # content after the =
if [[ $variable = includefile ]]; then
# Include a single file
Load_Ini "$include_directory/$value" "$ini_array_name"
continue
elif [[ $variable = includedir ]]; then
# Include a directory
# If the value doesn't start with a /, add the calculated include_directory
if [[ $value != /* ]]; then
value="$include_directory/$value"
fi
# go thru each file
for file in $(ls $value/*.ini 2>/dev/null); do
if [[ $file != *.ini ]]; then continue; fi
# Load a single file
Load_Ini "$file" "$ini_array_name"
done
continue
elif [[ $variable = includesection ]]; then
# Copy an existing section into the current section
eval "all_index=\"\${!$ini_array_name[@]}\""
# It's not necessarily fast. Need to go thru all the array
for index in $all_index; do
# Only if it is the requested section
if [[ $index = $value.* ]]; then
# Evaluate the subsection [section.subsection] --> subsection
subsection=${index#*.}
# Get the current value (source section)
eval "value_array=\"\${$ini_array_name[$index]}\""
# Assign the value to the current section
# The $value_array must be resolved on the second pass of the eval, so make sure the
# first pass doesn't resolve it (\$value_array instead of $value_array).
# It must be evaluated on the second pass in case there is special character like $1,
# or ' or " in it (code).
eval "$ini_array_name[$section.$subsection]=\"\$value_array\""
Show_Debug "$ini_array_name[$section.$subsection]=\"$value_array\""
fi
done
fi
# Add the value to the array
eval "current_value=\"\${$ini_array_name[$section.$variable]}\""
# If there's already something for this field, add it with the current
# content separated by a LF (line_feed)
new_value="$current_value${current_value:+$LF}$value"
# Assign the content
# The $new_value must be resolved on the second pass of the eval, so make sure the
# first pass doesn't resolve it (\$new_value instead of $new_value).
# It must be evaluated on the second pass in case there is special character like $1,
# or ' or " in it (code).
eval "$ini_array_name[$section.$variable]=\"\$new_value\""
Show_Debug "$ini_array_name[$section.$variable]=\"$new_value\""
fi
done <<< "$pre_parse"
Show_Debug "exit $0($@)\n"
}
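A minimal usage sketch, continuing the made-up example above (note that the target array must exist before the call, as mentioned earlier):
typeset -A ini                      # declare the associative array first
Load_Ini "main.ini"
echo "${ini[webserver.port]}"       # values are accessed as ${ini[$section.$item]}
echo "${ini[sections_list]}"        # space-separated list of the sections seen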
When I use a password in base64, I put the separator ":" because the base64 string may have "=". For example (I use ksh):
> echo "Abc123" | base64
QWJjMTIzCg==
In parameters.ini put the line pass:QWJjMTIzCg==, and finally:
> PASS=`awk -F":" '/pass/ {print $2 }' parameters.ini | base64 --decode`
> echo "$PASS"
Abc123
If the line has spaces like "pass : QWJjMTIzCg== " add | tr -d ' ' to trim them:
> PASS=`awk -F":" '/pass/ {print $2 }' parameters.ini | tr -d ' ' | base64 --decode`
> echo "[$PASS]"
[Abc123]
This uses the system perl and clean regular expressions:
cat parameters.ini | perl -0777ne 'print "$1" if /\[\s*parameters\.ini\s*\][\s\S]*?\sdatabase_version\s*=\s*(.*)/'
Karen Gabrielyan's answer above was the best, but some environments don't have awk, such as a typical busybox, so I changed that answer to the code below.
trim()
{
local trimmed="$1"
# Strip leading space.
trimmed="${trimmed## }"
# Strip trailing space.
trimmed="${trimmed%% }"
echo "$trimmed"
}
function parseIniFile() { #accepts the name of the file to parse as argument ($1)
#declare syntax below (-gA) only works with bash 4.2 and higher
unset g_iniProperties
declare -gA g_iniProperties
currentSection=""
while read -r line
do
if [[ $line = [* ]] ; then
if [[ $line = [* ]] ; then
currentSection=$(echo $line | sed -e 's/\r//g' | tr -d "[]")
fi
else
if [[ $line = *=* ]] ; then
cleanLine=$(echo $line | sed -e 's/\r//g')
key=$(trim $currentSection.$(echo $cleanLine | cut -d'=' -f1))
value=$(trim $(echo $cleanLine | cut -d'=' -f2))
g_iniProperties[$key]=$value
fi
fi;
done < $1
}
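Usage is the same as with the original version (a minimal sketch, repeated from above):
parseIniFile "/path/to/myFile.ini"
for key in "${!g_iniProperties[@]}"; do
echo "Found key/value $key = ${g_iniProperties[$key]}"
done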
If Python is available, the following will read all the sections, keys and values and save them in variables with their names following the format "[section]_[key]". Python can read .ini files properly, so we make use of it.
#!/bin/bash
eval $(python3 << EOP
from configparser import SafeConfigParser
config = SafeConfigParser()
config.read("config.ini")
for section in config.sections():
for (key, val) in config.items(section):
print(section + "_" + key + "=\"" + val + "\"")
EOP
)
echo "Environment_type: ${Environment_type}"
echo "Environment_name: ${Environment_name}"
config.ini
[Environment]
type = DEV
name = D01
If using sections, this will do the job :
Example raw output :
$ ./settings
[section]
SETTING_ONE=this is setting one
SETTING_TWO=This is the second setting
ANOTHER_SETTING=This is another setting
Regexp parsing :
$ ./settings | sed -n -E "/^\[.*\]/{s/\[(.*)\]/\1/;h;n;};/^[a-zA-Z]/{s/#.*//;G;s/([^ ]*) *= *(.*)\n(.*)/\3_\1='\2'/;p;}"
section_SETTING_ONE='this is setting one'
section_SETTING_TWO='This is the second setting'
section_ANOTHER_SETTING='This is another setting'
Now all together :
$ eval "$(./settings | sed -n -E "/^\[.*\]/{s/\[(.*)\]/\1/;h;n;};/^[a-zA-Z]/{s/#.*//;G;s/([^ ]*) *= *(.*)\n(.*)/\3_\1='\2'/;p;}")"
$ echo $section_SETTING_TWO
This is the second setting
I have a nice one-liner (assuming you have php and jq installed):
cat file.ini | php -r "echo json_encode(parse_ini_string(file_get_contents('php://stdin'), true, INI_SCANNER_RAW));" | jq '.section.key'
This thread does not have enough solutions to choose from, so here is mine; it does not require tools like sed or awk:
grep '^\[section\]' -A 999 config.ini | tail -n +2 | grep -B 999 '^\[' | head -n -1 | grep '^key' | cut -d '=' -f 2
If you expect sections with more than 999 lines, feel free to adapt the example above. Note that you may want to trim the resulting value to remove spaces or a comment string after the value. Remove the ^ if you need to match keys that do not start at the beginning of the line, as in the example in the question. Better, in such a case, match explicitly for whitespace and tabs.
If you have multiple values in a given section you want to read, but want to avoid reading the file multiple times:
CONFIG_SECTION=$(grep '^\[section\]' -A 999 config.ini | tail -n +2 | grep -B 999 '^\[' | head -n -1)
KEY1=$(echo ${CONFIG_SECTION} | tr ' ' '\n' | grep key1 | cut -d '=' -f 2)
echo "KEY1=${KEY1}"
KEY2=$(echo ${CONFIG_SECTION} | tr ' ' '\n' | grep key2 | cut -d '=' -f 2)
echo "KEY2=${KEY2}"
I have a VAR which contains:
list_data="toto
titi
tata
tete"
My array can contain, as in this example, three values or more:
arraytest[0]="Hello|Test|env|tata|POLO|GER|GO|"
arraytest[1]="GOODNIGHT|Test2|env2|tete|GOLF|ITA|NOTGO|"
arraytest[2]="AFTER|Test3|env3|JAJA|CIT|FRA|GO|"
and my string is
string="INSERT"
What I want to do is add a special string at the end of each line that contains one of the values from list_data.
For example :
arraytest[0]="Hello|Test|env|tata|POLO|GER|GO|INSERT"
arraytest[1]="GOODNIGHT|Test2|env2|tete|GOLF|ITA|NOTGO|INSERT"
arraytest[2]="AFTER|Test3|env3|JAJA|CIT|FRA|GO|"
I tried this:
echo ${arraytest[*]} | sed -i 's/$/ $string'
Please Help.
Thank you.
bash solution:
string="INSERT"
pat="(${list_data//$'\n'/|})"
for i in "${!arraytest[@]}"; do
[[ "${arraytest[$i]}" =~ $pat ]] && arraytest[$i]+=$string
done
The final arraytest contents:
echo ${arraytest[*]}
Hello|Test|env|tata|POLO|GER|GO|INSERT GOODNIGHT|Test2|env2|tete|GOLF|ITA|NOTGO|INSERT AFTER|Test3|env3|JAJA|CIT|FRA|GO|
Details:
pat="(${list_data//$'\n'/|})" - constructing regex pattern(pat contains (toto|titi|tata|tete))
${!arraytest[@]} - the basic format is ${!name[@]}: if name is an array variable, this expands to the list of array indices (keys) assigned in name.
Loop over the individual elements of the array (rather than ${array[*]} )
for ((i=0 ; i<${#arraytest[*]} ; i++))
do
echo ${arraytest[${i}]} | sed "s/$/|${string}/"
done
Shell script to print three words differently. I have tried:
{
a="Uname/pass#last"
echo $a | tr "/" "\n" | tr "#" "\n"
output is:
Uname
pass
last
}
I want it as
{Username- Uname
Password- pass
lastname-last}
Ok, I guess you want to add a prefix to each result:
printf 'Username\nPassword\nlastname' > /tmp/prefixes
a="Uname/pass#last"
echo "${a}" | tr '/#' '\n\n' | paste -d':' /tmp/prefixes -
i.e. paste together the output of /tmp/prefixes and the standard input (-), which receives the output of: echo ".../...#..." | tr '/#' '\n\n'
(and in the resulting output, separate the 2 with a : in this example, or whatever else you would want. Ex: - like in your question.)
and it outputs :
Username:Uname
Password:pass
lastname:last
(I know you wanted a - instead of a : but I give my example with : to better separate the "-" denoting the standard input, and the ":" denoting the field-separator character in the output. Just change -d':' into -d'-' to have a - instead.)
First off, I hope you're not going to manipulate important passwords in a shell script and external commands. There are some risks involved with that.
Defining the problem
I suspect you want to split a string encoding a user's username, password and surname into a three-line structure, adding tags to document which is which. For that, tr is insufficient.
However, it can be done inside the shell.
Example (bash, ksh):
function split_account_string {
typeset account=${1:?account string} uname pass last t
uname=${account%%/*}
last=${account##*#}
t=${account#$uname/}
pass=${t%#*}
[[ $uname/$pass#$last == "$account" ]] || return
echo "{Username-$uname"
echo "Password-$pass"
echo "lastname-$last}"
}
split_account_string "USER_A/seKreT#John.Doe"
This function will extract all tokens between the first / and the last # as the value of the password. If either one is missing, it will print nothing, and return an error status.
When run, this gives:
{Username-USER_A
Password-seKreT
lastname-John.Doe}
Use this simple script and get the output.
#!/bin/bash
a="Uname/pass#last"
array2=(`echo $a | tr "/" "\n" | tr "#" "\n"`)
array1=(`echo -e "Username\nPassword\nlastname"`)
i=${#array1[@]}
for (( j=0 ; j<$i ; j++ ))
do
echo "${array1[$j]}=${array2[$j]}"
done
I wish to provide a structured configuration file which is as easy as possible for a non-technical user to edit (unfortunately it has to be a file) and so I wanted to use YAML. I can't find any way of parsing this from a Unix shell script however.
Here is a bash-only parser that leverages sed and awk to parse simple yaml files:
function parse_yaml {
local prefix=$2
local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
sed -ne "s|^\($s\):|\1|" \
-e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
awk -F$fs '{
indent = length($1)/2;
vname[indent] = $2;
for (i in vname) {if (i > indent) {delete vname[i]}}
if (length($3) > 0) {
vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
}
}'
}
It understands files such as:
## global definitions
global:
debug: yes
verbose: no
debugging:
detailed: no
header: "debugging started"
## output
output:
file: "yes"
Which, when parsed using:
parse_yaml sample.yml
will output:
global_debug="yes"
global_verbose="no"
global_debugging_detailed="no"
global_debugging_header="debugging started"
output_file="yes"
It also understands YAML files generated by Ruby, which may include Ruby symbols, like:
---
:global:
:debug: 'yes'
:verbose: 'no'
:debugging:
:detailed: 'no'
:header: debugging started
:output: 'yes'
and will output the same as in the previous example.
typical use within a script is:
eval $(parse_yaml sample.yml)
parse_yaml accepts a prefix argument so that imported settings all have a common prefix (which will reduce the risk of namespace collisions).
parse_yaml sample.yml "CONF_"
yields:
CONF_global_debug="yes"
CONF_global_verbose="no"
CONF_global_debugging_detailed="no"
CONF_global_debugging_header="debugging started"
CONF_output_file="yes"
Note that previous settings in a file can be referred to by later settings:
## global definitions
global:
debug: yes
verbose: no
debugging:
detailed: no
header: "debugging started"
## output
output:
debug: $global_debug
Another nice usage is to first parse a defaults file and then the user settings, which works since the latter settings override the first ones:
eval $(parse_yaml defaults.yml)
eval $(parse_yaml project.yml)
I've written shyaml in python for YAML query needs from the shell command line.
Overview:
$ pip install shyaml ## installation
Example's YAML file (with complex features):
$ cat <<EOF > test.yaml
name: "MyName !!"
subvalue:
how-much: 1.1
things:
- first
- second
- third
other-things: [a, b, c]
maintainer: "Valentin Lab"
description: |
Multiline description:
Line 1
Line 2
EOF
Basic query:
$ cat test.yaml | shyaml get-value subvalue.maintainer
Valentin Lab
More complex looping query on complex values:
$ cat test.yaml | shyaml values-0 | \
while read -r -d $'\0' value; do
echo "RECEIVED: '$value'"
done
RECEIVED: '1.1'
RECEIVED: '- first
- second
- third'
RECEIVED: '2'
RECEIVED: 'Valentin Lab'
RECEIVED: 'Multiline description:
Line 1
Line 2'
A few key points:
all YAML types and syntax oddities are correctly handled, as multiline, quoted strings, inline sequences...
\0 padded output is available for solid multiline entry manipulation.
simple dotted notation to select sub-values (ie: subvalue.maintainer is a valid key).
access by index is provided to sequences (ie: subvalue.things.-1 is the last element of the subvalue.things sequence.)
access to all sequence/structs elements in one go for use in bash loops.
you can output whole subpart of a YAML file as ... YAML, which blend well for further manipulations with shyaml.
More sample and documentation are available on the shyaml github page or the shyaml PyPI page.
yq is a lightweight and portable command-line YAML processor
The aim of the project is to be the jq or sed of yaml files.
(https://github.com/mikefarah/yq#readme)
As an example (stolen straight from the documentation), given a sample.yaml file of:
---
bob:
item1:
cats: bananas
item2:
cats: apples
then
yq eval '.bob.*.cats' sample.yaml
will output
- bananas
- apples
My use case may or may not be quite the same as what this original post was asking, but it's definitely similar.
I need to pull in some YAML as bash variables. The YAML will never be more than one level deep.
YAML looks like so:
KEY: value
ANOTHER_KEY: another_value
OH_MY_SO_MANY_KEYS: yet_another_value
LAST_KEY: last_value
Output like-a dis:
KEY="value"
ANOTHER_KEY="another_value"
OH_MY_SO_MANY_KEYS="yet_another_value"
LAST_KEY="last_value"
I achieved the output with this line:
sed -e 's/:[^:\/\/]/="/g;s/$/"/g;s/ *=/=/g' file.yaml > file.sh
s/:[^:\/\/]/="/g finds : and replaces it with =", while ignoring :// (for URLs)
s/$/"/g appends " to the end of each line
s/ *=/=/g removes all spaces before =
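The generated file can then be pulled into the current shell with source (a minimal sketch, using the keys from the example above):
source file.sh
echo "$KEY"            # value
echo "$ANOTHER_KEY"    # another_value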
Given that Python3 and PyYAML are quite easy dependencies to meet nowadays, the following may help:
yaml() {
python3 -c "import yaml;print(yaml.safe_load(open('$1'))$2)"
}
VALUE=$(yaml ~/my_yaml_file.yaml "['a_key']")
It's possible to pass a small script to some interpreters, like Python. An easy way to do so using Ruby and its YAML library is the following:
$ RUBY_SCRIPT="data = YAML::load(STDIN.read); puts data['a']; puts data['b']"
$ echo -e '---\na: 1234\nb: 4321' | ruby -ryaml -e "$RUBY_SCRIPT"
1234
4321
where data is a hash (or array) with the values from the YAML.
As a bonus, it'll parse Jekyll's front matter just fine.
ruby -ryaml -e "puts YAML::load(open(ARGV.first).read)['tags']" example.md
Here is an extended version of Stefan Farestam's answer:
function parse_yaml {
local prefix=$2
local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
sed -ne "s|,$s\]$s\$|]|" \
-e ":1;s|^\($s\)\($w\)$s:$s\[$s\(.*\)$s,$s\(.*\)$s\]|\1\2: [\3]\n\1 - \4|;t1" \
-e "s|^\($s\)\($w\)$s:$s\[$s\(.*\)$s\]|\1\2:\n\1 - \3|;p" $1 | \
sed -ne "s|,$s}$s\$|}|" \
-e ":1;s|^\($s\)-$s{$s\(.*\)$s,$s\($w\)$s:$s\(.*\)$s}|\1- {\2}\n\1 \3: \4|;t1" \
-e "s|^\($s\)-$s{$s\(.*\)$s}|\1-\n\1 \2|;p" | \
sed -ne "s|^\($s\):|\1|" \
-e "s|^\($s\)-$s[\"']\(.*\)[\"']$s\$|\1$fs$fs\2|p" \
-e "s|^\($s\)-$s\(.*\)$s\$|\1$fs$fs\2|p" \
-e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" | \
awk -F$fs '{
indent = length($1)/2;
vname[indent] = $2;
for (i in vname) {if (i > indent) {delete vname[i]; idx[i]=0}}
if(length($2)== 0){ vname[indent]= ++idx[indent] };
if (length($3) > 0) {
vn=""; for (i=0; i<indent; i++) { vn=(vn)(vname[i])("_")}
printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, vname[indent], $3);
}
}'
}
This version supports the - notation and the short notation for dictionaries and lists. The following input:
global:
input:
- "main.c"
- "main.h"
flags: [ "-O3", "-fpic" ]
sample_input:
- { property1: value, property2: "value2" }
- { property1: "value3", property2: 'value 4' }
produces this output:
global_input_1="main.c"
global_input_2="main.h"
global_flags_1="-O3"
global_flags_2="-fpic"
global_sample_input_1_property1="value"
global_sample_input_1_property2="value2"
global_sample_input_2_property1="value3"
global_sample_input_2_property2="value 4"
As you can see, the - items automatically get numbered in order to obtain different variable names for each item. In bash there are no multidimensional arrays, so this is one way to work around that. Multiple levels are supported.
To work around the problem with trailing white space mentioned by @briceburg, one should enclose the values in single or double quotes. However, there are still some limitations: expansion of the dictionaries and lists can produce wrong results when values contain commas. Also, more complex structures like values spanning multiple lines (like ssh-keys) are not (yet) supported.
A few words about the code: The first sed command expands the short form of dictionaries { key: value, ...} to regular and converts them to more simple yaml style. The second sed call does the same for the short notation of lists and converts [ entry, ... ] to an itemized list with the - notation. The third sed call is the original one that handled normal dictionaries, now with the addition to handle lists with - and indentations. The awk part introduces an index for each indentation level and increases it when the variable name is empty (i.e. when processing a list). The current value of the counters are used instead of the empty vname. When going up one level, the counters are zeroed.
Edit: I have created a github repository for this.
Moving my answer from How to convert a json response into yaml in bash, since this seems to be the authoritative post on dealing with YAML text parsing from command line.
I would like to add details about the yq YAML implementations. Since there are two implementations of this YAML parser lying around, both named yq, it is hard to tell which one is in use without looking at the implementation's DSL. The two available implementations are
kislyuk/yq - The more often talked about version, which is a wrapper over jq, written in Python using the PyYAML library for YAML parsing
mikefarah/yq - A Go implementation, with its own dynamic DSL using the go-yaml v3 parser.
Both are available for installation via standard installation package managers on almost all major distributions
kislyuk/yq - Installation instructions
mikefarah/yq - Installation instructions
Both the versions have some pros and cons over the other, but a few valid points to highlight (adopted from their repo instructions)
kislyuk/yq
Since the DSL is adopted completely from jq, for users familiar with the latter, parsing and manipulation become quite straightforward
Supports mode to preserve YAML tags and styles, but loses comments during the conversion. Since jq doesn't preserve comments, during the round-trip conversion, the comments are lost.
As part of the package, XML support is built in. An executable, xq, which transcodes XML to JSON using xmltodict and pipes it to jq, on which you can apply the same DSL to perform CRUD operations on the objects and round-trip the output back to XML.
Supports in-place edit mode with -i flag (similar to sed -i)
mikefarah/yq
Prone to frequent changes in DSL, migration from 2.x - 3.x
Rich support for anchors, styles and tags. But lookout for bugs once in a while
A relatively simple Path expression syntax to navigate and match yaml nodes
Supports YAML->JSON, JSON->YAML formatting and pretty printing YAML (with comments)
Supports in-place edit mode with -i flag (similar to sed -i)
Supports coloring the output YAML with -C flag (not applicable for JSON output) and indentation of the sub elements (default at 2 spaces)
Supports Shell completion for most shells - Bash, zsh (because of powerful support from spf13/cobra used to generate CLI flags)
My take on the following YAML (referenced in another answer as well) with both the versions
root_key1: this is value one
root_key2: "this is value two"
drink:
state: liquid
coffee:
best_served: hot
colour: brown
orange_juice:
best_served: cold
colour: orange
food:
state: solid
apple_pie:
best_served: warm
root_key_3: this is value three
Various actions to be performed with both the implementations (some frequently used operations)
Modifying node value at root level - Change value of root_key2
Modifying array contents, adding value - Add property to coffee
Modifying array contents, deleting value - Delete property from orange_juice
Printing key/value pairs with paths - For all items under food
Using kislyuk/yq
yq -y '.root_key2 |= "this is a new value"' yaml
yq -y '.drink.coffee += { time: "always"}' yaml
yq -y 'del(.drink.orange_juice.colour)' yaml
yq -r '.food|paths(scalars) as $p | [($p|join(".")), (getpath($p)|tojson)] | @tsv' yaml
Which is pretty straightforward. All you need is to transcode jq JSON output back into YAML with the -y flag.
Using mikefarah/yq
yq w yaml root_key2 "this is a new value"
yq w yaml drink.coffee.time "always"
yq d yaml drink.orange_juice.colour
yq r yaml --printMode pv "food.**"
As of today, Dec 21st 2020, yq v4 is in beta and supports much more powerful path expressions, with a DSL similar to jq's. Read the transition notes - Upgrading from V3
I just wrote a parser that I called Yay! (Yaml ain't Yamlesque!) which parses Yamlesque, a small subset of YAML. So, if you're looking for a 100% compliant YAML parser for Bash then this isn't it. However, to quote the OP, if you want a structured configuration file which is as easy as possible for a non-technical user to edit that is YAML-like, this may be of interest.
It's inspired by the earlier answer but writes associative arrays (yes, it requires Bash 4.x) instead of basic variables. It does so in a way that allows the data to be parsed without prior knowledge of the keys so that data-driven code can be written.
As well as the key/value array elements, each array has a keys array containing a list of key names, a children array containing names of child arrays and a parent key that refers to its parent.
This is an example of Yamlesque:
root_key1: this is value one
root_key2: "this is value two"
drink:
state: liquid
coffee:
best_served: hot
colour: brown
orange_juice:
best_served: cold
colour: orange
food:
state: solid
apple_pie:
best_served: warm
root_key_3: this is value three
Here is an example showing how to use it:
#!/bin/bash
# An example showing how to use Yay
. /usr/lib/yay
# helper to get array value at key
value() { eval echo \${$1[$2]}; }
# print a data collection
print_collection() {
for k in $(value $1 keys)
do
echo "$2$k = $(value $1 $k)"
done
for c in $(value $1 children)
do
echo -e "$2$c\n$2{"
print_collection $c " $2"
echo "$2}"
done
}
yay example
print_collection example
which outputs:
root_key1 = this is value one
root_key2 = this is value two
root_key_3 = this is value three
example_drink
{
state = liquid
example_coffee
{
best_served = hot
colour = brown
}
example_orange_juice
{
best_served = cold
colour = orange
}
}
example_food
{
state = solid
example_apple_pie
{
best_served = warm
}
}
And here is the parser:
yay_parse() {
# find input file
for f in "$1" "$1.yay" "$1.yml"
do
[[ -f "$f" ]] && input="$f" && break
done
[[ -z "$input" ]] && exit 1
# use given dataset prefix or imply from file name
[[ -n "$2" ]] && local prefix="$2" || {
local prefix=$(basename "$input"); prefix=${prefix%.*}
}
echo "declare -g -A $prefix;"
local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
sed -n -e "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" "$input" |
awk -F$fs '{
indent = length($1)/2;
key = $2;
value = $3;
# No prefix or parent for the top level (indent zero)
root_prefix = "'$prefix'_";
if (indent ==0 ) {
prefix = ""; parent_key = "'$prefix'";
} else {
prefix = root_prefix; parent_key = keys[indent-1];
}
keys[indent] = key;
# remove keys left behind if prior row was indented more than this row
for (i in keys) {if (i > indent) {delete keys[i]}}
if (length(value) > 0) {
# value
printf("%s%s[%s]=\"%s\";\n", prefix, parent_key , key, value);
printf("%s%s[keys]+=\" %s\";\n", prefix, parent_key , key);
} else {
# collection
printf("%s%s[children]+=\" %s%s\";\n", prefix, parent_key , root_prefix, key);
printf("declare -g -A %s%s;\n", root_prefix, key);
printf("%s%s[parent]=\"%s%s\";\n", root_prefix, key, prefix, parent_key);
}
}'
}
# helper to load yay data file
yay() { eval $(yay_parse "$@"); }
There is some documentation in the linked source file and below is a short explanation of what the code does.
The yay_parse function first locates the input file or exits with an exit status of 1. Next, it determines the dataset prefix, either explicitly specified or derived from the file name.
It writes valid bash commands to its standard output that, if executed, define arrays representing the contents of the input data file. The first of these defines the top-level array:
echo "declare -g -A $prefix;"
Note that array declarations are associative (-A) which is a feature of Bash version 4. Declarations are also global (-g) so they can be executed in a function but be available to the global scope like the yay helper:
yay() { eval $(yay_parse "$@"); }
The input data is initially processed with sed. It drops lines that don't match the Yamlesque format specification before delimiting the valid Yamlesque fields with an ASCII File Separator character and removing any double-quotes surrounding the value field.
local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
sed -n -e "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" "$input" |
The two expressions are similar; they differ only because the first one picks out quoted values where as the second one picks out unquoted ones.
The File Separator (28/hex 12/octal 034) is used because, as a non-printable character, it is unlikely to be in the input data.
The result is piped into awk which processes its input one line at a time. It uses the FS character to assign each field to a variable:
indent = length($1)/2;
key = $2;
value = $3;
All lines have an indent (possibly zero) and a key, but they don't all have a value. It computes an indent level for the line by dividing the length of the first field, which contains the leading whitespace, by two. The top-level items without any indent are at indent level zero.
Next, it works out what prefix to use for the current item. This is what gets added to a key name to make an array name. There's a root_prefix for the top-level array which is defined as the data set name and an underscore:
root_prefix = "'$prefix'_";
if (indent ==0 ) {
prefix = ""; parent_key = "'$prefix'";
} else {
prefix = root_prefix; parent_key = keys[indent-1];
}
The parent_key is the key at the indent level above the current line's indent level and represents the collection that the current line is part of. The collection's key/value pairs will be stored in an array with its name defined as the concatenation of the prefix and parent_key.
For the top level (indent level zero) the data set prefix is used as the parent key so it has no prefix (it's set to ""). All other arrays are prefixed with the root prefix.
Next, the current key is inserted into an (awk-internal) array containing the keys. This array persists throughout the whole awk session and therefore contains keys inserted by prior lines. The key is inserted into the array using its indent as the array index.
keys[indent] = key;
Because this array contains keys from previous lines, any keys with an indent level greater than the current line's indent level are removed:
for (i in keys) {if (i > indent) {delete keys[i]}}
This leaves the keys array containing the key-chain from the root at indent level 0 to the current line. It removes stale keys that remain when the prior line was indented deeper than the current line.
The final section outputs the bash commands: an input line without a value starts a new indent level (a collection in YAML parlance) and an input line with a value adds a key to the current collection.
The collection's name is the concatenation of the current line's prefix and parent_key.
When a key has a value, a key with that value is assigned to the current collection like this:
printf("%s%s[%s]=\"%s\";\n", prefix, parent_key , key, value);
printf("%s%s[keys]+=\" %s\";\n", prefix, parent_key , key);
The first statement outputs the command to assign the value to an associative array element named after the key and the second one outputs the command to add the key to the collection's space-delimited keys list:
<current_collection>[<key>]="<value>";
<current_collection>[keys]+=" <key>";
When a key doesn't have a value, a new collection is started like this:
printf("%s%s[children]+=\" %s%s\";\n", prefix, parent_key , root_prefix, key);
printf("declare -g -A %s%s;\n", root_prefix, key);
The first statement outputs the command to add the new collection to the current collection's space-delimited children list and the second one outputs the command to declare a new associative array for the new collection:
<current_collection>[children]+=" <new_collection>"
declare -g -A <new_collection>;
All of the output from yay_parse can be parsed as bash commands by the bash eval or source built-in commands.
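For example, here is a minimal end-to-end sketch (the file sample.yml and its keys are invented for illustration):
$ cat sample.yml
message: "hello"
settings:
  retries: 3
$ yay sample.yml                      # dataset prefix "sample" implied from the file name
$ echo "${sample[message]}"
hello
$ echo "${sample_settings[retries]}"
3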
It's hard to say because it depends on what you want the parser to extract from your YAML document. For simple cases, you might be able to use grep, cut, awk, etc. For more complex parsing you would need a full-blown parsing library such as Python's PyYAML or YAML::Perl.
Another option is to convert the YAML to JSON, then use jq to interact with the JSON representation either to extract information from it or edit it.
I wrote a simple bash script that contains this glue - see the Y2J project on GitHub.
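The glue amounts to something like this (a sketch of the idea, not the actual Y2J code; assumes Python with PyYAML installed):
# Illustrative helper: convert YAML on stdin to JSON on stdout, then query it with jq.
y2j() { python -c 'import sys, yaml, json; json.dump(yaml.safe_load(sys.stdin), sys.stdout)'; }
y2j < file.yml | jq '.'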
A Perl one-liner can turn a flat file of key: value pairs into shell assignments:
perl -ne 'chomp; printf qq/%s="%s"\n/, split(/\s*:\s*/,$_,2)' file.yml > file.sh
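For example (hypothetical file.yml contents, shown for illustration):
$ cat file.yml
host: example.com
port: 8080
$ perl -ne 'chomp; printf qq/%s="%s"\n/, split(/\s*:\s*/,$_,2)' file.yml
host="example.com"
port="8080"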
I convert YAML to JSON using Python and do my processing in jq:
python -c "import yaml; import json; from pathlib import Path; print(json.dumps(yaml.safe_load(Path('file.yml').read_text())))" | jq '.'
If you need a single value you could use a tool which converts your YAML document to JSON and feed it to jq, for example yq.
Content of sample.yaml:
---
bob:
  item1:
    cats: bananas
  item2:
    cats: apples
  thing:
    cats: oranges
Example:
$ yq -r '.bob["thing"]["cats"]' sample.yaml
oranges
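Since this yq accepts jq filters, you can also list the keys under a node (a small sketch using the same sample.yaml):
$ yq -r '.bob | keys[]' sample.yaml
item1
item2
thing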
I know this is very specific, but I think my answer could be helpful for certain users.
If you have node and npm installed on your machine, you can use js-yaml.
First install:
npm i -g js-yaml
# or locally
npm i js-yaml
then in your bash script
#!/bin/bash
js-yaml your-yaml-file.yml
Also, if you are using jq you can do something like this:
#!/bin/bash
json="$(js-yaml your-yaml-file.yml)"
aproperty="$(jq '.aproperty' <<< "$json")"
echo "$aproperty"
js-yaml converts a YAML file to a JSON string, so you can then use the output with any JSON parser on your Unix system.
A quick way to do this now (the previous options didn't work for me):
sudo wget https://github.com/mikefarah/yq/releases/download/v4.4.1/yq_linux_amd64 -O /usr/bin/yq &&\
sudo chmod +x /usr/bin/yq
Example asd.yaml:
a_list:
  - key1: value1
    key2: value2
    key3: value3
parsing root:
user@vm:~$ yq e '.' asd.yaml
a_list:
  - key1: value1
    key2: value2
    key3: value3
parsing key3:
user@vm:~$ yq e '.a_list[0].key3' asd.yaml
value3
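To use the value in a script, capture it in the usual way (plain command substitution):
key3="$(yq e '.a_list[0].key3' asd.yaml)"
echo "$key3"   # prints: value3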
If you have Python 2 and PyYAML, you can use this parser I wrote called parse_yaml.py. Some of the neater things it does are letting you choose a prefix (in case you have more than one file with similar variables) and letting you pick a single value from a YAML file.
For example if you have these yaml files:
staging.yaml:
db:
  type: sqllite
  host: 127.0.0.1
  user: dev
  password: password123
prod.yaml:
db:
  type: postgres
  host: 10.0.50.100
  user: postgres
  password: password123
You can load both without conflict.
$ eval $(python parse_yaml.py prod.yaml --prefix prod --cap)
$ eval $(python parse_yaml.py staging.yaml --prefix stg --cap)
$ echo $PROD_DB_HOST
10.0.50.100
$ echo $STG_DB_HOST
127.0.0.1
And even cherry-pick the values you want.
$ prod_user=$(python parse_yaml.py prod.yaml --get db_user)
$ prod_port=$(python parse_yaml.py prod.yaml --get db_port --default 5432)
$ echo $prod_user
postgres
$ echo $prod_port
5432
You could use an equivalent of yq that is written in golang:
./go-yg -yamlFile /home/user/dev/ansible-firefox/defaults/main.yml -key firefox_version
returns:
62.0.3
Whenever you need a solution for "How to work with YAML/JSON/compatible data from a shell script" which works on just about every OS with Python (*nix, OSX, Windows), consider yamlpath, which provides several command-line tools for reading, writing, searching, and merging YAML, EYAML, JSON, and compatible files. Since just about every OS either comes with Python pre-installed or it is trivial to install, this makes yamlpath highly portable. Even more interesting: this project defines an intuitive path language with very powerful, command-line-friendly syntax that enables accessing one or more nodes.
To your specific question: after installing yamlpath using Python's native package manager (pip) or your OS's package manager (yamlpath is available via RPM for some OSes):
#!/bin/bash
# Read values directly from YAML (or EYAML, JSON, etc) for use in this shell script:
myShellVar=$(yaml-get --query=any.path.no[matter%how].complex source-file.yaml)
# Use the value any way you need:
echo "Retrieved ${myShellVar}"
# Perhaps change the value and write it back:
myShellVar="New Value"
yaml-set --change=/any/path/no[matter%how]/complex --value="$myShellVar" source-file.yaml
You didn't specify that the data was a simple Scalar value though, so let's up the ante. What if the result you want is an Array? Even more challenging, what if it's an Array-of-Hashes and you only want one property of each result? Suppose further that your data is actually spread out across multiple YAML files and you need all the results in a single query. That's a much more interesting question to demonstrate with. So, suppose you have these two YAML files:
File: data1.yaml
---
baubles:
  - name: Doohickey
    sku: 0-000-1
    price: 4.75
    weight: 2.7g
  - name: Doodad
    sku: 0-000-2
    price: 10.5
    weight: 5g
  - name: Oddball
    sku: 0-000-3
    price: 25.99
    weight: 25kg
File: data2.yaml
---
baubles:
  - name: Fob
    sku: 0-000-4
    price: 0.99
    weight: 18mg
  - name: Doohickey
    price: 10.5
  - name: Oddball
    sku: 0-000-3
    description: This ball is odd
How would you report only the sku of every item in inventory after applying the changes from data2.yaml to data1.yaml, all from a shell script? Try this:
#!/bin/bash
baubleSKUs=($(yaml-merge --aoh=deep data1.yaml data2.yaml | yaml-get --query=/baubles/sku -))
for sku in "${baubleSKUs[@]}"; do
echo "Found bauble SKU: ${sku}"
done
You get exactly what you need from only a few lines of code:
Found bauble SKU: 0-000-1
Found bauble SKU: 0-000-2
Found bauble SKU: 0-000-3
Found bauble SKU: 0-000-4
As you can see, yamlpath turns very complex problems into trivial solutions. Note that the entire query was handled as a stream; no YAML files were changed by the query and there were no temp files.
I realize this is "yet another tool to solve the same question" but after reading the other answers here, yamlpath appears more portable and robust than most alternatives. It also fully understands YAML/JSON/compatible files and it does not need to convert YAML to JSON to perform requested operations. As such, comments within the original YAML file are preserved whenever you need to change data in the source YAML file. Like some alternatives, yamlpath is also portable across OSes. More importantly, yamlpath defines a query language that is extremely powerful, enabling very specialized/filtered data queries. It can even operate against results from disparate parts of the file in a single query.
If you want to get or set many values in the data at once -- including complex data like hashes/arrays/maps/lists -- yamlpath can do that. Want a value but don't know precisely where it is in the document? yamlpath can find it and give you the exact path(s). Need to merge multiple data file together, including from STDIN? yamlpath does that, too. Further, yamlpath fully comprehends YAML anchors and their aliases, always giving or changing exactly the data you expect whether it is a concrete or referenced value.
Disclaimer: I wrote and maintain yamlpath, which is based on ruamel.yaml, which is in turn based on PyYAML. As such, yamlpath is fully standards-compliant.
Complex parsing is easiest with a library such as Python's PyYAML or YAML::Perl.
If you want to parse all the YAML values into bash values, try this script. This will handle comments as well. See example usage below:
# pparse.py
import yaml
import sys

def parse_yaml(yml, name=''):
    if isinstance(yml, list):
        for data in yml:
            parse_yaml(data, name)
    elif isinstance(yml, dict):
        if (len(yml) == 1) and not isinstance(yml[list(yml.keys())[0]], list):
            print(str(name + '_' + list(yml.keys())[0] + '=' + str(yml[list(yml.keys())[0]]))[1:])
        else:
            for key in yml:
                parse_yaml(yml[key], name + '_' + key)

if __name__ == "__main__":
    yml = yaml.safe_load(open(sys.argv[1]))
    parse_yaml(yml)
test.yml
- folders:
  - temp_folder: datasets/outputs/tmp
  - keep_temp_folder: false
- MFA:
  - MFA: false
  - speaker_count: 1
  - G2P:
    - G2P: true
    - G2P_model: models/MFA/G2P/english_g2p.zip
    - input_folder: datasets/outputs/Youtube/ljspeech/wavs
    - output_dictionary: datasets/outputs/Youtube/ljspeech/dictionary.dict
  - dictionary: datasets/outputs/Youtube/ljspeech/dictionary.dict
  - acoustic_model: models/MFA/acoustic/english.zip
  - temp_folder: datasets/outputs/tmp
  - jobs: 4
  - align:
    - config: configs/MFA/align.yaml
    - dataset: datasets/outputs/Youtube/ljspeech/wavs
    - output_folder: datasets/outputs/Youtube/ljspeech-aligned
- TTS:
  - output_folder: datasets/outputs/Youtube
  - preprocess:
    - preprocess: true
    - config: configs/TTS_preprocess.yaml # Default Config
    - textgrid_folder: datasets/outputs/Youtube/ljspeech-aligned
    - output_duration_folder: datasets/outputs/Youtube/durations
    - sampling_rate: 44000 # Make sure sampling rate is same here as in preprocess config
Script where YAML values are needed:
yaml() {
    eval "$(python pparse.py "$1")"
}
yaml "test.yml"
# What python printed to bash:
folders_temp_folder=datasets/outputs/tmp
folders_keep_temp_folder=False
MFA_MFA=False
MFA_speaker_count=1
MFA_G2P_G2P=True
MFA_G2P_G2P_model=models/MFA/G2P/english_g2p.zip
MFA_G2P_input_folder=datasets/outputs/Youtube/ljspeech/wavs
MFA_G2P_output_dictionary=datasets/outputs/Youtube/ljspeech/dictionary.dict
MFA_dictionary=datasets/outputs/Youtube/ljspeech/dictionary.dict
MFA_acoustic_model=models/MFA/acoustic/english.zip
MFA_temp_folder=datasets/outputs/tmp
MFA_jobs=4
MFA_align_config=configs/MFA/align.yaml
MFA_align_dataset=datasets/outputs/Youtube/ljspeech/wavs
MFA_align_output_folder=datasets/outputs/Youtube/ljspeech-aligned
TTS_output_folder=datasets/outputs/Youtube
TTS_preprocess_preprocess=True
TTS_preprocess_config=configs/TTS_preprocess.yaml
TTS_preprocess_textgrid_folder=datasets/outputs/Youtube/ljspeech-aligned
TTS_preprocess_output_duration_folder=datasets/outputs/Youtube/durations
TTS_preprocess_sampling_rate=44000
Access variables with bash:
echo "$TTS_preprocess_sampling_rate";
>>> 44000
If you know which tags you are interested in and which YAML structure to expect, it is not that hard to write a simple YAML parser in Bash.
In the following example the parser reads a structured YAML file into environment variables, an array and an associative array.
Note: the complexity of this parser is tied to the structure of the YAML file. You will need a separate subroutine for each structured component of the YAML file. Highly structured YAML files might require a more sophisticated approach, e.g. a generic recursive descent parser.
The xmas.yaml file:
# Xmas YAML example
---
# Values
pear-tree: partridge
turtle-doves: 2.718
french-hens: 3

# Array
calling-birds:
  - huey
  - dewey
  - louie
  - fred

# Structure
xmas-fifth-day:
  calling-birds: four
  french-hens: 3
  golden-rings: 5
  partridges:
    count: 1
    location: "a pear tree"
  turtle-doves: two
The parser uses mapfile to read the file into memory as an array then cycles through each tag and creates environment variables.
pear-tree:, turtle-doves: and french-hens: end up as simple environment variables
calling-birds: becomes an array
The xmas-fifth-day: structure is represented as an associative array; however, you could encode these as plain environment variables if you are not using Bash 4.0 or later.
Comments and white space are ignored.
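The core mechanism is word splitting: each row is split into a TAG and a VALUE, roughly like this (a standalone sketch of the technique used throughout the script below):
mapfile CONFIG < xmas.yaml
LINE=( ${CONFIG[3]} )    # e.g. the row "pear-tree: partridge"
TAG=${LINE[0]}           # "pear-tree:"
unset LINE[0]
VALUE="${LINE[*]}"       # "partridge"
echo "$TAG -> $VALUE"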
#!/bin/bash
# -------------------------------------------------------------------
# A simple parser for the xmas.yaml file
# -------------------------------------------------------------------
#
# xmas.yaml tags
# # - Ignored
# - Blank lines are ignored
# --- - Initialiser for days-of-xmas
# pear-tree: partridge - a string
# turtle-doves: 2.718 - a string, no float type in Bash
# french-hens: 3 - a number
# calling-birds: - an array of strings
# - huey - calling-birds[0]
# - dewey
# - louie
# - fred
# xmas-fifth-day: - an associative array
# calling-birds: four - a string
# french-hens: 3 - a number
# golden-rings: 5 - a number
# partridges: - changes the key to partridges.xxx
# count: 1 - a number
# location: "a pear tree" - a string
# turtle-doves: two - a string
#
# This requires the following routines
# ParseXMAS
# parses #, ---, blank line
# unexpected tag error
# calls days-of-xmas
#
# days-of-xmas
# parses pear-tree, turtle-doves, french-hens
# calls calling-birds
# calls xmas-fifth-day
#
# calling-birds
# elements of the array
#
# xmas-fifth-day
# parses calling-birds, french-hens, golden-rings, turtle-doves
# calls partridges
#
# partridges
# parses partridges.count, partridges.location
#
function ParseXMAS()
{
# days-of-xmas
# parses pear-tree, turtle-doves, french-hens
# calls calling-birds
# calls xmas-fifth-day
#
function days-of-xmas()
{
unset PearTree TurtleDoves FrenchHens
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo " days-of-xmas[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "pear-tree:" ]
then
declare -g PearTree=$VALUE
elif [ "$TAG" = "turtle-doves:" ]
then
declare -g TurtleDoves=$VALUE
elif [ "$TAG" = "french-hens:" ]
then
declare -g FrenchHens=$VALUE
elif [ "$TAG" = "calling-birds:" ]
then
let CURRENT_ROW=$(($CURRENT_ROW + 1))
calling-birds
continue
elif [ "$TAG" = "xmas-fifth-day:" ]
then
let CURRENT_ROW=$(($CURRENT_ROW + 1))
xmas-fifth-day
continue
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
# time to bug out
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
# calling-birds
# elements of the array
function calling-birds()
{
unset CallingBirds
declare -ag CallingBirds
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo " calling-birds[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "-" ]
then
CallingBirds[${#CallingBirds[*]}]=$VALUE
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
# time to bug out
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
# xmas-fifth-day
# parses calling-birds, french-hens, golden-rings, turtle-doves
# calls partridges
#
function xmas-fifth-day()
{
unset XmasFifthDay
declare -Ag XmasFifthDay
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo " xmas-fifth-day[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "calling-birds:" ]
then
XmasFifthDay[CallingBirds]=$VALUE
elif [ "$TAG" = "french-hens:" ]
then
XmasFifthDay[FrenchHens]=$VALUE
elif [ "$TAG" = "golden-rings:" ]
then
XmasFifthDay[GOLDEN-RINGS]=$VALUE
elif [ "$TAG" = "turtle-doves:" ]
then
XmasFifthDay[TurtleDoves]=$VALUE
elif [ "$TAG" = "partridges:" ]
then
let CURRENT_ROW=$(($CURRENT_ROW + 1))
partridges
continue
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
# time to bug out
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
function partridges()
{
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo " partridges[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "count:" ]
then
XmasFifthDay[PARTRIDGES.COUNT]=$VALUE
elif [ "$TAG" = "location:" ]
then
XmasFifthDay[PARTRIDGES.LOCATION]=$VALUE
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
# time to bug out
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
# ===================================================================
# Load the configuration file
mapfile CONFIG < xmas.yaml
let ROWS=${#CONFIG[@]}
let CURRENT_ROW=0
# Parse the top-level tags: '#' comments, blank lines, and the '---' initialiser
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo "[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "---" ]
then
let CURRENT_ROW=$(($CURRENT_ROW + 1))
days-of-xmas
continue
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
echo "Unexpected tag at line $(($CURRENT_ROW + 1)): <${TAG}>={${VALUE}}"
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
echo =========================================
ParseXMAS
echo =========================================
declare -p PearTree
declare -p TurtleDoves
declare -p FrenchHens
declare -p CallingBirds
declare -p XmasFifthDay
This produces the following output
=========================================
[0] #=Xmas YAML example
[1] ---=
days-of-xmas[2] #=Values
days-of-xmas[3] pear-tree:=partridge
days-of-xmas[4] turtle-doves:=2.718
days-of-xmas[5] french-hens:=3
days-of-xmas[6] =
days-of-xmas[7] #=Array
days-of-xmas[8] calling-birds:=
calling-birds[9] -=huey
calling-birds[10] -=dewey
calling-birds[11] -=louie
calling-birds[12] -=fred
calling-birds[13] =
calling-birds[14] #=Structure
calling-birds[15] xmas-fifth-day:=
days-of-xmas[15] xmas-fifth-day:=
xmas-fifth-day[16] calling-birds:=four
xmas-fifth-day[17] french-hens:=3
xmas-fifth-day[18] golden-rings:=5
xmas-fifth-day[19] partridges:=
partridges[20] count:=1
partridges[21] location:="a pear tree"
partridges[22] turtle-doves:=two
xmas-fifth-day[22] turtle-doves:=two
=========================================
declare -- PearTree="partridge"
declare -- TurtleDoves="2.718"
declare -- FrenchHens="3"
declare -a CallingBirds=([0]="huey" [1]="dewey" [2]="louie" [3]="fred")
declare -A XmasFifthDay=([CallingBirds]="four" [PARTRIDGES.LOCATION]="\"a pear tree\"" [FrenchHens]="3" [GOLDEN-RINGS]="5" [PARTRIDGES.COUNT]="1" [TurtleDoves]="two" )
You can also consider using Grunt (The JavaScript Task Runner). It can be easily integrated with the shell, and it supports reading YAML (grunt.file.readYAML) and JSON (grunt.file.readJSON) files.
This can be achieved by creating a task in Gruntfile.js (or Gruntfile.coffee), e.g.:
module.exports = function (grunt) {
grunt.registerTask('foo', ['load_yml']);
grunt.registerTask('load_yml', function () {
var data = grunt.file.readYAML('foo.yml');
Object.keys(data).forEach(function (g) {
// ... switch (g) { case 'my_key':
});
});
};
then from the shell simply run grunt foo (check grunt --help for available tasks).
Furthermore, you can implement exec:foo tasks (grunt-exec) with input variables passed from your task (foo: { cmd: 'echo bar <%= foo %>' }) in order to print the output in whatever format you want, then pipe it into another command.
There is also a similar tool to Grunt called gulp, with the additional plugin gulp-yaml.
Install via: npm install --save-dev gulp-yaml
Sample usage:
var gulp = require('gulp');
var yaml = require('gulp-yaml');

gulp.src('./src/*.yml')
  .pipe(yaml())
  .pipe(gulp.dest('./dist/'))

gulp.src('./src/*.yml')
  .pipe(yaml({ space: 2 }))
  .pipe(gulp.dest('./dist/'))

gulp.src('./src/*.yml')
  .pipe(yaml({ safe: true }))
  .pipe(gulp.dest('./dist/'))
For more options to deal with the YAML format, check the YAML site for available projects, libraries and other resources which can help you parse that format.
Other tools:
Jshon
parses, reads and creates JSON
I know my answer is specific, but if one already has PHP and Symfony installed, it can be very handy to use Symfony's YAML parser.
For instance:
php -r "require '$SYMFONY_ROOT_PATH/vendor/autoload.php'; \
var_dump(\Symfony\Component\Yaml\Yaml::parse(file_get_contents('$YAML_FILE_PATH')));"
Here I simply used var_dump to output the parsed array but of course you can do much more... :)