Reading strings from JSON using ksh - shell

{
  "assignedTo": [
    "Daniel Raisor",
    "Dalton Leslie",
    "Logan Petro"
  ]
}
I want to extract Daniel Raisor, Dalton Leslie and Logan Petro and assign them to a single variable, like assignedTo=Daniel Raisor,Dalton Leslie,Logan Petro.
Also, my Unix doesn't have jq; I need to do this using grep or sed.

Use a proper JSON parser to parse JSON. Even if you don't have jq, there's a perfectly good one included with the Python interpreter, which the following example script uses:
#!/bin/ksh
# note: this was tested with ksh93u+ 2012-08-01
# to use with Python 3 rather than Python 2, replace "pipes" with "shlex" and "iteritems" with "items"
json='{"assignedTo": ["Daniel \"The Man\" Raisor","Dalton Leslie","Logan Petro"]}'
getEntries() {
  python -c '
import json, sys, pipes
content = json.load(sys.stdin)
for name, values in content.iteritems():
    print("%s=%s" % (pipes.quote(name), pipes.quote(",".join(values))))
'
}
eval "$(getEntries <<<"$json")"
echo "$assignedTo"
...properly emits the following output:
Daniel "The Man" Raisor,Dalton Leslie,Logan Petro


Bash: decode string with url escaped hex codes [duplicate]

I'm looking for a way to turn this:
hello &lt; world
to this:
hello < world
I could use sed, but how can this be accomplished without using cryptic regex?
Try recode (archived page; GitHub mirror; Debian page):
$ echo '&lt;' |recode html..ascii
<
Install on Linux and similar Unix-y systems:
$ sudo apt-get install recode
Install on Mac OS using:
$ brew install recode
With perl:
cat foo.html | perl -MHTML::Entities -pe 'decode_entities($_);'
With php from the command line:
cat foo.html | php -r 'while(($line=fgets(STDIN)) !== FALSE) echo html_entity_decode($line, ENT_QUOTES|ENT_HTML401);'
An alternative is to pipe through a web browser -- such as:
echo '&#33;' | w3m -dump -T text/html
This worked great for me in cygwin, where downloading and installing distributions are difficult.
This answer was found here
Using xmlstarlet:
echo 'hello &lt; world' | xmlstarlet unesc
A python 3.2+ version:
cat foo.html | python3 -c 'import html, sys; [print(html.unescape(l), end="") for l in sys.stdin]'
This answer is based on: Short way to escape HTML in Bash? which works fine for grabbing answers (using wget) on Stack Exchange and converting HTML to regular ASCII characters:
sed 's/&nbsp;/ /g; s/&amp;/\&/g; s/&lt;/\</g; s/&gt;/\>/g; s/&quot;/\"/g; s/&#39;/\'"'"'/g; s/&ldquo;/\"/g; s/&rdquo;/\"/g;'
Edit 1: April 7, 2017 - Added left double quote and right double quote conversion. This is part of bash script that web-scrapes SE answers and compares them to local code files here: Ask Ubuntu -
Code Version Control between local files and Ask Ubuntu answers
Edit June 26, 2017
Using sed was taking ~3 seconds to convert HTML to ASCII on a 1K line file from Ask Ubuntu / Stack Exchange. As such I was forced to use Bash built-in search and replace for ~1 second response time.
Here's the function:
LineOut="" # Make global
HTMLtoText () {
LineOut=$1 # Parm 1= Input line
# Replace external command: Line=$(sed 's/&/\&/g; s/</\</g;
# s/>/\>/g; s/"/\"/g; s/'/\'"'"'/g; s/“/\"/g;
# s/”/\"/g;' <<< "$Line") -- With faster builtin commands.
LineOut="${LineOut// / }"
LineOut="${LineOut//&/&}"
LineOut="${LineOut//</<}"
LineOut="${LineOut//>/>}"
LineOut="${LineOut//"/'"'}"
LineOut="${LineOut//'/"'"}"
LineOut="${LineOut//“/'"'}" # TODO: ASCII/ISO for opening quote
LineOut="${LineOut//”/'"'}" # TODO: ASCII/ISO for closing quote
} # HTMLtoText ()
On macOS, you can use the built-in command textutil (which is a handy utility in general):
echo '👋 hello &lt; world 🌐' | textutil -convert txt -format html -stdin -stdout
outputs:
👋 hello < world 🌐
Supporting the unescaping of all HTML entities with sed substitutions alone would require an impractically long list of commands, because every Unicode code point has at least two corresponding HTML entities.
But it can be done using only sed, grep, the Bourne shell and basic UNIX utilities (the GNU coreutils or equivalent):
#!/bin/sh

htmlEscDec2Hex() {
    file=$1
    [ ! -r "$file" ] && file=$(mktemp) && cat >"$file"

    printf -- \
        "$(sed 's/\\/\\\\/g;s/%/%%/g;s/&#[0-9]\{1,10\};/\&#x%x;/g' "$file")\n" \
        $(grep -o '&#[0-9]\{1,10\};' "$file" | tr -d '&#;')

    [ x"$1" != x"$file" ] && rm -f -- "$file"
}

htmlHexUnescape() {
    printf -- "$(
        sed 's/\\/\\\\/g;s/%/%%/g
            ;s/&#x\([0-9a-fA-F]\{1,8\}\);/\&#x0000000\1;/g
            ;s/&#x0*\([0-9a-fA-F]\{4\}\);/\\u\1/g
            ;s/&#x0*\([0-9a-fA-F]\{8\}\);/\\U\1/g' )\n"
}

htmlEscDec2Hex "$1" | htmlHexUnescape \
    | sed -f named_entities.sed
Note, however, that a printf implementation supporting \uHHHH and \UHHHHHHHH sequences is required, such as the GNU utility’s. To test, check for example that printf "\u00A7\n" prints §. To call the utility instead of the shell built-in, replace the occurrences of printf with env printf.
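A quick capability check from a script might look like this (a sketch; bash's printf builtin gained \u support in version 4.2, so older shells need env printf):

if [ "$(printf '\u00A7')" = '§' ]; then
    echo 'printf supports \uHHHH escapes'
else
    echo 'use env printf instead' >&2
fi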
This script uses an additional file, named_entities.sed, in order to support the named entities. It can be generated from the specification using the following HTML page:
<!DOCTYPE html>
<head><meta charset="utf-8" /></head>
<body>
  <p id="sed-script"></p>
  <script type="text/javascript">
    const referenceURL = 'https://html.spec.whatwg.org/entities.json';
    function writeln(element, text) {
      element.appendChild( document.createTextNode(text) );
      element.appendChild( document.createElement("br") );
    }
    (async function(container) {
      const json = await (await fetch(referenceURL)).json();
      container.innerHTML = "";
      writeln(container, "#!/usr/bin/sed -f");
      const addLast = [];
      for (const name in json) {
        const characters = json[name].characters
          .replace("\\", "\\\\")
          .replace("/", "\\/");
        const command = "s/" + name + "/" + characters + "/g";
        if ( name.endsWith(";") ) {
          writeln(container, command);
        } else {
          addLast.push(command);
        }
      }
      for (const command of addLast) { writeln(container, command); }
    })( document.getElementById("sed-script") );
  </script>
</body></html>
Simply open it in a modern browser, and save the resulting page as text as named_entities.sed. This sed script can also be used alone if only named entities are required; in this case it is convenient to give it executable permission so that it can be called directly.
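For example, assuming the generated script was saved as named_entities.sed in the current directory:

chmod +x named_entities.sed
echo 'fish &amp; chips' | ./named_entities.sed    # -> fish & chips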
Now the above shell script can be used as ./html_unescape.sh foo.html, or inside a pipeline reading from standard input.
For example, if for some reason it is needed to process the data by chunks (it might be the case if printf is not a shell built-in and the data to process is large), one could use it as:
nLines=20
seq 1 $nLines $(grep -c $ "$inputFile") | while read n
do sed -n "$n,$((n+nLines-1))p" "$inputFile" | ./html_unescape.sh
done
Explanation of the script follows.
There are three types of escape sequences that need to be supported:
&#D; where D is the decimal value of the escaped character’s Unicode code point;
&#xH; where H is the hexadecimal value of the escaped character’s Unicode code point;
&N; where N is the name of one of the named entities for the escaped character.
The &N; escapes are supported by the generated named_entities.sed script which simply performs the list of substitutions.
The central piece of this method for supporting the code point escapes is the printf utility, which is able to:
print numbers in hexadecimal format, and
print characters from their code point’s hexadecimal value (using the escapes \uHHHH or \UHHHHHHHH).
The first feature, with some help from sed and grep, is used to reduce the &#D; escapes into &#xH; escapes. The shell function htmlEscDec2Hex does that.
The function htmlHexUnescape uses sed to transform the &#xH; escapes into printf’s \u/\U escapes, then uses the second feature to print the unescaped characters.
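A hypothetical round trip through the two functions, with the script above sourced and GNU printf semantics assumed:

printf '%s\n' 'caf&#233; costs &#8364;2' > sample.txt
htmlEscDec2Hex sample.txt | htmlHexUnescape    # -> café costs €2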
I like the Perl answer given in https://stackoverflow.com/a/13161719/1506477.
cat foo.html | perl -MHTML::Entities -pe 'decode_entities($_);'
But, it produced an unequal number of lines on plain text files (and I don't know Perl well enough to debug it).
I like the python answer given in https://stackoverflow.com/a/42672936/1506477 --
python3 -c 'import html, sys; [print(html.unescape(l), end="") for l in sys.stdin]'
but it creates a list [ ... for l in sys.stdin] in memory, which is prohibitive for large files.
Here is another easy pythonic way without buffering in memory: using awkg.
$ echo 'hello &lt; &colon; &quot; world' | \
  awkg -b 'from html import unescape' 'print(unescape(R0))'
hello < : " world
awkg is a python based awk-like line processor. You may install it using pip https://pypi.org/project/awkg/:
pip install awkg
-b is awk's BEGIN{} block that runs once in the beginning.
Here we just did from html import unescape.
Each line record is in R0 variable, for which we did
print(unescape(R0))
Disclaimer:
I am the maintainer of awkg
I have created a sed script based on the list of entities, so it should handle most of the entities.
sed -f htmlentities.sed < file.html
My original answer got some comments that recode does not work for UTF-8-encoded HTML files. This is correct: recode supports only HTML 4. The encoding HTML is an alias for HTML_4.0:
$ recode -l | grep -iw html
HTML-i18n 2070 RFC2070
HTML_4.0 h h4 HTML
The default encoding for HTML 4 is Latin-1. This changed in HTML 5: the default encoding for HTML 5 is UTF-8. This is why recode does not work for HTML 5 files.
HTML 5 defines the list of entities here:
https://html.spec.whatwg.org/multipage/named-characters.html
The definition includes a machine readable specification in JSON format:
https://html.spec.whatwg.org/entities.json
The JSON file can be used to perform a simple text replacement. The following example is a self modifying Perl script, which caches the JSON specification in its DATA chunk.
Note: For some obscure compatibility reasons, the specification allows entities without a terminating semicolon. Because of that, the entities are sorted by length in reverse order, to make sure that the correct entities are replaced first and do not get destroyed by their variants without the ending semicolon.
#! /usr/bin/perl
use utf8;
use strict;
use warnings;
use open qw(:std :utf8);
use LWP::Simple;
use JSON::Parse qw(parse_json);

my $entities;

INIT {
    if (eof DATA) {
        my $data = tell DATA;
        open DATA, '+<', $0;
        seek DATA, $data, 0;
        my $entities_json = get 'https://html.spec.whatwg.org/entities.json';
        print DATA $entities_json;
        truncate DATA, tell DATA;
        close DATA;
        $entities = parse_json ($entities_json);
    } else {
        local $/ = undef;
        $entities = parse_json (<DATA>);
    }
}

local $/ = undef;
my $html = <>;

for my $entity (sort { length $b <=> length $a } keys %$entities) {
    my $characters = $entities->{$entity}->{characters};
    $html =~ s/$entity/$characters/g;
}

print $html;
__DATA__
Example usage:
$ echo '😊 &amp; ٱلْعَرَبِيَّة' | ./html5-to-utf8.pl
😊 & ٱلْعَرَبِيَّة
With Xidel:
echo 'hello < : " world' | xidel -s - -e 'parse-html($raw)'
hello < : " world

How can I get the content of values under key images in a YAML file from the shell

YAML file like this:
http:
  Domain: {{ environment.domains.httpport }}
  images:
    emas_fe_weex: 20170810-ed0b13f
    eweex_basic_manager: 20150109-e0fafa3
  replicaCount:
    xxxx: 1
  resources:
    {}
How can I get the following with shell?
emas_fe_weex: 20170810-ed0b13f
eweex_basic_manager: 20150109-e0fafa3
It is best to process YAML with a YAML parser e.g. with Python and ruamel.yaml (disclaimer: I am the author of that package). With the input in input.yaml:
< input.yaml python -c "import sys, ruamel.yaml; yaml=ruamel.yaml.YAML(); yaml.dump(yaml.load(sys.stdin)['http']['images'], sys.stdout)"
will output:
emas_fe_weex: 20170810-ed0b13f
eweex_basic_manager: 20150109-e0fafa3
I agree with Anthon: YAML is sufficiently complicated to require use of a YAML parser (like XML, JSON, CSV, etc)
Here are a few examples with other scripting languages, depending on your taste:
Ruby
ruby -ryaml -e '
data = YAML.load($stdin)
puts YAML.dump(data["http"]["images"])
' < file.yaml
Perl (requires YAML::XS from CPAN)
perl -MYAML::XS -0777 -nE '
$data = Load($_);
say Dump($data->{http}{images})
' < file.yaml
Tcl (requires tcllib)
echo '
package require yaml
set data [yaml::yaml2dict -file "file.yaml"]
puts [yaml::dict2yaml [dict get $data http images]]
' | tclsh
If you are using a system where grep is available, you can get them both with it. Assuming the data is in a file called http.yaml:
grep -e emas_fe_weex -e eweex_basic_manager http.yaml

converting lines to json in bash

I would like to convert a list into JSON array. I'm looking at jq for this but the examples are mostly about parsing JSON (not creating it). It would be nice to know proper escaping will occur. My list is single line elements so the new line will probably be the best delimiter.
I was also trying to convert a bunch of lines into a JSON array, and was at a standstill until I realized that -s was the only way I could handle more than one line at a time in the jq expression, even if that meant I'd have to parse the newlines manually.
jq -R -s -c 'split("\n")' < just_lines.txt
-R to read raw input
-s to read all input as a single string
-c to not pretty print the output
Easy peasy.
Edit: I'm on jq ≥ 1.4, which is apparently when the split built-in was introduced.
--raw-input, then --slurp
Just summarizing what the others have said in a hopefully quicker to understand form:
cat /etc/hosts | jq --raw-input . | jq --slurp .
will return you:
[
  "fe00::0 ip6-localnet",
  "ff00::0 ip6-mcastprefix",
  "ff02::1 ip6-allnodes",
  "ff02::2 ip6-allrouters"
]
Explanation
--raw-input/-R:
Don't parse the input as JSON. Instead, each line of text is passed
to the filter as a string. If combined with --slurp, then the
entire input is passed to the filter as a single long string.
--slurp/-s:
Instead of running the filter for each JSON object in the input,
read the entire input stream into a large array and run the filter
just once.
You can also use jq -R . to format each line as a JSON string and then jq -s (--slurp) to create an array for the input lines after parsing them as JSON:
$ printf %s\\n aa bb|jq -R .|jq -s .
[
  "aa",
  "bb"
]
The method in chbrown's answer adds an empty element to the end if the input ends with a linefeed, but you can use printf %s "$(cat)" to remove trailing linefeeds:
$ printf %s\\n aa bb|jq -R -s 'split("\n")'
[
  "aa",
  "bb",
  ""
]
$ printf %s\\n aa bb|printf %s "$(cat)"|jq -R -s 'split("\n")'
[
  "aa",
  "bb"
]
If the input lines don't contain ASCII control characters (which have to be escaped in strings in valid JSON), you can use sed:
$ printf %s\\n aa bb|sed 's/["\]/\\&/g;s/.*/"&"/;1s/^/[/;$s/$/]/;$!s/$/,/'
["aa",
"bb"]
Update: If your jq has inputs you can simply write:
jq -nR '[inputs]' /etc/hosts
to produce a JSON array of strings. This avoids having to read the text file as a whole.
I found in the man page for jq and through experimentation what seems to me to be a simpler answer.
$ cat test_file.txt | jq -Rsc '. / "\n" - [""]'
["aa","bb"]
The -R is to read without trying to parse json, the -s says to read all of the input as one string, and the -c is for one-line output - not necessary, but it's what I was looking for.
Then in the string I pass to jq, the '.' says take the input as it is. The '/ \n' says to divide the string (split it) on newlines. The '- [""]' says to remove from the resulting array any empty strings (resulting from an extra newline at the end).
It's one line and without any complicated constructs, using just simple built in jq features.

Parsing Json data columnwise in shell

When I run a command I get a response like this
{
  "status": "available",
  "managed": true,
  "name": vdisk7,
  "support": {
    "status": "supported"
  },
  "storage_pool": "pfm9253_pfm9254_new",
  "id": "ff10abad"-2bf-4ef3-9038-9ae7f18ea77c",
  "size": 100
},
and hundreds more of these lists or dictionaries.
I want a command that does this sort of thing:
if name = "something",
get the id
Any links that would help me learn this sort of command would be highly appreciated.
I have tried
awk '{if ($2 == "something") print $0;}'
But I think the response is in JSON, so the column-wise awk formatting is not working.
Also it's just a single command that I need to run so I would prefer not to use any external library.
A JSON parser is better for this task
awk and sed are utilities to parse line-oriented text, not JSON. What if your JSON formatting changes (e.g. several values end up on one line)?
You should use any standard JSON parser out there, or use some powerful scripting language, such as PHP, Python, Ruby, etc.
I can provide you with an example of how to do it with Python.
What if I can't use a powerful scripting language?
If you are totally unable to use Python, then there is the jq utility out there: link
If you have a recent distro, jq may already be in the repositories (for example, Ubuntu 13.10 has it in repos).
I can use python!
I would do that using simple python inline script.
For example, suppose we have some command some_command that returns JSON as a result.
We have to get value of data["name"].
Here we go:
some_command | python -c "import json, sys; print(json.load(sys.stdin)['name'])"
It will output vdisk7 in your case
For this to work, you need to be sure the JSON is fully valid.
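One way to verify validity first is Python's bundled json.tool module, which exits non-zero on a parse error:

some_command | python -m json.tool > /dev/null && echo "valid JSON"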
If you have a list of json objects:
[
  {
    ...
    "name": "vdisk17"
    ...
  },
  {
    ...
    "name": "vdisk18"
    ...
  },
  {
    ...
    "name": "vdisk19"
    ...
  },
  ...
]
You could use some list comprehensions:
some_command | python -c "import json, sys; [sys.stdout.write(x['name'] + '\n') for x in json.load(sys.stdin)]"
It will output:
vdisk17
vdisk18
vdisk19

How can I parse a YAML file from a Linux shell script?

I wish to provide a structured configuration file which is as easy as possible for a non-technical user to edit (unfortunately it has to be a file) and so I wanted to use YAML. I can't find any way of parsing this from a Unix shell script however.
Here is a bash-only parser that leverages sed and awk to parse simple yaml files:
function parse_yaml {
   local prefix=$2
   local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
   sed -ne "s|^\($s\):|\1|" \
       -e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
       -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
   awk -F$fs '{
      indent = length($1)/2;
      vname[indent] = $2;
      for (i in vname) {if (i > indent) {delete vname[i]}}
      if (length($3) > 0) {
         vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
         printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
      }
   }'
}
It understands files such as:
## global definitions
global:
  debug: yes
  verbose: no
  debugging:
    detailed: no
    header: "debugging started"

## output
output:
  file: "yes"
Which, when parsed using:
parse_yaml sample.yml
will output:
global_debug="yes"
global_verbose="no"
global_debugging_detailed="no"
global_debugging_header="debugging started"
output_file="yes"
it also understands yaml files, generated by ruby which may include ruby symbols, like:
---
:global:
  :debug: 'yes'
  :verbose: 'no'
  :debugging:
    :detailed: 'no'
    :header: debugging started
  :output: 'yes'
and will output the same as in the previous example.
typical use within a script is:
eval $(parse_yaml sample.yml)
parse_yaml accepts a prefix argument so that imported settings all have a common prefix (which will reduce the risk of namespace collisions).
parse_yaml sample.yml "CONF_"
yields:
CONF_global_debug="yes"
CONF_global_verbose="no"
CONF_global_debugging_detailed="no"
CONF_global_debugging_header="debugging started"
CONF_output_file="yes"
Note that previous settings in a file can be referred to by later settings:
## global definitions
global:
  debug: yes
  verbose: no
  debugging:
    detailed: no
    header: "debugging started"

## output
output:
  debug: $global_debug
Another nice usage is to first parse a defaults file and then the user settings, which works since the latter settings override the former:
eval $(parse_yaml defaults.yml)
eval $(parse_yaml project.yml)
I've written shyaml in python for YAML query needs from the shell command line.
Overview:
$ pip install shyaml ## installation
Example's YAML file (with complex features):
$ cat <<EOF > test.yaml
name: "MyName !!"
subvalue:
    how-much: 1.1
    things:
        - first
        - second
        - third
    other-things: [a, b, c]
    maintainer: "Valentin Lab"
    description: |
        Multiline description:
        Line 1
        Line 2
EOF
Basic query:
$ cat test.yaml | shyaml get-value subvalue.maintainer
Valentin Lab
More complex looping query on complex values:
$ cat test.yaml | shyaml values-0 | \
  while read -r -d $'\0' value; do
    echo "RECEIVED: '$value'"
  done
RECEIVED: '1.1'
RECEIVED: '- first
- second
- third'
RECEIVED: '2'
RECEIVED: 'Valentin Lab'
RECEIVED: 'Multiline description:
Line 1
Line 2'
A few key points:
all YAML types and syntax oddities are correctly handled, as multiline, quoted strings, inline sequences...
\0 padded output is available for solid multiline entry manipulation.
simple dotted notation to select sub-values (ie: subvalue.maintainer is a valid key).
access by index is provided to sequences (ie: subvalue.things.-1 is the last element of the subvalue.things sequence.)
access to all sequence/structs elements in one go for use in bash loops.
you can output a whole subpart of a YAML file as ... YAML, which blends well with further manipulations with shyaml.
More sample and documentation are available on the shyaml github page or the shyaml PyPI page.
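As a small illustration of that last point, a subtree extracted as YAML can be fed straight back into shyaml (using the test.yaml from above):

$ cat test.yaml | shyaml get-value subvalue | shyaml get-value how-much
1.1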
yq is a lightweight and portable command-line YAML processor
The aim of the project is to be the jq or sed of yaml files.
(https://github.com/mikefarah/yq#readme)
As an example (stolen straight from the documentation), given a sample.yaml file of:
---
bob:
  item1:
    cats: bananas
  item2:
    cats: apples
then
yq eval '.bob.*.cats' sample.yaml
will output
- bananas
- apples
My use case may or may not be quite the same as what this original post was asking, but it's definitely similar.
I need to pull in some YAML as bash variables. The YAML will never be more than one level deep.
YAML looks like so:
KEY: value
ANOTHER_KEY: another_value
OH_MY_SO_MANY_KEYS: yet_another_value
LAST_KEY: last_value
Output like-a dis:
KEY="value"
ANOTHER_KEY="another_value"
OH_MY_SO_MANY_KEYS="yet_another_value"
LAST_KEY="last_value"
I achieved the output with this line:
sed -e 's/:[^:\/\/]/="/g;s/$/"/g;s/ *=/=/g' file.yaml > file.sh
s/:[^:\/\/]/="/g finds : and replaces it with =", while ignoring :// (for URLs)
s/$/"/g appends " to the end of each line
s/ *=/=/g removes all spaces before =
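Under that flat one-level assumption, the generated file can then be sourced directly:

sed -e 's/:[^:\/\/]/="/g;s/$/"/g;s/ *=/=/g' file.yaml > file.sh
. ./file.sh
echo "$KEY"    # -> value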
Given that Python3 and PyYAML are quite easy dependencies to meet nowadays, the following may help:
yaml() {
    python3 -c "import yaml;print(yaml.safe_load(open('$1'))$2)"
}

VALUE=$(yaml ~/my_yaml_file.yaml "['a_key']")
It's possible to pass a small script to some interpreters, like Python. An easy way to do so using Ruby and its YAML library is the following:
$ RUBY_SCRIPT="data = YAML::load(STDIN.read); puts data['a']; puts data['b']"
$ echo -e '---\na: 1234\nb: 4321' | ruby -ryaml -e "$RUBY_SCRIPT"
1234
4321
where data is a hash (or array) with the values from the YAML.
As a bonus, it'll parse Jekyll's front matter just fine.
ruby -ryaml -e "puts YAML::load(open(ARGV.first).read)['tags']" example.md
Here is an extended version of Stefan Farestam's answer:
function parse_yaml {
   local prefix=$2
   local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
   sed -ne "s|,$s\]$s\$|]|" \
       -e ":1;s|^\($s\)\($w\)$s:$s\[$s\(.*\)$s,$s\(.*\)$s\]|\1\2: [\3]\n\1  - \4|;t1" \
       -e "s|^\($s\)\($w\)$s:$s\[$s\(.*\)$s\]|\1\2:\n\1  - \3|;p" $1 | \
   sed -ne "s|,$s}$s\$|}|" \
       -e ":1;s|^\($s\)-$s{$s\(.*\)$s,$s\($w\)$s:$s\(.*\)$s}|\1- {\2}\n\1  \3: \4|;t1" \
       -e "s|^\($s\)-$s{$s\(.*\)$s}|\1-\n\1  \2|;p" | \
   sed -ne "s|^\($s\):|\1|" \
       -e "s|^\($s\)-$s[\"']\(.*\)[\"']$s\$|\1$fs$fs\2|p" \
       -e "s|^\($s\)-$s\(.*\)$s\$|\1$fs$fs\2|p" \
       -e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
       -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" | \
   awk -F$fs '{
      indent = length($1)/2;
      vname[indent] = $2;
      for (i in vname) {if (i > indent) {delete vname[i]; idx[i]=0}}
      if(length($2)== 0){  vname[indent]= ++idx[indent] };
      if (length($3) > 0) {
         vn=""; for (i=0; i<indent; i++) { vn=(vn)(vname[i])("_")}
         printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, vname[indent], $3);
      }
   }'
}
This version supports the - notation and the short notation for dictionaries and lists. The following input:
global:
  input:
    - "main.c"
    - "main.h"
  flags: [ "-O3", "-fpic" ]
  sample_input:
    - { property1: value, property2: "value2" }
    - { property1: "value3", property2: 'value 4' }
produces this output:
global_input_1="main.c"
global_input_2="main.h"
global_flags_1="-O3"
global_flags_2="-fpic"
global_sample_input_1_property1="value"
global_sample_input_1_property2="value2"
global_sample_input_2_property1="value3"
global_sample_input_2_property2="value 4"
as you can see the - items automatically get numbered in order to obtain different variable names for each item. In bash there are no multidimensional arrays, so this is one way to work around. Multiple levels are supported.
To work around the problem with trailing white spaces mentioned by @briceburg one should enclose the values in single or double quotes. However, there are still some limitations: expansion of the dictionaries and lists can produce wrong results when values contain commas. Also, more complex structures like values spanning multiple lines (like ssh-keys) are not (yet) supported.
A few words about the code: The first sed command expands the short form of dictionaries { key: value, ...} to regular and converts them to more simple yaml style. The second sed call does the same for the short notation of lists and converts [ entry, ... ] to an itemized list with the - notation. The third sed call is the original one that handled normal dictionaries, now with the addition to handle lists with - and indentations. The awk part introduces an index for each indentation level and increases it when the variable name is empty (i.e. when processing a list). The current value of the counters are used instead of the empty vname. When going up one level, the counters are zeroed.
Edit: I have created a github repository for this.
Moving my answer from How to convert a json response into yaml in bash, since this seems to be the authoritative post on dealing with YAML text parsing from command line.
I would like to add details about the yq YAML implementations. Since there are two implementations of this YAML parser lying around, both named yq, it is hard to tell which one is in use without looking at the implementation's DSL. The two available implementations are
kislyuk/yq - The more often talked about version, which is a wrapper over jq, written in Python using the PyYAML library for YAML parsing
mikefarah/yq - A Go implementation, with its own dynamic DSL using the go-yaml v3 parser.
Both are available for installation via standard installation package managers on almost all major distributions
kislyuk/yq - Installation instructions
mikefarah/yq - Installation instructions
Both the versions have some pros and cons over the other, but a few valid points to highlight (adopted from their repo instructions)
kislyuk/yq
Since the DSL is the adopted completely from jq, for users familiar with the latter, the parsing and manipulation becomes quite straightforward
Supports a mode to preserve YAML tags and styles, but loses comments during the conversion: since jq doesn't preserve comments, they are lost in a round-trip conversion.
As part of the package, XML support is built in. An executable, xq, which transcodes XML to JSON using xmltodict and pipes it to jq, on which you can apply the same DSL to perform CRUD operations on the objects and round-trip the output back to XML.
Supports in-place edit mode with -i flag (similar to sed -i)
mikefarah/yq
Prone to frequent changes in DSL, migration from 2.x - 3.x
Rich support for anchors, styles and tags. But lookout for bugs once in a while
A relatively simple Path expression syntax to navigate and match yaml nodes
Supports YAML->JSON, JSON->YAML formatting and pretty printing YAML (with comments)
Supports in-place edit mode with -i flag (similar to sed -i)
Supports coloring the output YAML with -C flag (not applicable for JSON output) and indentation of the sub elements (default at 2 spaces)
Supports Shell completion for most shells - Bash, zsh (because of powerful support from spf13/cobra used to generate CLI flags)
My take on the following YAML (referenced in other answer as well) with both the versions
root_key1: this is value one
root_key2: "this is value two"
drink:
  state: liquid
  coffee:
    best_served: hot
    colour: brown
  orange_juice:
    best_served: cold
    colour: orange
food:
  state: solid
  apple_pie:
    best_served: warm
root_key_3: this is value three
Various actions to be performed with both the implementations (some frequently used operations)
Modifying node value at root level - Change value of root_key2
Modifying array contents, adding value - Add property to coffee
Modifying array contents, deleting value - Delete property from orange_juice
Printing key/value pairs with paths - For all items under food
Using kislyuk/yq
yq -y '.root_key2 |= "this is a new value"' yaml
yq -y '.drink.coffee += { time: "always"}' yaml
yq -y 'del(.drink.orange_juice.colour)' yaml
yq -r '.food|paths(scalars) as $p | [($p|join(".")), (getpath($p)|tojson)] | @tsv' yaml
Which is pretty straightforward. All you need is to transcode jq JSON output back into YAML with the -y flag.
Using mikefarah/yq
yq w yaml root_key2 "this is a new value"
yq w yaml drink.coffee.time "always"
yq d yaml drink.orange_juice.colour
yq r yaml --printMode pv "food.**"
As of today, Dec 21st 2020, yq v4 is in beta and supports much more powerful path expressions, with a DSL similar to jq's. Read the transition notes - Upgrading from V3
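Roughly equivalent v4 commands for the first three operations above would look like this (an untested sketch based on the v4 expression syntax):

yq e '.root_key2 = "this is a new value"' yaml
yq e '.drink.coffee.time = "always"' yaml
yq e 'del(.drink.orange_juice.colour)' yaml

The path-printing query maps onto v4's path operator rather than --printMode.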
I just wrote a parser that I called Yay! (Yaml ain't Yamlesque!) which parses Yamlesque, a small subset of YAML. So, if you're looking for a 100% compliant YAML parser for Bash then this isn't it. However, to quote the OP, if you want a structured configuration file which is as easy as possible for a non-technical user to edit that is YAML-like, this may be of interest.
It's inspired by the earlier answer but writes associative arrays (yes, it requires Bash 4.x) instead of basic variables. It does so in a way that allows the data to be parsed without prior knowledge of the keys, so that data-driven code can be written.
As well as the key/value array elements, each array has a keys array containing a list of key names, a children array containing names of child arrays and a parent key that refers to its parent.
This is an example of Yamlesque:
root_key1: this is value one
root_key2: "this is value two"
drink:
  state: liquid
  coffee:
    best_served: hot
    colour: brown
  orange_juice:
    best_served: cold
    colour: orange
food:
  state: solid
  apple_pie:
    best_served: warm
root_key_3: this is value three
Here is an example showing how to use it:
#!/bin/bash
# An example showing how to use Yay
. /usr/lib/yay
# helper to get array value at key
value() { eval echo \${$1[$2]}; }
# print a data collection
print_collection() {
  for k in $(value $1 keys)
  do
    echo "$2$k = $(value $1 $k)"
  done
  for c in $(value $1 children)
  do
    echo -e "$2$c\n$2{"
    print_collection $c "  $2"
    echo "$2}"
  done
}
yay example
print_collection example
which outputs:
root_key1 = this is value one
root_key2 = this is value two
root_key_3 = this is value three
example_drink
{
  state = liquid
  example_coffee
  {
    best_served = hot
    colour = brown
  }
  example_orange_juice
  {
    best_served = cold
    colour = orange
  }
}
example_food
{
  state = solid
  example_apple_pie
  {
    best_served = warm
  }
}
And here is the parser:
yay_parse() {

   # find input file
   for f in "$1" "$1.yay" "$1.yml"
   do
      [[ -f "$f" ]] && input="$f" && break
   done
   [[ -z "$input" ]] && exit 1

   # use given dataset prefix or imply from file name
   [[ -n "$2" ]] && local prefix="$2" || {
      local prefix=$(basename "$input"); prefix=${prefix%.*}
   }

   echo "declare -g -A $prefix;"

   local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
   sed -n -e "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
          -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" "$input" |
   awk -F$fs '{
      indent       = length($1)/2;
      key          = $2;
      value        = $3;

      # No prefix or parent for the top level (indent zero)
      root_prefix  = "'$prefix'_";
      if (indent == 0) {
         prefix = "";          parent_key = "'$prefix'";
      } else {
         prefix = root_prefix; parent_key = keys[indent-1];
      }

      keys[indent] = key;

      # remove keys left behind if prior row was indented more than this row
      for (i in keys) {if (i > indent) {delete keys[i]}}

      if (length(value) > 0) {
         # value
         printf("%s%s[%s]=\"%s\";\n", prefix, parent_key, key, value);
         printf("%s%s[keys]+=\" %s\";\n", prefix, parent_key, key);
      } else {
         # collection
         printf("%s%s[children]+=\" %s%s\";\n", prefix, parent_key, root_prefix, key);
         printf("declare -g -A %s%s;\n", root_prefix, key);
         printf("%s%s[parent]=\"%s%s\";\n", root_prefix, key, prefix, parent_key);
      }
   }'
}

# helper to load yay data file
yay() { eval $(yay_parse "$@"); }
There is some documentation in the linked source file and below is a short explanation of what the code does.
The yay_parse function first locates the input file or exits with an exit status of 1. Next, it determines the dataset prefix, either explicitly specified or derived from the file name.
It writes valid bash commands to its standard output that, if executed, define arrays representing the contents of the input data file. The first of these defines the top-level array:
echo "declare -g -A $prefix;"
Note that array declarations are associative (-A) which is a feature of Bash version 4. Declarations are also global (-g) so they can be executed in a function but be available to the global scope like the yay helper:
yay() { eval $(yay_parse "$@"); }
The input data is initially processed with sed. It drops lines that don't match the Yamlesque format specification before delimiting the valid Yamlesque fields with an ASCII File Separator character and removing any double-quotes surrounding the value field.
local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
sed -n -e "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
       -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" "$input" |
The two expressions are similar; they differ only because the first one picks out quoted values whereas the second one picks out unquoted ones.
The File Separator (decimal 28, hex 1C, octal 034) is used because, as a non-printable character, it is unlikely to be in the input data.
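To see the byte actually produced (od is used here only to inspect it):

fs=$(echo @|tr @ '\034')
printf '%s' "$fs" | od -An -to1    # -> 034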
The result is piped into awk which processes its input one line at a time. It uses the FS character to assign each field to a variable:
indent = length($1)/2;
key = $2;
value = $3;
All lines have an indent (possibly zero) and a key but they don't all have a value. It computes an indent level for the line dividing the length of the first field, which contains the leading whitespace, by two. The top level items without any indent are at indent level zero.
Next, it works out what prefix to use for the current item. This is what gets added to a key name to make an array name. There's a root_prefix for the top-level array which is defined as the data set name and an underscore:
root_prefix = "'$prefix'_";
if (indent ==0 ) {
prefix = ""; parent_key = "'$prefix'";
} else {
prefix = root_prefix; parent_key = keys[indent-1];
}
The parent_key is the key at the indent level above the current line's indent level and represents the collection that the current line is part of. The collection's key/value pairs will be stored in an array with its name defined as the concatenation of the prefix and parent_key.
For the top level (indent level zero) the data set prefix is used as the parent key so it has no prefix (it's set to ""). All other arrays are prefixed with the root prefix.
Next, the current key is inserted into an (awk-internal) array containing the keys. This array persists throughout the whole awk session and therefore contains keys inserted by prior lines. The key is inserted into the array using its indent as the array index.
keys[indent] = key;
Because this array contains keys from previous lines, any keys with an indent level greater than the current line's indent level are removed:
for (i in keys) {if (i > indent) {delete keys[i]}}
This leaves the keys array containing the key-chain from the root at indent level 0 to the current line. It removes stale keys that remain when the prior line was indented deeper than the current line.
The final section outputs the bash commands: an input line without a value starts a new indent level (a collection in YAML parlance) and an input line with a value adds a key to the current collection.
The collection's name is the concatenation of the current line's prefix and parent_key.
When a key has a value, a key with that value is assigned to the current collection like this:
printf("%s%s[%s]=\"%s\";\n", prefix, parent_key , key, value);
printf("%s%s[keys]+=\" %s\";\n", prefix, parent_key , key);
The first statement outputs the command to assign the value to an associative array element named after the key and the second one outputs the command to add the key to the collection's space-delimited keys list:
<current_collection>[<key>]="<value>";
<current_collection>[keys]+=" <key>";
When a key doesn't have a value, a new collection is started like this:
printf("%s%s[children]+=\" %s%s\";\n", prefix, parent_key , root_prefix, key);
printf("declare -g -A %s%s;\n", root_prefix, key);
The first statement outputs the command to add the new collection to the current's collection's space-delimited children list and the second one outputs the command to declare a new associative array for the new collection:
<current_collection>[children]+=" <new_collection>"
declare -g -A <new_collection>;
All of the output from yay_parse can be parsed as bash commands by the bash eval or source built-in commands.
Hard to say because it depends on what you want the parser to extract from your YAML document. For simple cases, you might be able to use grep, cut, awk etc. For more complex parsing you would need to use a full-blown parsing library such as Python's PyYAML or YAML::Perl.
Another option is to convert the YAML to JSON, then use jq to interact with the JSON representation either to extract information from it or edit it.
I wrote a simple bash script that contains this glue - see Y2J project on GitHub
perl -ne 'chomp; printf qq/%s="%s"\n/, split(/\s*:\s*/,$_,2)' file.yml > file.sh
I used to convert yaml to json using python and do my processing in jq.
python -c "import yaml; import json; from pathlib import Path; print(json.dumps(yaml.safe_load(Path('file.yml').read_text())))" | jq '.'
If you need a single value, you could use a tool which converts your YAML document to JSON and feed it to jq, for example yq.
Content of sample.yaml:
---
bob:
  item1:
    cats: bananas
  item2:
    cats: apples
  thing:
    cats: oranges
Example:
$ yq -r '.bob["thing"]["cats"]' sample.yaml
oranges
I know this is very specific, but I think my answer could be helpful for certain users.
If you have node and npm installed on your machine, you can use js-yaml.
First install :
npm i -g js-yaml
# or locally
npm i js-yaml
then in your bash script
#!/bin/bash
js-yaml your-yaml-file.yml
Also if you are using jq you can do something like that
#!/bin/bash
json="$(js-yaml your-yaml-file.yml)"
aproperty="$(jq '.apropery' <<< "$json")"
echo "$aproperty"
Because js-yaml converts a yaml file to a json string literal. You can then use the string with any json parser in your unix system.
A quick way to do the thing now (previous ones haven't worked for me):
sudo wget https://github.com/mikefarah/yq/releases/download/v4.4.1/yq_linux_amd64 -O /usr/bin/yq &&\
sudo chmod +x /usr/bin/yq
Example asd.yaml:
a_list:
  - key1: value1
    key2: value2
    key3: value3
parsing root:
user@vm:~$ yq e '.' asd.yaml
a_list:
  - key1: value1
    key2: value2
    key3: value3
parsing key3:
user@vm:~$ yq e '.a_list[0].key3' asd.yaml
value3
If you have python 2 and PyYAML, you can use this parser I wrote called parse_yaml.py. Some of the neater things it does is let you choose a prefix (in case you have more than one file with similar variables) and to pick a single value from a yaml file.
For example if you have these yaml files:
staging.yaml:
db:
  type: sqllite
  host: 127.0.0.1
  user: dev
  password: password123
prod.yaml:
db:
  type: postgres
  host: 10.0.50.100
  user: postgres
  password: password123
You can load both without conflict.
$ eval $(python parse_yaml.py prod.yaml --prefix prod --cap)
$ eval $(python parse_yaml.py staging.yaml --prefix stg --cap)
$ echo $PROD_DB_HOST
10.0.50.100
$ echo $STG_DB_HOST
127.0.0.1
And even cherry pick the values you want.
$ prod_user=$(python parse_yaml.py prod.yaml --get db_user)
$ prod_port=$(python parse_yaml.py prod.yaml --get db_port --default 5432)
$ echo $prod_user
postgres
$ echo $prod_port
5432
You could use an equivalent of yq that is written in golang:
./go-yq -yamlFile /home/user/dev/ansible-firefox/defaults/main.yml -key firefox_version
returns:
62.0.3
Whenever you need a solution for "How to work with YAML/JSON/compatible data from a shell script" which works on just about every OS with Python (*nix, OSX, Windows), consider yamlpath, which provides several command-line tools for reading, writing, searching, and merging YAML, EYAML, JSON, and compatible files. Since just about every OS either comes with Python pre-installed or it is trivial to install, this makes yamlpath highly portable. Even more interesting: this project defines an intuitive path language with very powerful, command-line-friendly syntax that enables accessing one or more nodes.
To your specific question and after installing yamlpath using Python's native package manager or your OS's package manager (yamlpath is available via RPM to some OSes):
#!/bin/bash
# Read values directly from YAML (or EYAML, JSON, etc) for use in this shell script:
myShellVar=$(yaml-get --query=any.path.no[matter%how].complex source-file.yaml)
# Use the value any way you need:
echo "Retrieved ${myShellVar}"
# Perhaps change the value and write it back:
myShellVar="New Value"
yaml-set --change=/any/path/no[matter%how]/complex --value="$myShellVar" source-file.yaml
You didn't specify that the data was a simple Scalar value though, so let's up the ante. What if the result you want is an Array? Even more challenging, what if it's an Array-of-Hashes and you only want one property of each result? Suppose further that your data is actually spread out across multiple YAML files and you need all the results in a single query. That's a much more interesting question to demonstrate with. So, suppose you have these two YAML files:
File: data1.yaml
---
baubles:
  - name: Doohickey
    sku: 0-000-1
    price: 4.75
    weight: 2.7g
  - name: Doodad
    sku: 0-000-2
    price: 10.5
    weight: 5g
  - name: Oddball
    sku: 0-000-3
    price: 25.99
    weight: 25kg
File: data2.yaml
---
baubles:
  - name: Fob
    sku: 0-000-4
    price: 0.99
    weight: 18mg
  - name: Doohickey
    price: 10.5
  - name: Oddball
    sku: 0-000-3
    description: This ball is odd
How would you report only the sku of every item in inventory after applying the changes from data2.yaml to data1.yaml, all from a shell script? Try this:
#!/bin/bash
baubleSKUs=($(yaml-merge --aoh=deep data1.yaml data2.yaml | yaml-get --query=/baubles/sku -))
for sku in "${baubleSKUs[#]}"; do
echo "Found bauble SKU: ${sku}"
done
You get exactly what you need from only a few lines of code:
Found bauble SKU: 0-000-1
Found bauble SKU: 0-000-2
Found bauble SKU: 0-000-3
Found bauble SKU: 0-000-4
As you can see, yamlpath turns very complex problems into trivial solutions. Note that the entire query was handled as a stream; no YAML files were changed by the query and there were no temp files.
I realize this is "yet another tool to solve the same question" but after reading the other answers here, yamlpath appears more portable and robust than most alternatives. It also fully understands YAML/JSON/compatible files and it does not need to convert YAML to JSON to perform requested operations. As such, comments within the original YAML file are preserved whenever you need to change data in the source YAML file. Like some alternatives, yamlpath is also portable across OSes. More importantly, yamlpath defines a query language that is extremely powerful, enabling very specialized/filtered data queries. It can even operate against results from disparate parts of the file in a single query.
If you want to get or set many values in the data at once -- including complex data like hashes/arrays/maps/lists -- yamlpath can do that. Want a value but don't know precisely where it is in the document? yamlpath can find it and give you the exact path(s). Need to merge multiple data file together, including from STDIN? yamlpath does that, too. Further, yamlpath fully comprehends YAML anchors and their aliases, always giving or changing exactly the data you expect whether it is a concrete or referenced value.
Disclaimer: I wrote and maintain yamlpath, which is based on ruamel.yaml, which is in turn based on PyYAML. As such, yamlpath is fully standards-compliant.
Complex parsing is easiest with a library such as Python's PyYAML or YAML::Perl.
If you want to parse all the YAML values into bash values, try this script. This will handle comments as well. See example usage below:
# pparse.py
import yaml
import sys

def parse_yaml(yml, name=''):
    if isinstance(yml, list):
        for data in yml:
            parse_yaml(data, name)
    elif isinstance(yml, dict):
        if (len(yml) == 1) and not isinstance(yml[list(yml.keys())[0]], list):
            print(str(name+'_'+list(yml.keys())[0]+'='+str(yml[list(yml.keys())[0]]))[1:])
        else:
            for key in yml:
                parse_yaml(yml[key], name+'_'+key)

if __name__=="__main__":
    yml = yaml.safe_load(open(sys.argv[1]))
    parse_yaml(yml)
test.yml
- folders:
    - temp_folder: datasets/outputs/tmp
    - keep_temp_folder: false
- MFA:
    - MFA: false
    - speaker_count: 1
    - G2P:
        - G2P: true
        - G2P_model: models/MFA/G2P/english_g2p.zip
        - input_folder: datasets/outputs/Youtube/ljspeech/wavs
        - output_dictionary: datasets/outputs/Youtube/ljspeech/dictionary.dict
    - dictionary: datasets/outputs/Youtube/ljspeech/dictionary.dict
    - acoustic_model: models/MFA/acoustic/english.zip
    - temp_folder: datasets/outputs/tmp
    - jobs: 4
    - align:
        - config: configs/MFA/align.yaml
        - dataset: datasets/outputs/Youtube/ljspeech/wavs
        - output_folder: datasets/outputs/Youtube/ljspeech-aligned
- TTS:
    - output_folder: datasets/outputs/Youtube
    - preprocess:
        - preprocess: true
        - config: configs/TTS_preprocess.yaml # Default Config
        - textgrid_folder: datasets/outputs/Youtube/ljspeech-aligned
        - output_duration_folder: datasets/outputs/Youtube/durations
        - sampling_rate: 44000 # Make sure sampling rate is same here as in preprocess config
Script where YAML values are needed:
yaml() {
    eval $(python pparse.py "$1")
}
yaml "test.yml"
# What python printed to bash:
folders_temp_folder=datasets/outputs/tmp
folders_keep_temp_folder=False
MFA_MFA=False
MFA_speaker_count=1
MFA_G2P_G2P=True
MFA_G2P_G2P_model=models/MFA/G2P/english_g2p.zip
MFA_G2P_input_folder=datasets/outputs/Youtube/ljspeech/wavs
MFA_G2P_output_dictionary=datasets/outputs/Youtube/ljspeech/dictionary.dict
MFA_dictionary=datasets/outputs/Youtube/ljspeech/dictionary.dict
MFA_acoustic_model=models/MFA/acoustic/english.zip
MFA_temp_folder=datasets/outputs/tmp
MFA_jobs=4
MFA_align_config=configs/MFA/align.yaml
MFA_align_dataset=datasets/outputs/Youtube/ljspeech/wavs
MFA_align_output_folder=datasets/outputs/Youtube/ljspeech-aligned
TTS_output_folder=datasets/outputs/Youtube
TTS_preprocess_preprocess=True
TTS_preprocess_config=configs/TTS_preprocess.yaml
TTS_preprocess_textgrid_folder=datasets/outputs/Youtube/ljspeech-aligned
TTS_preprocess_output_duration_folder=datasets/outputs/Youtube/durations
TTS_preprocess_sampling_rate=44000
Access variables with bash:
echo "$TTS_preprocess_sampling_rate";
>>> 44000
If you know what tags you are interested in and the yaml structure you expect then it is not that hard to write a simple YAML parser in Bash.
In the following example the parser reads a structured YAML file into environment variables, an array and an associative array.
Note: The complexity of this parser is tied to the structure of the YAML file. You will need a separate subroutine for each structured component of the YAML file. Highly structured YAML files might require a more sophisticated approach, eg a generic recursive descent parser.
The xmas.yaml file:
# Xmas YAML example
---
# Values
pear-tree: partridge
turtle-doves: 2.718
french-hens: 3

# Array
calling-birds:
  - huey
  - dewey
  - louie
  - fred

# Structure
xmas-fifth-day:
  calling-birds: four
  french-hens: 3
  golden-rings: 5
  partridges:
    count: 1
    location: "a pear tree"
  turtle-doves: two
The parser uses mapfile to read the file into memory as an array then cycles through each tag and creates environment variables.
pear-tree:, turtle-doves: and french-hens: end up as simple environment variables
calling-birds: becomes an array
The xmas-fifth-day: structure is represented as an associative array however you could encode these as environment variables if you are not using Bash 4.0 or later.
Comments and white space are ignored.
#!/bin/bash
# -------------------------------------------------------------------
# A simple parser for the xmas.yaml file
# -------------------------------------------------------------------
#
# xmas.yaml tags
# # - Ignored
# - Blank lines are ignored
# --- - Initialiser for days-of-xmas
# pear-tree: partridge - a string
# turtle-doves: 2.718 - a string, no float type in Bash
# french-hens: 3 - a number
# calling-birds: - an array of strings
# - huey - calling-birds[0]
# - dewey
# - louie
# - fred
# xmas-fifth-day: - an associative array
# calling-birds: four - a string
# french-hens: 3 - a number
# golden-rings: 5 - a number
# partridges: - changes the key to partridges.xxx
# count: 1 - a number
# location: "a pear tree" - a string
# turtle-doves: two - a string
#
# This requires the following routines
# ParseXMAS
# parses #, ---, blank line
# unexpected tag error
# calls days-of-xmas
#
# days-of-xmas
# parses pear-tree, turtle-doves, french-hens
# calls calling-birds
# calls xmas-fifth-day
#
# calling-birds
# elements of the array
#
# xmas-fifth-day
# parses calling-birds, french-hens, golden-rings, turtle-doves
# calls partridges
#
# partridges
# parses partridges.count, partridges.location
#
function ParseXMAS()
{
# days-of-xmas
# parses pear-tree, turtle-doves, french-hens
# calls calling-birds
# calls xmas-fifth-day
#
function days-of-xmas()
{
unset PearTree TurtleDoves FrenchHens
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo " days-of-xmas[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "pear-tree:" ]
then
declare -g PearTree=$VALUE
elif [ "$TAG" = "turtle-doves:" ]
then
declare -g TurtleDoves=$VALUE
elif [ "$TAG" = "french-hens:" ]
then
declare -g FrenchHens=$VALUE
elif [ "$TAG" = "calling-birds:" ]
then
let CURRENT_ROW=$(($CURRENT_ROW + 1))
calling-birds
continue
elif [ "$TAG" = "xmas-fifth-day:" ]
then
let CURRENT_ROW=$(($CURRENT_ROW + 1))
xmas-fifth-day
continue
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
# time to bug out
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
# calling-birds
# elements of the array
function calling-birds()
{
unset CallingBirds
declare -ag CallingBirds
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo " calling-birds[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "-" ]
then
CallingBirds[${#CallingBirds[*]}]=$VALUE
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
# time to bug out
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
# xmas-fifth-day
# parses calling-birds, french-hens, golden-rings, turtle-doves
# calls fifth-day-partridges
#
function xmas-fifth-day()
{
unset XmasFifthDay
declare -Ag XmasFifthDay
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo " xmas-fifth-day[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "calling-birds:" ]
then
XmasFifthDay[CallingBirds]=$VALUE
elif [ "$TAG" = "french-hens:" ]
then
XmasFifthDay[FrenchHens]=$VALUE
elif [ "$TAG" = "golden-rings:" ]
then
XmasFifthDay[GOLDEN-RINGS]=$VALUE
elif [ "$TAG" = "turtle-doves:" ]
then
XmasFifthDay[TurtleDoves]=$VALUE
elif [ "$TAG" = "partridges:" ]
then
let CURRENT_ROW=$(($CURRENT_ROW + 1))
partridges
continue
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
# time to bug out
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
function partridges()
{
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo " partridges[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "count:" ]
then
XmasFifthDay[PARTRIDGES.COUNT]=$VALUE
elif [ "$TAG" = "location:" ]
then
XmasFifthDay[PARTRIDGES.LOCATION]=$VALUE
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
# time to bug out
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
# ===================================================================
# Load the configuration file
mapfile CONFIG < xmas.yaml
let ROWS=${#CONFIG[@]}
let CURRENT_ROW=0
# Parse the top-level tags: "---", comments ("#") and blank lines
while [ $CURRENT_ROW -lt $ROWS ]
do
LINE=( ${CONFIG[${CURRENT_ROW}]} )
TAG=${LINE[0]}
unset LINE[0]
VALUE="${LINE[*]}"
echo "[${CURRENT_ROW}] ${TAG}=${VALUE}"
if [ "$TAG" = "---" ]
then
let CURRENT_ROW=$(($CURRENT_ROW + 1))
days-of-xmas
continue
elif [ -z "$TAG" ] || [ "$TAG" = "#" ]
then
# Ignore comments and blank lines
true
else
echo "Unexpected tag at line $(($CURRENT_ROW + 1)): <${TAG}>={${VALUE}}"
break
fi
let CURRENT_ROW=$(($CURRENT_ROW + 1))
done
}
echo =========================================
ParseXMAS
echo =========================================
declare -p PearTree
declare -p TurtleDoves
declare -p FrenchHens
declare -p CallingBirds
declare -p XmasFifthDay
This produces the following output
=========================================
[0] #=Xmas YAML example
[1] ---=
days-of-xmas[2] #=Values
days-of-xmas[3] pear-tree:=partridge
days-of-xmas[4] turtle-doves:=2.718
days-of-xmas[5] french-hens:=3
days-of-xmas[6] =
days-of-xmas[7] #=Array
days-of-xmas[8] calling-birds:=
calling-birds[9] -=huey
calling-birds[10] -=dewey
calling-birds[11] -=louie
calling-birds[12] -=fred
calling-birds[13] =
calling-birds[14] #=Structure
calling-birds[15] xmas-fifth-day:=
days-of-xmas[15] xmas-fifth-day:=
xmas-fifth-day[16] calling-birds:=four
xmas-fifth-day[17] french-hens:=3
xmas-fifth-day[18] golden-rings:=5
xmas-fifth-day[19] partridges:=
partridges[20] count:=1
partridges[21] location:="a pear tree"
partridges[22] turtle-doves:=two
xmas-fifth-day[22] turtle-doves:=two
=========================================
declare -- PearTree="partridge"
declare -- TurtleDoves="2.718"
declare -- FrenchHens="3"
declare -a CallingBirds=([0]="huey" [1]="dewey" [2]="louie" [3]="fred")
declare -A XmasFifthDay=([CallingBirds]="four" [PARTRIDGES.LOCATION]="\"a pear tree\"" [FrenchHens]="3" [GOLDEN-RINGS]="5" [PARTRIDGES.COUNT]="1" [TurtleDoves]="two" )
You can also consider using Grunt (The JavaScript Task Runner). Can be easily integrated with shell. It supports reading YAML (grunt.file.readYAML) and JSON (grunt.file.readJSON) files.
This can be achieved by creating a task in Gruntfile.js (or Gruntfile.coffee), e.g.:
module.exports = function (grunt) {
  grunt.registerTask('foo', ['load_yml']);
  grunt.registerTask('load_yml', function () {
    var data = grunt.file.readYAML('foo.yml');
    Object.keys(data).forEach(function (g) {
      // ... switch (g) { case 'my_key':
    });
  });
};
then from shell just simply run grunt foo (check grunt --help for available tasks).
Further more you can implement exec:foo tasks (grunt-exec) with input variables passed from your task (foo: { cmd: 'echo bar <%= foo %>' }) in order to print the output in whatever format you want, then pipe it into another command.
There is also similar tool to Grunt, it's called gulp with additional plugin gulp-yaml.
Install via: npm install --save-dev gulp-yaml
Sample usage:
var yaml = require('gulp-yaml');

gulp.src('./src/*.yml')
  .pipe(yaml())
  .pipe(gulp.dest('./dist/'))

gulp.src('./src/*.yml')
  .pipe(yaml({ space: 2 }))
  .pipe(gulp.dest('./dist/'))

gulp.src('./src/*.yml')
  .pipe(yaml({ safe: true }))
  .pipe(gulp.dest('./dist/'))
For more options to deal with the YAML format, check the YAML site for available projects, libraries and other resources which can help you parse it.
Other tools:
Jshon
parses, reads and creates JSON
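A minimal jshon sketch, assuming it is installed (-e extracts a key, -u unquotes the result):

echo '{"name": "vdisk7"}' | jshon -e name -u    # -> vdisk7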
I know my answer is specific, but if one already has PHP and Symfony installed, it can be very handy to use Symfony's YAML parser.
For instance:
php -r "require '$SYMFONY_ROOT_PATH/vendor/autoload.php'; \
var_dump(\Symfony\Component\Yaml\Yaml::parse(file_get_contents('$YAML_FILE_PATH')));"
Here I simply used var_dump to output the parsed array but of course you can do much more... :)
