How to split a string on a multi-character delimiter in bash? - bash

Why doesn't the following bash code work?
for i in $( echo "emmbbmmaaddsb" | split -t "mm" )
do
echo "$i"
done
expected output:
e
bb
aaddsb

The recommended tool for character substitution is sed's s/regexp/replacement/ command, for one regexp occurrence, or the global s/regexp/replacement/g; you do not even need a loop or variables.
Pipe your echo output to sed and substitute the characters mm with the newline character \n:
echo "emmbbmmaaddsb" | sed 's/mm/\n/g'
The output is:
e
bb
aaddsb

Since you're expecting newlines, you can simply replace all instances of mm in your string with a newline. In pure native bash:
in='emmbbmmaaddsb'
sep='mm'
printf '%s\n' "${in//$sep/$'\n'}"
If you wanted to do such a replacement on a longer input stream, you might be better off using awk, as bash's built-in string manipulation doesn't scale well to more than a few kilobytes of content. The gsub_literal shell function (backending into awk) given in BashFAQ #21 is applicable:
# Taken from http://mywiki.wooledge.org/BashFAQ/021
# usage: gsub_literal STR REP
# replaces all instances of STR with REP. reads from stdin and writes to stdout.
gsub_literal() {
# STR cannot be empty
[[ $1 ]] || return
# string manip needed to escape '\'s, so awk doesn't expand '\n' and such
awk -v str="${1//\\/\\\\}" -v rep="${2//\\/\\\\}" '
# get the length of the search string
BEGIN {
len = length(str);
}
{
# empty the output string
out = "";
# continue looping while the search string is in the line
while (i = index($0, str)) {
# append everything up to the search string, and the replacement string
out = out substr($0, 1, i-1) rep;
# remove everything up to and including the first instance of the
# search string from the line
$0 = substr($0, i + len);
}
# append whatever is left
out = out $0;
print out;
}
'
}
...used, in this context, as:
gsub_literal "mm" $'\n' <your-input-file.txt >your-output-file.txt

A more general approach, which splits on the multi-character delimiter without first replacing it with a single-character one, is given below.
Using parameter expansions (from the comment of @gniourf_gniourf):
#!/bin/bash
str="LearnABCtoABCSplitABCaABCString"
delimiter=ABC
s=$str$delimiter
array=();
while [[ $s ]]; do
array+=( "${s%%"$delimiter"*}" );
s=${s#*"$delimiter"};
done;
declare -p array
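On a recent bash, the declare -p line should print something like:
declare -a array=([0]="Learn" [1]="to" [2]="Split" [3]="a" [4]="String")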
A cruder way:
#!/bin/bash
# main string
str="LearnABCtoABCSplitABCaABCString"
# delimiter string
delimiter="ABC"
#length of main string
strLen=${#str}
#length of delimiter string
dLen=${#delimiter}
#iterator for length of string
i=0
#length tracker for ongoing substring
wordLen=0
#starting position for ongoing substring
strP=0
array=()
while [ $i -lt $strLen ]; do
if [ "$delimiter" == "${str:$i:$dLen}" ]; then
array+=( "${str:strP:$wordLen}" )
strP=$(( i + dLen ))
wordLen=0
i=$(( i + dLen ))
fi
i=$(( i + 1 ))
wordLen=$(( wordLen + 1 ))
done
array+=( "${str:strP:$wordLen}" )
declare -p array
Reference - Bash Tutorial - Bash Split String

With awk you can use gsub() to replace all regex matches.
As in your question, to replace all substrings of two or more 'm' chars with a newline, run:
echo "emmbbmmaaddsb" | awk '{ gsub(/mm+/, "\n" ); print; }'
e
bb
aaddsb
The ‘g’ in gsub() stands for “global,” which means replace everywhere.
You may also print just the Nth field after the substitution, for example:
echo "emmbbmmaaddsb" | awk '{ gsub(/mm+/, " " ); print $2; }'
bb
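If you only need the Nth piece, awk's split() also accepts a regex separator, so the substitution step can be skipped entirely; a minimal sketch:
echo "emmbbmmaaddsb" | awk -v n=3 '{ split($0, parts, /mm+/); print parts[n] }'
aaddsb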

Related

Convert a key:value file w/ comments into JSON document with UNIX tools

I have a file in a subset of YAML with data such as the below:
# This is a comment
# This is another comment
spark:spark.ui.enabled: 'false'
spark:spark.sql.adaptive.enabled: 'true'
yarn:yarn.nodemanager.log.retain-seconds: '259200'
I need to convert that into a JSON document looking like this (note that strings containing booleans and integers still remain strings):
{
"spark:spark.ui.enabled": "false",
"spark:spark.sql.adaptive.enabled": "true",
"yarn:yarn.nodemanager.log.retain-seconds", "259200"
}
The closest I got was this:
cat << EOF > ./file.yaml
> # This is a comment
> # This is another comment
>
>
> spark:spark.ui.enabled: 'false'
> spark:spark.sql.adaptive.enabled: 'true'
> yarn:yarn.nodemanager.log.retain-seconds: '259200'
> EOF
echo {$(cat file.yaml | grep -o '^[^#]*' | sed '/^$/d' | awk -F": " '{sub($1, "\"&\""); print}' | paste -sd "," - )}
which apart from looking rather gnarly doesn't give the correct answer, it returns:
{"spark:spark.ui.enabled": 'false',"spark:spark.sql.adaptive.enabled": 'true',"dataproc:dataproc.monitoring.stackdriver.enable": 'true',"spark:spark.submit.deployMode": 'cluster'}
which, if I pipe to jq causes a parse error.
I'm hoping I'm missing a much much easier way of doing this but I can't figure it out. Can anyone help?
Implemented in pure jq (tested with version 1.6):
#!/usr/bin/env bash
jq_script=$(cat <<'EOF'
def content_for_line:
"^[[:space:]]*([#]|$)" as $ignore_re | # regex for comments, blank lines
"^(?<key>.*): (?<value>.*)$" as $content_re | # regex for actual k/v pairs
"^'(?<value>.*)'$" as $quoted_re | # regex for values in single quotes
if test($ignore_re) then {} else # empty lines add nothing to the data
if test($content_re) then ( # non-empty: match against $content_re
capture($content_re) as $content | # ...and put the groups into $content
$content.key as $key | # string before ": " becomes $key
(if ($content.value | test($quoted_re)) then # if value contains literal quotes...
($content.value | capture($quoted_re)).value # ...take string from inside quotes
else
$content.value # no quotes to strip
end) as $value | # result of the above block becomes $value
{"\($key)": "\($value)"} # and return a map from one key to one value
) else
# we get here if a line didn't match $ignore_re *or* $content_re
error("Line \(.) is not recognized as a comment, empty, or valid content")
end
end;
# iterate over our input lines, passing each one to content_for_line and merging the result
# into the object we're building, which we eventually return as our result.
reduce inputs as $item ({}; . + ($item | content_for_line))
EOF
)
# jq -R: read input as raw strings
# jq -n: don't read from stdin until requested with "input" or "inputs"
jq -Rn "$jq_script" <file.yaml >file.json
Unlike syntax-unaware tools, this can never generate output that isn't valid JSON; and it can easily be extended with application-specific logic (for example, to emit some values but not others as numeric literals rather than string literals) by adding an additional filter stage to inspect and modify the output of content_for_line.
Here's a no-frills but simple solution:
def tidy: sub("^ *'?";"") | sub(" *'?$";"");
def kv: split(":") | [ (.[:-1] | join(":")), (.[-1]|tidy)];
reduce (inputs| select( test("^ *#|^ *$")|not) | kv) as $row ({};
.[$row[0]] = $row[1] )
Invocation
jq -n -R -f tojson.jq input.txt
You can do it all in awk using gsub and sprintf, for example:
(edit to add "," separating json records)
awk 'BEGIN {ol=0; print "{" }
/^[^#]/ {
if (ol) print ","
gsub ("\047", "\042")
$1 = sprintf (" \"%s\":", substr ($1, 1, length ($1) - 1))
printf "%s %s", $1, $2
ol++
}
END { print "\n}" }' file.yaml
(note: though jq is the proper tool for json formatting)
Explanation
awk 'BEGIN { ol=0; print "{" } call awk setting the output line variable ol=0 for "," output control and printing the header "{",
/^[^#]/ { only match non-comment lines,
if (ol) print "," if the output line ol is greater than zero, output a trailing ","
gsub ("\047", "\042") replace all single-quotes with double-quotes,
$1 = sprintf (" \"%s\":", substr ($1, 1, length ($1) - 1)) add 2 leading spaces and double-quotes around the first field (except for the last character) and then append a ':' at the end.
printf "%s %s", $1, $2 output the reformatted fields without a trailing newline,
ol++ increment the output line count, and
END { print "\n}" }' close by printing a final newline and the "}" footer
Example Use/Output
Just select/paste the awk command above (changing the filename as needed)
$ awk 'BEGIN {ol=0; print "{" }
> /^[^#]/ {
> if (ol) print ","
> gsub ("\047", "\042")
> $1 = sprintf (" \"%s\":", substr ($1, 1, length ($1) - 1))
> printf "%s %s", $1, $2
> ol++
> }
> END { print "\n}" }' file.yaml
{
"spark:spark.ui.enabled": "false",
"spark:spark.sql.adaptive.enabled": "true"
}

sed Capital_Case not working

I'm trying to convert a string that contains either - (hyphen) or _ (underscore) into a Capital_Case string.
#!/usr/bin/env sh
function cap_case() {
[ $# -eq 1 ] || return 1;
_str=$1;
_capitalize=${_str//[-_]/_} | sed -E 's/(^|_)([a-zA-Z])/\u\2/g'
echo "Capitalize:"
echo $_capitalize
return 0
}
read string
echo $(cap_case $string)
But I don't get anything out.
First I replace any occurrence of - or _ with _ (${_str//[-_]/_}), then I pipe that string to sed, which captures the start of the string or _ as the first group and the letter after it as the second group, and I want to uppercase that letter with \u\2. I tried \U\2 as well, but that didn't work either.
I want the string some_string to become
Some_String
And string some-string to become
Some_String
I'm on a mac, using zsh if that is helpful.
EDIT: More generic solution here to make each field's first letter Capital.
echo "some_string_other" | awk -F"_" '{for(i=1;i<=NF;i++){$i=toupper(substr($i,1,1)) substr($i,2)}} 1' OFS="_"
The following awk may help you.
echo "some_string" | awk -F"_" '{$1=toupper(substr($1,1,1)) substr($1,2);$2=toupper(substr($2,1,1)) substr($2,2)} 1' OFS="_"
Output will be as follows.
echo "some_string" | awk -F"_" '{$1=toupper(substr($1,1,1)) substr($1,2);$2=toupper(substr($2,1,1)) substr($2,2)} 1' OFS="_"
Some_String
This being zsh, you don't need sed (or even a function, really):
$ s=some-string-bar
$ print ${(C)s:gs/-/_}
Some_String_Bar
The (C) flag capitalizes words (where "words" are defined as sequences of alphanumeric characters separated by other characters); :gs/-/_ replaces hyphens with underscores.
If you really want a function, it's cap_case () { print ${(C)1:gs/-/_} }.
pure bash:
#!/bin/bash
camel_case(){
local d display string
declare -a strings # = scope local
[ "$2" ] && d="$2" || d=" " # optional output delimiter
ifs_ini="$IFS"
IFS+='_-' # we keep initial IFS
strings=( "$1" ) # array
for string in ${strings[@]} ; do # unquoted: word-splitting on IFS is intended
display+="${string^}$d"
done
echo "${display%$d}"
IFS="$ifs_ini"
}
camel_case "some-string_here" "_"
camel_case "some-string_here some strings here" "+"
camel_case "some-string_here some strings here"
echo "$BASH_VERSION"
exit
output:
Some_String_Here
Some+String+Here+Some+Strings+Here
Some String Here Some Strings Here
4.4.18(1)-release
You can try this with GNU sed:
echo 'some_other-string' | sed -E 's/(^.)/\u&/;s/[_-](.)/_\u\1/g'
Explanation:
s/(^.)/\u&/
(^.) matches the first char, and \u& uppercases the match.
s/[_-](.)/_\u\1/g
[_-](.) captures a char preceded by _ or - and replaces the pair with _ followed by the captured char in uppercase.
The g at the end tells sed to repeat the replacement for every match.
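For reference, on GNU sed the command above should print:
Some_Other_String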
You didn't assign to _capitalize - you set a _capitalize environment variable for the empty command that you piped into sed.
You probably meant
_capitalize=$(<<<"${_str//[-_]/_}" sed -E 's/(^|_)([a-zA-Z])/\1\u\2/g')
Note also that ${//} isn't standard shell, so you really ought to specify an interpreter other than sh.
A simpler approach:
#!/bin/sh
cap_case() {
printf "Capitalize: "
echo "$*" | sed -e 'y/-/_/' -e 's/\(^\|_\)[[:alpha:]]/\U&/g'
}
echo $(cap_case "snake_case")
Note that the \u / \U replacement is a GNU extension to sed - if you're using a non-GNU implementation, check whether it supports this feature.
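If your sed lacks \u, a rough equivalent using only POSIX awk functions (a sketch) would be:
echo 'some_other-string' | awk '{
  n = split($0, parts, /[-_]/)   # split on hyphen or underscore
  out = ""
  for (i = 1; i <= n; i++)       # rejoin with "_" and an uppercased first letter
    out = out (i > 1 ? "_" : "") toupper(substr(parts[i], 1, 1)) substr(parts[i], 2)
  print out
}'
Some_Other_String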

Looping through multiline CSV rows in bash

I have the following csv file with 3 columns:
row1value1,row1value2,"row1
multi
line
value"
row2value1,row2value2,"row2
multi
line
value"
Is there a way to loop through its rows like (this does not work, it reads lines):
while read $ROW
do
#some code that uses $ROW variable
done < file.csv
Using gnu-awk you can do this with a custom RS and FPAT:
awk -v RS='"\n' -v FPAT='"[^"]*"|[^,]*' '{
print "Record #", NR, " =======>"
for (i=1; i<=NF; i++) {
sub(/^"/, "", $i)
printf "Field # %d, value=[%s]\n", i, $i
}
}' file.csv
Record # 1 =======>
Field # 1, value=[row1value1]
Field # 2, value=[row1value2]
Field # 3, value=[row1
multi
line
value]
Record # 2 =======>
Field # 1, value=[row2value1]
Field # 2, value=[row2value2]
Field # 3, value=[row2
multi
line
value]
However, as I commented above, a dedicated CSV parser in PHP, Perl or Python will be more robust for this job.
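For instance, assuming python3 is available, its standard csv module handles quoted multiline fields out of the box (a sketch, not a drop-in replacement for the awk output format):
python3 -c '
import csv, sys
for row in csv.reader(sys.stdin):
    print(row)   # each row is a list; quoted multiline fields arrive intact
' < file.csv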
Here is a pure bash solution. The multiline_csv.sh script translates the multiline csv into standard csv by replacing the newline characters between quotes with some replacement string. So the usage is
./multiline_csv.sh CSVFILE SEP
I placed your example script in a file called ./multi.csv. Running the command ./multiline_csv.sh ./multi.csv "\n" yielded the following output
[ericthewry@eric-arch-pc stackoverflow]$ ./multiline_csv.sh ./multi.csv "\n"
r1c2,r1c2,"row1\nmulti\nline\nvalue"
r2c1,r2c2,"row2\nmultiline\nvalue"
This can be easily translated back to the original csv file using printf:
[ericthewry@eric-arch-pc stackoverflow]$ printf "$(./multiline_csv.sh ./multi.csv "\n")\n"
r1c2,r1c2,"row1
multi
line
value"
r2c1,r2c2,"row2
multiline
value"
This might be an Arch-specific quirk of echo/printf (I'm not sure), but you could use some other separator string like ~~~++??//NEWLINE\\??++~~~ that you could sed out if need be.
# multiline_csv.sh
open=0
line_is_open(){
quote="$2"
(printf "$1" | sed -e "s/\(.\)/\1\n/g") | (while read char; do
if [[ "$char" = '"' ]]; then
open=$((($open + 1) % 2))
fi
done && echo $open)
}
cat "$1" | while read ln ; do
flatline="${ln}"
open=$(line_is_open "${ln}" $open)
until [[ "$open" = "0" ]]; do
if read newln
then
flatline="${flatline}$2${newln}"
open=$(line_is_open "${newln}" $open)
else
break
fi
done
echo "${flatline}"
done
Once you've done this translation, you can proceed as you would normally via the while read ROW; do ... done method.
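For example (a sketch, assuming the script above is saved as ./multiline_csv.sh and is executable):
./multiline_csv.sh file.csv "\n" | while IFS= read -r ROW; do
    # each $ROW is now one logical CSV record on a single line
    printf '%s\n' "$ROW"
done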

Parse out key=value pairs into variables

I have a bunch of different kinds of files I need to look at periodically, and what they have in common is that the lines have a bunch of key=value type strings. So something like:
Version=2 Len=17 Hello Var=Howdy Other
I would like to be able to reference the names directly from awk... so something like:
cat some_file | ... | awk '{print Var, $5}' # prints Howdy Other
How can I go about doing that?
The closest you can get is to parse the variables into an associative array at the start of processing every line. That is to say,
awk '{ delete vars; for(i = 1; i <= NF; ++i) { n = index($i, "="); if(n) { vars[substr($i, 1, n - 1)] = substr($i, n + 1) } } Var = vars["Var"] } { print Var, $5 }'
More readably:
{
delete vars; # clean up previous variable values
for(i = 1; i <= NF; ++i) { # walk through fields
n = index($i, "="); # search for =
if(n) { # if there is one:
# remember value by name. The reason I use
# substr over split is the possibility of
# something like Var=foo=bar=baz (that will
# be parsed into a variable Var with the
# value "foo=bar=baz" this way).
vars[substr($i, 1, n - 1)] = substr($i, n + 1)
}
}
# if you know precisely what variable names you expect to get, you can
# assign to them here:
Var = vars["Var"]
Version = vars["Version"]
Len = vars["Len"]
}
{
print Var, $5 # then use them in the rest of the code
}
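If you save the readable version above to a file (say parse.awk, a name chosen here just for illustration), the usage mirrors the one-liner:
echo 'Version=2 Len=17 Hello Var=Howdy Other' | awk -f parse.awk
Howdy Other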
$ cat file | sed -r 's/[[:alnum:]]+=/\n&/g' | awk -F= '$1=="Var"{print $2}'
Howdy Other
Or, avoiding the useless use of cat:
$ sed -r 's/[[:alnum:]]+=/\n&/g' file | awk -F= '$1=="Var"{print $2}'
Howdy Other
How it works
sed -r 's/[[:alnum:]]+=/\n&/g'
This places each key,value pair on its own line.
awk -F= '$1=="Var"{print $2}'
This reads the key-value pairs. Since the field separator is chosen to be =, the key ends up as field 1 and the value as field 2. Thus, we just look for lines whose first field is Var and print the corresponding value.
Since discussion in commentary has made it clear that a pure-bash solution would also be acceptable:
#!/bin/bash
case $BASH_VERSION in
''|[0-3].*) echo "ERROR: Bash 4.0 required" >&2; exit 1;;
esac
while read -r -a words; do # iterate over lines of input
declare -A vars=( ) # refresh variables for each line
set -- "${words[#]}" # update positional parameters
for word; do
if [[ $word = *"="* ]]; then # if a word contains an "="...
vars[${word%%=*}]=${word#*=} # ...then set it as an associative-array key
fi
done
echo "${vars[Var]} $5" # Here, we use content read from that line.
done <<<"Version=2 Len=17 Hello Var=Howdy Other"
The <<<"Input Here" could also be <file.txt, in which case lines in the file would be iterated over.
If you wanted to use $Var instead of ${vars[Var]}, then substitute printf -v "${word%%=*}" %s "${word#*=}" in place of vars[${word%%=*}]=${word#*=}, and remove references to vars elsewhere. Note that this doesn't allow for a good way to clean up variables between lines of input, as the associative-array approach does.
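That variant might look roughly like this (a sketch; as noted, stale values from earlier lines are not cleaned up):
while read -r -a words; do
  for word in "${words[@]}"; do
    [[ $word = *"="* ]] && printf -v "${word%%=*}" %s "${word#*=}"
  done
  echo "$Var"   # prints Howdy
done <<<"Version=2 Len=17 Hello Var=Howdy Other"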
I will try to explain a very generic way of doing this, which you can adapt easily if you want to print out other stuff.
Assume you have a string which has a format like this:
key1=value1 key2=value2 key3=value3
or more generic
key1_fs2_value1_fs1_key2_fs2_value2_fs1_key3_fs2_value3
With fs1 and fs2 two different field separators.
You would like to make a selection or some operations with these values. To do this, the easiest is to store these in an associative array:
array["key1"] => value1
array["key2"] => value2
array["key3"] => value3
array["key1","full"] => "key1=value1"
array["key2","full"] => "key2=value2"
array["key3","full"] => "key3=value3"
This can be done with the following function in awk:
function str2map(str,fs1,fs2,map, n,tmp) {
n=split(str,map,fs1)
for (;n>0;n--) {
split(map[n],tmp,fs2);
map[tmp[1]]=tmp[2]; map[tmp[1],"full"]=map[n]
delete map[n]
}
}
So, after processing the string, you have the full flexibility to do operations in any way you like:
awk '
function str2map(str,fs1,fs2,map, n,tmp) {
n=split(str,map,fs1)
for (;n>0;n--) {
split(map[n],tmp,fs2);
map[tmp[1]]=tmp[2]; map[tmp[1],"full"]=map[n]
delete map[n]
}
}
{ str2map($0," ","=",map) }
{ print map["Var","full"] }
' file
The advantage of this method is that you can easily adapt your code to print any other key you are interested in, or even make selections based on this, example:
(map["Version"] < 3) { print map["var"]/map["Len"] }
The simplest and easiest way is to use string substitution like this:
property='my.password.is=1234567890=='
name=${property%%=*}
value=${property#*=}
echo "'$name' : '$value'"
The output is:
'my.password.is' : '1234567890=='
Using bash's set command, we can split the line into positional parameters like awk.
For each word, we'll try to read a name value pair delimited by =.
When we find a value, assign it to the variable named $key using bash's printf -v feature.
#!/usr/bin/env bash
line='Version=2 Len=17 Hello Var=Howdy Other'
set $line
for word in "$#"; do
IFS='=' read -r key val <<< "$word"
test -n "$val" && printf -v "$key" "$val"
done
echo "$Var $5"
output
Howdy Other
SYNOPSIS
an awk-based solution that doesn't require manually checking the fields to locate the desired key pair:
- the approach avoids splitting unnecessary fields or arrays - a regex match via function call is only performed when needed
- only the FIRST occurrence of the input key's value is returned; subsequent matches along the row are NOT returned
- I just called it S() since it's the closest letter to $
- I only included an array (_) of the 3 test values for demo purposes. Those aren't needed; in fact, no state information is kept at all
- caveat: the key match must be exact - this version of the code isn't for case-insensitive or fuzzy/agile matching
Tested and confirmed working on
- gawk 5.1.1
- mawk 1.3.4
- mawk-2/1.9.9.6
- macos nawk
CODE
# gawk profile, created Fri May 27 02:07:53 2022
{m,n,g}awk '
function S(__,_) {
return \
! match($(_=_<_), "(^|["(_="[:blank:]]")")"(__)"[=][^"(_)"*") \
? "^$" \
: substr(__=substr($-_, RSTART, RLENGTH), index(__,"=")+_^!_)
}
BEGIN { OFS = "\f" # This array is only for testing
_["Version"] _["Len"] _["Var"] # purposes. Feel free to discard at will
} {
for (__ in _) {
print __, S(__) } }'
OUTPUT
Var
Howdy
Len
17
Version
2
So either call the fields in the usual fashion ($5, $0, $NF, etc.), or call S(QUOTED_KEY_VALUE), case-sensitive, like
S("Version") to get back 2.
As a safeguard, to prevent mis-interpreting null strings or invalid inputs as $0, a non-match returns ^$ instead of an empty string.
As a bonus, it can safely handle multibyte unicode, both for values and even for keys, regardless of whether your awk is UTF-8-aware or not:
1 ✜
🤡
2 Version
2
3 Var
Howdy
4 Len
17
5 ✜=🤡 Version=2 Len=17 Hello Var=Howdy Other
I know this question is specifically about awk, but I'm mentioning this since many people come here for solutions to break down name=value pairs (with or without awk as such).
I found the way below simple and straightforward, and it handles multiple spaces/commas well too.
Source: http://jayconrod.com/posts/35/parsing-keyvalue-pairs-in-bash
change="foo=red bar=green baz=blue"
#use below if var is in CSV (instead of space as delim)
change=`echo $change | tr ',' ' '`
for change in $changes; do
set -- `echo $change | tr '=' ' '`
echo "variable name == $1 and variable value == $2"
#can assign value to a variable like below
eval my_var_$1=$2;
done
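If you'd rather avoid eval (which will execute anything in the value that parses as shell code), bash's printf -v can make the same assignment more safely; a minimal sketch:
changes="foo=red bar=green baz=blue"
for change in $changes; do
    # the key must form a valid shell identifier for printf -v to accept it
    printf -v "my_var_${change%%=*}" '%s' "${change#*=}"
done
echo "$my_var_foo"   # red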
