How to extract patterns from a file and fill a bash array with them? - bash

My intent is to write a shell script that extracts a pattern, using regular expressions, from a file and fills an array with all the occurrences of the pattern so that I can loop over them.
What is the best way to achieve this?
I am trying to do it using sed, and a problem I am facing is that the matches can contain newlines, and these newlines must be preserved, e.g.:
File content:
"My name
is XXX"
"My name is YYY"
"Today
is
the "
When I extract all patterns between double quotes, including the double quotes, the output of the first occurrence must be:
"My name
is XXX"

fill an array with all the occurrences of the pattern
First convert your file to use a meaningful delimiter, e.g. a null byte, using GNU sed with the -z switch:
sed -z 's/"\([^"]*\)"[^"]*/\1\x00/g'
I've added the [^"]* at the end, so that characters not between " are removed.
After that, it becomes much easier to parse.
You can get the first element with:
head -z -n1
Or sort and count the occurrences:
sort -z | uniq -z -c
Or load it into an array with bash's mapfile:
mapfile -d '' -t arr < <(<input sed -z 's/"\([^"]*\)"[^"]*/\1\x00/g')
Alternatively you can use e.g. $'\01' as the separator; as long as it's unique, such data stays simple to parse in bash.
Handling such streams is a bit hard in bash. You can't store a value with an embedded null byte in a shell variable, and you should also expect occasional warnings on command substitutions. Usually when handling data with arbitrary bytes, I convert it to plain ASCII with xxd -p and back with xxd -r -p. With that, it becomes easier.
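For illustration, a minimal sketch of that xxd round trip (the sample bytes are just an assumption for the demo):
# encode bytes (including an embedded NUL) as a plain-ASCII hex string that is safe to keep in a variable
hex=$(printf 'a\0b' | xxd -p)
# decode the hex string back to the original bytes and show them
printf '%s' "$hex" | xxd -r -p | od -c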
The following script:
cat <<'EOF' >input
"My name
is XXX"
"My name is YYY"
"Today
is
the "
EOF
sed -z 's/"\([^"]*\)"[^"]*/\1\x00/g' input > input_parsed
echo "##First element is:"
printf '"'
<input_parsed head -z -n1
printf '"\n'
echo "##Elemets count are:"
<input_parsed sort -z | uniq -z -c
echo
echo "##The array is:"
mapfile -d '' -t arr <input_parsed
declare -p arr
will output (the formatting is a bit off because of the non-newline-delimited output from uniq):
##First element is:
"My name
is XXX"
##Element counts are:
1 My name
is XXX 1 My name is YYY 1 Today
is
the
##The array is:
declare -a arr=([0]=$'My name\nis XXX' [1]="My name is YYY" [2]=$'Today\nis\nthe ')
Tested on repl.it.

This may be what you're looking for, depending on the answers to the questions I posted in a comment:
$ readarray -d '' -t arr < <(grep -zo '"[^"]*"' file)
$ printf '%s\n' "${arr[0]}"
"My name
is XXX"
$ declare -p arr
declare -a arr=([0]=$'"My name \nis XXX"' [1]="\"My name is YYY\"" [2]=$'"Today\nis\nthe "')
It uses GNU grep for -z.

Sed can extract your desired pattern with or without newlines.
But if you want to store the multiple results into a bash array,
it may be easier to make use of bash regex.
Then please try the following:
lines=$(< "file") # slurp all lines
re='"[^"]+"' # regex to match substring between double quotes
while [[ $lines =~ ($re)(.*) ]]; do
array+=("${BASH_REMATCH[1]}") # push the matched pattern to the array
lines=${BASH_REMATCH[2]} # update $lines with the remaining part
done
# report the result
for (( i=0; i<${#array[@]}; i++ )); do
echo "$i: ${array[$i]}"
done
Output:
0: "My name
is XXX"
1: "My name is YYY"
2: "Today
is
the "

Related

How do I select each information in one line with delimiters [duplicate]

I have this string stored in a variable:
IN="bla#some.com;john#home.com"
Now I would like to split the strings by ; delimiter so that I have:
ADDR1="bla#some.com"
ADDR2="john#home.com"
I don't necessarily need the ADDR1 and ADDR2 variables. If they are elements of an array that's even better.
After suggestions from the answers below, I ended up with the following which is what I was after:
#!/usr/bin/env bash
IN="bla#some.com;john#home.com"
mails=$(echo $IN | tr ";" "\n")
for addr in $mails
do
echo "> [$addr]"
done
Output:
> [bla#some.com]
> [john#home.com]
There was a solution involving setting the internal field separator (IFS) to ;. I am not sure what happened to that answer; how do you reset IFS back to the default?
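(For reference, a minimal sketch of getting back to the default behaviour: either unset IFS, or assign the default value explicitly.)
unset IFS        # word splitting falls back to the default behaviour
# or set the default value explicitly:
IFS=$' \t\n'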
RE: IFS solution, I tried this and it works, I keep the old IFS and then restore it:
IN="bla#some.com;john#home.com"
OIFS=$IFS
IFS=';'
mails2=$IN
for x in $mails2
do
echo "> [$x]"
done
IFS=$OIFS
BTW, when I tried
mails2=($IN)
I only got the first string when printing it in the loop; without brackets around $IN it works.
You can set the internal field separator (IFS) variable and then let it parse the string into an array. When this happens in a command, the assignment to IFS only applies to that single command's environment (here, read). It then parses the input into an array according to the IFS value, which we can then iterate over.
This example will parse one line of items separated by ;, pushing it into an array:
IFS=';' read -ra ADDR <<< "$IN"
for i in "${ADDR[#]}"; do
# process "$i"
done
This other example is for processing the whole content of $IN, splitting each line of input on ;:
while IFS=';' read -ra ADDR; do
for i in "${ADDR[#]}"; do
# process "$i"
done
done <<< "$IN"
Taken from Bash shell script split array:
IN="bla#some.com;john#home.com"
arrIN=(${IN//;/ })
echo ${arrIN[1]} # Output: john#home.com
Explanation:
This construction replaces all occurrences of ';' (the initial // means global replace) in the string IN with ' ' (a single space), then interprets the space-delimited string as an array (that's what the surrounding parentheses do).
The syntax used inside of the curly braces to replace each ';' character with a ' ' character is called Parameter Expansion.
There are some common gotchas:
If the original string has spaces, you will need to use IFS:
IFS=':'; arrIN=($IN); unset IFS;
If the original string has spaces and the delimiter is a new line, you can set IFS with:
IFS=$'\n'; arrIN=($IN); unset IFS;
I've seen a couple of answers referencing the cut command, but they've all been deleted. It's a little odd that nobody has elaborated on that, because I think it's one of the more useful commands for doing this type of thing, especially for parsing delimited log files.
In the case of splitting this specific example into a bash script array, tr is probably more efficient, but cut can be used, and is more effective if you want to pull specific fields from the middle.
Example:
$ echo "bla#some.com;john#home.com" | cut -d ";" -f 1
bla#some.com
$ echo "bla#some.com;john#home.com" | cut -d ";" -f 2
john#home.com
You can obviously put that into a loop, and iterate the -f parameter to pull each field independently.
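For example, a sketch of such a loop over the sample string (it stops at the first empty field, which is the usual caveat with this approach; a similar loop appears again in a later answer below):
IN="bla#some.com;john#home.com"
i=1
while field=$(echo "$IN" | cut -d ";" -f "$i"); [ -n "$field" ]; do
echo "field $i: $field"
i=$((i+1))
done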
This gets more useful when you have a delimited log file with rows like this:
2015-04-27|12345|some action|an attribute|meta data
cut is very handy to be able to cat this file and select a particular field for further processing.
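For instance, pulling the third field out of such a row (a minimal sketch):
echo "2015-04-27|12345|some action|an attribute|meta data" | cut -d "|" -f 3
# some action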
If you don't mind processing them immediately, I like to do this:
for i in $(echo $IN | tr ";" "\n")
do
# process
done
You could use this kind of loop to initialize an array, but there's probably an easier way to do it.
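One such easier way, as a sketch, is to let readarray collect the tr output directly instead of appending inside the loop:
IN="bla#some.com;john#home.com"
readarray -t mails < <(echo "$IN" | tr ";" "\n")   # requires bash 4+ for readarray
declare -p mails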
Compatible answer
There are a lot of different ways to do this in bash.
However, it's important to first note that bash has many special features (so-called bashisms) that won't work in any other shell.
In particular, arrays, associative arrays, and pattern substitution, which are used in the solutions in this post as well as others in the thread, are bashisms and may not work under other shells that many people use.
For instance: on my Debian GNU/Linux, there is a standard shell called dash; I know many people who like to use another shell called ksh; and there is also a special tool called busybox with its own shell interpreter (ash).
For a POSIX-shell-compatible answer, go to the last part of this answer!
Requested string
The string to be split in the above question is:
IN="bla#some.com;john#home.com"
I will use a modified version of this string to ensure that my solution is robust to strings containing whitespace, which could break other solutions:
IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
Split string based on delimiter in bash (version >=4.2)
In pure bash, we can create an array with elements split by a temporary value for IFS (the input field separator). The IFS, among other things, tells bash which character(s) it should treat as a delimiter between elements when defining an array:
IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
# save original IFS value so we can restore it later
oIFS="$IFS"
IFS=";"
declare -a fields=($IN)
IFS="$oIFS"
unset oIFS
In newer versions of bash, prefixing a command with an IFS definition changes the IFS for that command only and resets it to the previous value immediately afterwards. This means we can do the above in just one line:
IFS=\; read -a fields <<<"$IN"
# after this command, the IFS resets back to its previous value (here, the default):
set | grep ^IFS=
# IFS=$' \t\n'
We can see that the string IN has been stored into an array named fields, split on the semicolons:
set | grep ^fields=\\\|^IN=
# fields=([0]="bla#some.com" [1]="john#home.com" [2]="Full Name <fulnam#other.org>")
# IN='bla#some.com;john#home.com;Full Name <fulnam#other.org>'
(We can also display the contents of these variables using declare -p:)
declare -p IN fields
# declare -- IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
# declare -a fields=([0]="bla#some.com" [1]="john#home.com" [2]="Full Name <fulnam#other.org>")
Note that read is the quickest way to do the split because there are no forks or external resources called.
Once the array is defined, you can use a simple loop to process each field (or, rather, each element in the array you've now defined):
# `"${fields[#]}"` expands to return every element of `fields` array as a separate argument
for x in "${fields[#]}" ;do
echo "> [$x]"
done
# > [bla#some.com]
# > [john#home.com]
# > [Full Name <fulnam#other.org>]
Or you could drop each field from the array after processing using a shifting approach, which I like:
while [ "$fields" ] ;do
echo "> [$fields]"
# slice the array
fields=("${fields[#]:1}")
done
# > [bla#some.com]
# > [john#home.com]
# > [Full Name <fulnam#other.org>]
And if you just want a simple printout of the array, you don't even need to loop over it:
printf "> [%s]\n" "${fields[#]}"
# > [bla#some.com]
# > [john#home.com]
# > [Full Name <fulnam#other.org>]
Update: recent bash >= 4.4
In newer versions of bash, you can also play with the command mapfile:
mapfile -td \; fields < <(printf "%s\0" "$IN")
This syntax preserves special chars, newlines and empty fields!
If you don't want to include empty fields, you could do the following:
mapfile -td \; fields <<<"$IN"
fields=("${fields[#]%$'\n'}") # drop '\n' added by '<<<'
With mapfile, you can also skip declaring an array and implicitly "loop" over the delimited elements, calling a function on each:
myPubliMail() {
printf "Seq: %6d: Sending mail to '%s'..." $1 "$2"
# mail -s "This is not a spam..." "$2" </path/to/body
printf "\e[3D, done.\n"
}
mapfile < <(printf "%s\0" "$IN") -td \; -c 1 -C myPubliMail
(Note: the \0 at the end of the format string is unnecessary if you don't care about empty fields at the end of the string, or if they're not present.)
mapfile < <(echo -n "$IN") -td \; -c 1 -C myPubliMail
# Seq: 0: Sending mail to 'bla#some.com', done.
# Seq: 1: Sending mail to 'john#home.com', done.
# Seq: 2: Sending mail to 'Full Name <fulnam#other.org>', done.
Or you could use <<<, and in the function body include some processing to drop the newline it adds:
myPubliMail() {
local seq=$1 dest="${2%$'\n'}"
printf "Seq: %6d: Sending mail to '%s'..." $seq "$dest"
# mail -s "This is not a spam..." "$dest" </path/to/body
printf "\e[3D, done.\n"
}
mapfile <<<"$IN" -td \; -c 1 -C myPubliMail
# Renders the same output:
# Seq: 0: Sending mail to 'bla#some.com', done.
# Seq: 1: Sending mail to 'john#home.com', done.
# Seq: 2: Sending mail to 'Full Name <fulnam#other.org>', done.
Split string based on delimiter in shell
If you can't use bash, or if you want to write something that can be used in many different shells, you often can't use bashisms -- and this includes the arrays we've been using in the solutions above.
However, we don't need to use arrays to loop over "elements" of a string. There is a syntax used in many shells for deleting substrings of a string from the first or last occurrence of a pattern. Note that * is a wildcard that stands for zero or more characters:
(The lack of this approach in any solution posted so far is the main reason I'm writing this answer ;)
${var#*SubStr} # drops substring from start of string up to first occurrence of `SubStr`
${var##*SubStr} # drops substring from start of string up to last occurrence of `SubStr`
${var%SubStr*} # drops substring from last occurrence of `SubStr` to end of string
${var%%SubStr*} # drops substring from first occurrence of `SubStr` to end of string
As explained by Score_Under:
# and % delete the shortest possible matching substring from the start and end of the string respectively, and
## and %% delete the longest possible matching substring.
Using the above syntax, we can create an approach where we extract substring "elements" from the string by deleting the substrings up to or after the delimiter.
The codeblock below works well in bash (including Mac OS's bash), dash, ksh, lksh, yash, zsh, and busybox's ash:
(Thanks to Adam Katz's comment, making this loop a lot simpler!)
IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
while [ "$IN" != "$iter" ] ;do
# extract the substring from start of string up to delimiter.
iter=${IN%%;*}
# delete this first "element" AND next separator, from $IN.
IN="${IN#$iter;}"
# Print (or doing anything with) the first "element".
printf '> [%s]\n' "$iter"
done
# > [bla#some.com]
# > [john#home.com]
# > [Full Name <fulnam#other.org>]
Why not cut?
cut is useful for extracting columns in big files, but forking repeatedly (var=$(echo ... | cut ...)) quickly becomes overkill!
Here is a correct syntax using cut, tested under many POSIX shells, as suggested by this other answer from DougW:
IN="bla#some.com;john#home.com;Full Name <fulnam#other.org>"
i=1
while iter=$(echo "$IN"|cut -d\; -f$i) ; [ -n "$iter" ] ;do
printf '> [%s]\n' "$iter"
i=$((i+1))
done
I wrote this in order to compare execution times.
On my Raspberry Pi, this looks like:
$ export TIMEFORMAT=$'(%U + %S) / \e[1m%R\e[0m : %P '
$ time sh splitDemo.sh >/dev/null
(0.000 + 0.019) / 0.019 : 99.63
$ time sh splitDemo_cut.sh >/dev/null
(0.051 + 0.041) / 0.188 : 48.98
The overall execution time is something like 10x longer when using one fork to cut per field!
This worked for me:
string="1;2"
echo $string | cut -d';' -f1 # output is 1
echo $string | cut -d';' -f2 # output is 2
I think AWK is the best and most efficient command to solve your problem. AWK is included by default in almost every Linux distribution.
echo "bla#some.com;john#home.com" | awk -F';' '{print $1,$2}'
will give
bla#some.com john#home.com
Of course you can store each email address by redefining the awk print fields.
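For example, a sketch of capturing the two fields into shell variables (addr1 and addr2 are just hypothetical names; this assumes the addresses themselves contain no spaces):
read -r addr1 addr2 < <(echo "bla#some.com;john#home.com" | awk -F';' '{print $1,$2}')
echo "$addr1"   # first address
echo "$addr2"   # second address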
How about this approach:
IN="bla#some.com;john#home.com"
set -- "$IN"
IFS=";"; declare -a Array=($*)
echo "${Array[#]}"
echo "${Array[0]}"
echo "${Array[1]}"
Source
echo "bla#some.com;john#home.com" | sed -e 's/;/\n/g'
bla#some.com
john#home.com
This also works:
IN="bla#some.com;john#home.com"
echo ADD1=`echo $IN | cut -d \; -f 1`
echo ADD2=`echo $IN | cut -d \; -f 2`
Be careful, this solution is not always correct. In case you pass "bla#some.com" only, it will assign it to both ADD1 and ADD2.
A different take on Darron's answer, this is how I do it:
IN="bla#some.com;john#home.com"
read ADDR1 ADDR2 <<<$(IFS=";"; echo $IN)
How about this one liner, if you're not using arrays:
IFS=';' read ADDR1 ADDR2 <<<$IN
In Bash, here is a bulletproof way that will work even if your variable contains newlines:
IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
Look:
$ in=$'one;two three;*;there is\na newline\nin this field'
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two three" [2]="*" [3]="there is
a newline
in this field")'
The trick for this to work is to use the -d option of read (delimiter) with an empty delimiter, so that read is forced to read everything it's fed. And we feed read with exactly the content of the variable in, with no trailing newline thanks to printf. Note that we're also putting the delimiter in printf to ensure that the string passed to read has a trailing delimiter. Without it, read would trim potential trailing empty fields:
$ in='one;two;three;' # there's an empty field
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two" [2]="three" [3]="")'
the trailing empty field is preserved.
Update for Bash≥4.4
Since Bash 4.4, the builtin mapfile (aka readarray) supports the -d option to specify a delimiter. Hence another canonical way is:
mapfile -d ';' -t array < <(printf '%s;' "$in")
Without setting the IFS
If you just have one colon you can do that:
a="foo:bar"
b=${a%:*}
c=${a##*:}
you will get:
b = foo
c = bar
Here is a clean 3-liner:
in="foo#bar;bizz#buzz;fizz#buzz;buzz#woof"
IFS=';' list=($in)
for item in "${list[#]}"; do echo $item; done
where IFS delimit words based on the separator and () is used to create an array. Then [#] is used to return each item as a separate word.
If you've any code after that, you also need to restore $IFS, e.g. unset IFS.
The following Bash/zsh function splits its first argument on the delimiter given by the second argument:
split() {
local string="$1"
local delimiter="$2"
if [ -n "$string" ]; then
local part
while read -d "$delimiter" part; do
echo $part
done <<< "$string"
echo $part
fi
}
For instance, the command
$ split 'a;b;c' ';'
yields
a
b
c
This output may, for instance, be piped to other commands. Example:
$ split 'a;b;c' ';' | cat -n
1 a
2 b
3 c
Compared to the other solutions given, this one has the following advantages:
IFS is not overridden: Due to dynamic scoping of even local variables, overriding IFS over a loop causes the new value to leak into function calls performed from within the loop.
Arrays are not used: Reading a string into an array using read requires the flag -a in Bash and -A in zsh.
If desired, the function may be put into a script as follows:
#!/usr/bin/env bash
split() {
# ...
}
split "$#"
you can apply awk to many situations
echo "bla#some.com;john#home.com"|awk -F';' '{printf "%s\n%s\n", $1, $2}'
also you can use this
echo "bla#some.com;john#home.com"|awk -F';' '{print $1,$2}' OFS="\n"
There is a simple and smart way like this:
echo "add:sfff" | xargs -d: -i echo {}
But you must use GNU xargs; BSD xargs doesn't support -d delim. If you use an Apple Mac like me, you can install GNU xargs:
brew install findutils
then
echo "add:sfff" | gxargs -d: -i echo {}
So many answers and so many complexities. Try out a simpler solution:
echo "string1, string2" | tr , "\n"
tr (translate) replaces the first argument with the second argument in the input.
So tr , "\n" replaces the comma with a newline character in the input, and it becomes:
string1
string2
There are some cool answers here (errator esp.), but for something analogous to split in other languages -- which is what I took the original question to mean -- I settled on this:
IN="bla#some.com;john#home.com"
declare -a a="(${IN//;/ })";
Now ${a[0]}, ${a[1]}, etc, are as you would expect. Use ${#a[*]} for number of terms. Or to iterate, of course:
for i in ${a[*]}; do echo $i; done
IMPORTANT NOTE:
This works in cases where there are no spaces to worry about, which solved my problem, but may not solve yours. Go with the $IFS solution(s) in that case.
If there are no spaces, why not this?
IN="bla#some.com;john#home.com"
arr=(`echo $IN | tr ';' ' '`)
echo ${arr[0]}
echo ${arr[1]}
This is the simplest way to do it.
spo='one;two;three'
OIFS=$IFS
IFS=';'
spo_array=($spo)
IFS=$OIFS
echo ${spo_array[*]}
Apart from the fantastic answers that were already provided, if it is just a matter of printing out the data you may consider using awk:
awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
This sets the field separator to ;, so that it can loop through the fields with a for loop and print accordingly.
Test
$ IN="bla#some.com;john#home.com"
$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
> [bla#some.com]
> [john#home.com]
With another input:
$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "a;b;c d;e_;f"
> [a]
> [b]
> [c d]
> [e_]
> [f]
IN="bla#some.com;john#home.com"
IFS=';'
read -a IN_arr <<< "${IN}"
for entry in "${IN_arr[#]}"
do
echo $entry
done
Output
bla#some.com
john#home.com
System : Ubuntu 12.04.1
Use the set built-in to load up the $@ array:
IN="bla#some.com;john#home.com"
IFS=';'; set $IN; IFS=$' \t\n'
Then, let the party begin:
echo $#
for a; do echo $a; done
ADDR1=$1 ADDR2=$2
Two Bourne-ish alternatives, neither of which requires bash arrays:
Case 1: Keep it nice and simple: Use a NewLine as the Record-Separator... eg.
IN="bla#some.com
john#home.com"
while read i; do
# process "$i" ... eg.
echo "[email:$i]"
done <<< "$IN"
Note: in this first case no sub-process is forked to assist with list manipulation.
Idea: Maybe it is worth using NL extensively internally, and only converting to a different RS when generating the final result externally.
Case 2: Using a ";" as a record separator... eg.
NL="
" IRS=";" ORS=";"
conv_IRS() {
exec tr "$1" "$NL"
}
conv_ORS() {
exec tr "$NL" "$1"
}
IN="bla#some.com;john#home.com"
IN="$(conv_IRS ";" <<< "$IN")"
while read i; do
# process "$i" ... eg.
echo -n "[email:$i]$ORS"
done <<< "$IN"
In both cases a sub-list can be composed within the loop that persists after the loop has completed. This is useful when manipulating lists in memory instead of storing lists in files. {p.s. keep calm and carry on B-) }
In Android shell, most of the proposed methods just do not work:
$ IFS=':' read -ra ADDR <<<"$PATH"
/system/bin/sh: can't create temporary file /sqlite_stmt_journals/mksh.EbNoR10629: No such file or directory
What does work is:
$ for i in ${PATH//:/ }; do echo $i; done
/sbin
/vendor/bin
/system/sbin
/system/bin
/system/xbin
where // means global replacement.
IN='bla#some.com;john#home.com;Charlie Brown <cbrown#acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)'
set -f
oldifs="$IFS"
IFS=';'; arrayIN=($IN)
IFS="$oldifs"
for i in "${arrayIN[#]}"; do
echo "$i"
done
set +f
Output:
bla#some.com
john#home.com
Charlie Brown <cbrown#acme.com
!"#$%&/()[]{}*? are no problem
simple is beautiful :-)
Explanation: Simple assignment using parentheses () converts a semicolon-separated list into an array, provided you have the correct IFS while doing that. A standard FOR loop handles the individual items in that array as usual.
Notice that the list given for the IN variable must be "hard" quoted, that is, with single ticks.
IFS must be saved and restored since Bash does not treat an assignment the same way as a command. An alternate workaround is to wrap the assignment inside a function and call that function with a modified IFS. In that case separate saving/restoring of IFS is not needed. Thanks to "Bize" for pointing that out.
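A sketch of that function-based workaround (split_in is a hypothetical name; the IFS=';' prefix is only in effect while the function runs, so the caller's IFS is untouched afterwards):
split_in() { arrayIN=($IN); }
IN='bla#some.com;john#home.com;simple is beautiful :-)'
set -f                # still worth disabling globbing around the unquoted expansion, as above
IFS=';' split_in
set +f
declare -p arrayIN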
Here's my answer!
DELIMITER_VAL='='
read -d '' F_ABOUT_DISTRO_R <<"EOF"
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
NAME="Ubuntu"
VERSION="14.04.4 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.4 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
EOF
SPLIT_NOW=$(awk -F$DELIMITER_VAL '{for(i=1;i<=NF;i++){printf "%s\n", $i}}' <<<"${F_ABOUT_DISTRO_R}")
while read -r line; do
SPLIT+=("$line")
done <<< "$SPLIT_NOW"
for i in "${SPLIT[#]}"; do
echo "$i"
done
Why is this approach "the best" for me?
For two reasons:
You do not need to escape the delimiter;
You will not have problems with blank spaces. The values will be properly separated in the array.
A one-liner to split a string separated by ';' into an array is:
IN="bla#some.com;john#home.com"
ADDRS=( $(IFS=";" echo "$IN") )
echo ${ADDRS[0]}
echo ${ADDRS[1]}
This only sets IFS in a subshell, so you don't have to worry about saving and restoring its value.

Bash script to filter value from key value [duplicate]

How can I save only a substring of file names from a directory without the file extension?

I have a directory that I'm reading from and I want to save only the date representation as a string.
I am close to getting it, although I know there is probably an easier way. Here is what I have so far:
#files are in the format of "THIS_20200420.csv" so I want only "20200420"
declare -a arr
declare -a arr2
FILES=test2/*.csv
for file in $FILES
do
arr=(${arr[*]} "${file##*/}")
done
for i in "${arr[#]}"
do
arr2+=$(echo $i | cut -c6-13)
done
for item in "${arr2[#]}"
do
echo $item
done
the output shows the array only having one element which is all the strings concatenated:
20200110202001202020021920200220202004202020042220200110202001202020021920200220202004202020042220200219202002202020042020200422
I'm bashing my head against my computer at this point.
arr=(
"THIS_20200420.csv"
"THIS_20200421.csv"
"THIS_20200422.csv"
"THIS_20200423.csv"
"THIS_20200424.csv"
"THIS_20200425.csv"
"THIS_20200426.csv"
"THIS_20200427.csv"
"THIS_20200428.csv"
"THIS_20200429.csv"
"THIS_20200430.csv" )
arr=( ${arr[@]//*_} )
arr=( ${arr[@]//.*} )
echo "arr: ${arr[@]}"
Explanation:
arr=( ${arr[@]//*_} ) will match all chars up to '_' for each element, and replace them with an empty string.
arr=( ${arr[@]//.*} ) will match all chars after '.' for each element, and replace them with an empty string.
For more information on parameter expansion, a good reference is TLDP's guide on parameter expansion.
Try this
declare -a arrayname=($(ls -1 test2/*.csv | grep -o '[0-9]*'))
Demo:
$ls -1 *csv
THIS_20200420.csv
THIS_20200421.csv
THIS_20200422.csv
THIS_20200423.csv
THIS_20200424.csv
THIS_20200425.csv
THIS_20200426.csv
THIS_20200427.csv
THIS_20200428.csv
THIS_20200429.csv
THIS_20200430.csv
$declare -a arrayname=($(ls -1 *csv | grep -o '[0-9]*'))
$echo ${arrayname[@]}
20200420 20200421 20200422 20200423 20200424 20200425 20200426 20200427 20200428 20200429 20200430
$echo ${arrayname[2]}
20200422
$
You could achieve this using a loop with awk:
$ for file in *.csv; do echo $file | awk -F '[^[:alnum:]]' '{print $2}'; done
The -F '[^[:alnum:]]' tells awk to use non alphanumeric characters as the delimiter.
Another way to do this is to use bash shell parameter expansion to echo only the part of the filename you want. This obviously only works if your filenames have consistent formatting:
$ for file in *.csv; do echo "${file:5:8}"; done
I thought it would be nice to use bash parameter expansion to strip the unwanted prefix and suffix but you can't have nested expansion (afaict) so this is the best I could come up with:
$ for file in *.csv; do echo "$(tmp=${file%.csv}; echo ${tmp#THIS_})"; done
Meet Cut! A good friend of Linux Users
for file in ./*.csv; do echo $file | cut -d "_" -f 2 | cut -d "." -f 1 ; done
This one line should do the trick!
Example:
Use an array for the files assignment and parameter expansion.
#!/usr/bin/env bash
shopt -s nullglob
##: Save the files ending in *.csv in an array
## so it expands properly, variable assignment does not expand the glob *
files=(test2/*.csv)
##: Keep only the file names without the pathname, longest match
files=("${files[@]##*/}")
##: Keep only the file names without the .csv extension
files=("${files[@]%.csv}")
##: Keep only the part of the filename after the _, shortest match from the beginning
files=("${files[@]#*_}")
printf '%s ' "${files[@]}"

How to perform a for loop on each character in a string in Bash?

I have a variable like this:
words="这是一条狗。"
I want to make a for loop on each of the characters, one at a time, e.g. first character="这", then character="是", character="一", etc.
The only way I know is to output each character to a separate line in a file, then use while read line, but this seems very inefficient.
How can I process each character in a string through a for loop?
You can use a C-style for loop:
foo=string
for (( i=0; i<${#foo}; i++ )); do
echo "${foo:$i:1}"
done
${#foo} expands to the length of foo. ${foo:$i:1} expands to the substring starting at position $i of length 1.
With sed on the dash shell with LANG=en_US.UTF-8, I got the following working right:
$ echo "你好嗎 新年好。全型句號" | sed -e 's/\(.\)/\1\n/g'
你
好
嗎
新
年
好
。
全
型
句
號
and
$ echo "Hello world" | sed -e 's/\(.\)/\1\n/g'
H
e
l
l
o
w
o
r
l
d
Thus, output can be looped with while read ... ; do ... ; done
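For example, a minimal sketch of that loop (the test skips the empty line produced by the trailing newline):
echo "你好嗎" | sed -e 's/\(.\)/\1\n/g' | while read -r ch; do
[ -n "$ch" ] && echo "char: $ch"
done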
Edited to translate the sample text into English:
"你好嗎 新年好。全型句號" is zh_TW.UTF-8 encoding for:
"你好嗎" = How are you[ doing]
" " = a normal space character
"新年好" = Happy new year
"。全型空格" = a double-byte-sized full-stop followed by text description
${#var} returns the length of var
${var:pos:N} returns N characters from pos onwards
Examples:
$ words="abc"
$ echo ${words:0:1}
a
$ echo ${words:1:1}
b
$ echo ${words:2:1}
c
so it is easy to iterate.
another way:
$ grep -o . <<< "abc"
a
b
c
or
$ grep -o . <<< "abc" | while read letter; do echo "my letter is $letter" ; done
my letter is a
my letter is b
my letter is c
I'm surprised no one has mentioned the obvious bash solution utilizing only while and read.
while read -n1 character; do
echo "$character"
done < <(echo -n "$words")
Note the use of echo -n to avoid the extraneous newline at the end. printf is another good option and may be more suitable for your particular needs. If you want to ignore whitespace then replace "$words" with "${words// /}".
Another option is fold. Please note however that it should never be fed into a for loop. Rather, use a while loop as follows:
while read char; do
echo "$char"
done < <(fold -w1 <<<"$words")
The primary benefit to using the external fold command (of the coreutils package) would be brevity. You can feed its output to another command such as xargs (part of the findutils package) as follows:
fold -w1 <<<"$words" | xargs -I% -- echo %
You'll want to replace the echo command used in the example above with the command you'd like to run against each character. Note that xargs will discard whitespace by default. You can use -d '\n' to disable that behavior.
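For instance, a sketch where the lone space in the input survives as its own item because -d '\n' turns off the default blank handling:
words="a b"
# each output line of fold (including the line holding the single space) becomes one item
fold -w1 <<<"$words" | xargs -d '\n' -I% -- echo "[%]"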
Internationalization
I just tested fold with some of the Asian characters and realized it doesn't have Unicode support. So while it is fine for ASCII needs, it won't work for everyone. In that case there are some alternatives.
I'd probably replace fold -w1 with an awk array:
awk 'BEGIN{FS=""} {for (i=1;i<=NF;i++) print $i}'
Or the grep command mentioned in another answer:
grep -o .
Performance
FYI, I benchmarked the 3 aforementioned options. The first two were fast, nearly tying, with the fold loop slightly faster than the while loop. Unsurprisingly xargs was the slowest... 75x slower.
Here is the (abbreviated) test code:
words=$(python -c 'from string import ascii_letters as l; print(l * 100)')
testrunner(){
for test in test_while_loop test_fold_loop test_fold_xargs test_awk_loop test_grep_loop; do
echo "$test"
(time for (( i=1; i<$((${1:-100} + 1)); i++ )); do "$test"; done >/dev/null) 2>&1 | sed '/^$/d'
echo
done
}
testrunner 100
Here are the results:
test_while_loop
real 0m5.821s
user 0m5.322s
sys 0m0.526s
test_fold_loop
real 0m6.051s
user 0m5.260s
sys 0m0.822s
test_fold_xargs
real 7m13.444s
user 0m24.531s
sys 6m44.704s
test_awk_loop
real 0m6.507s
user 0m5.858s
sys 0m0.788s
test_grep_loop
real 0m6.179s
user 0m5.409s
sys 0m0.921s
I believe there is still no ideal solution that would correctly preserve all whitespace characters and is fast enough, so I'll post my answer. Using ${foo:$i:1} works, but is very slow, which is especially noticeable with large strings, as I will show below.
My idea is an expansion of a method proposed by Six, which involves read -n1, with some changes to keep all characters and work correctly for any string:
while IFS='' read -r -d '' -n 1 char; do
# do something with $char
done < <(printf %s "$string")
How it works:
IFS='' - Redefining the internal field separator to an empty string prevents stripping of spaces and tabs. Doing it on the same line as read means that it will not affect other shell commands.
-r - Means "raw", which prevents read from treating \ at the end of the line as a special line concatenation character.
-d '' - Passing empty string as a delimiter prevents read from stripping newline characters. Actually means that null byte is used as a delimiter. -d '' is equal to -d $'\0'.
-n 1 - Means that one character at a time will be read.
printf %s "$string" - Using printf instead of echo -n is safer, because echo treats -n and -e as options. If you pass "-e" as a string, echo will not print anything.
< <(...) - Passing string to the loop using process substitution. If you use here-strings instead (done <<< "$string"), an extra newline character is appended at the end. Also, passing string through a pipe (printf %s "$string" | while ...) would make the loop run in a subshell, which means all variable operations are local within the loop.
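A small sketch of why the process substitution matters here: a counter updated inside a piped loop is lost, while the < <(...) form keeps it (the counter is just a stand-in for any variable you set in the loop):
count=0
printf %s "abc" | while IFS='' read -r -d '' -n 1 char; do count=$((count+1)); done
echo "$count"    # still 0: the piped loop ran in a subshell
count=0
while IFS='' read -r -d '' -n 1 char; do count=$((count+1)); done < <(printf %s "abc")
echo "$count"    # 3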
Now, let's test the performance with a huge string.
I used the following file as a source:
https://www.kernel.org/doc/Documentation/kbuild/makefiles.txt
The following script was called through time command:
#!/bin/bash
# Saving contents of the file into a variable named `string'.
# This is for test purposes only. In real code, you should use
# `done < "filename"' construct if you wish to read from a file.
# Using `string="$(cat makefiles.txt)"' would strip trailing newlines.
IFS='' read -r -d '' string < makefiles.txt
while IFS='' read -r -d '' -n 1 char; do
# remake the string by adding one character at a time
new_string+="$char"
done < <(printf %s "$string")
# confirm that new string is identical to the original
diff -u makefiles.txt <(printf %s "$new_string")
And the result is:
$ time ./test.sh
real 0m1.161s
user 0m1.036s
sys 0m0.116s
As we can see, it is quite fast.
Next, I replaced the loop with one that uses parameter expansion:
for (( i=0 ; i<${#string}; i++ )); do
new_string+="${string:$i:1}"
done
The output shows exactly how bad the performance loss is:
$ time ./test.sh
real 2m38.540s
user 2m34.916s
sys 0m3.576s
The exact numbers may vary on different systems, but the overall picture should be similar.
I've only tested this with ascii strings, but you could do something like:
while test -n "$words"; do
c=${words:0:1} # Get the first character
echo character is "'$c'"
words=${words:1} # trim the first character
done
It is also possible to split the string into a character array using fold and then iterate over this array:
for char in `echo "这是一条狗。" | fold -w1`; do
echo $char
done
The C-style loop in @chepner's answer is in the shell function update_terminal_cwd, and the grep -o . solution is clever, but I was surprised not to see a solution using seq. Here's mine:
read word
for i in $(seq 1 ${#word}); do
echo "${word:i-1:1}"
done
#!/bin/bash
word=$(echo 'Your Message' |fold -w 1)
for letter in ${word} ; do echo "${letter} is a letter"; done
Here is the output:
Y is a letter
o is a letter
u is a letter
r is a letter
M is a letter
e is a letter
s is a letter
s is a letter
a is a letter
g is a letter
e is a letter
To iterate ASCII characters on a POSIX-compliant shell, you can avoid external tools by using the Parameter Expansions:
#!/bin/sh
str="Hello World!"
while [ ${#str} -gt 0 ]; do
next=${str#?}
echo "${str%$next}"
str=$next
done
or
str="Hello World!"
while [ -n "$str" ]; do
next=${str#?}
echo "${str%$next}"
str=$next
done
sed works with unicode
IFS=$'\n'
for z in $(sed 's/./&\n/g' <(printf '你好嗎')); do
echo hello: "$z"
done
outputs
hello: 你
hello: 好
hello: 嗎
Another approach, if you don't care about whitespace being ignored:
for char in $(sed -E s/'(.)'/'\1 '/g <<<"$your_string"); do
# Handle $char here
done
Another way is:
Characters="TESTING"
index=1
while [ $index -le ${#Characters} ]
do
echo ${Characters} | cut -c${index}-${index}
index=$(expr $index + 1)
done
fold and while read are great for the job as shown in some answers here. Contrary to those answers, I think it's much more intuitive to pipe in the order of execution:
echo "asdfg" | fold -w 1 | while read c; do
echo -n "$c "
done
Outputs: a s d f g
I share my solution:
read word
for char in $(grep -o . <<<"$word") ; do
echo $char
done
TEXT="hello world"
for i in {1..${#TEXT}}; do
echo ${TEXT[i]}
done
where {1..N} is an inclusive range
${#TEXT} is a number of letters in a string
${TEXT[i]} - you can get char from string like an item from an array

Is it possible to do a grep with keywords stored in the array?

Is it possible to do a grep with keywords stored in an array?
Here is the possible code snippet; how can I correct it?
args=("key1" "key2" "key3")
cat file_name |while read line
echo $line | grep -q -w ${args[c]}
done
At the moment, I can search for only one keyword. I would like to search for all the keywords which are stored in the args array.
args=("key1" "key2" "key3")
pat=$(echo ${args[@]}|tr " " "|")
grep -Eow "$pat" file
Or with the shell
args=("key1" "key2" "key3")
while read -r line
do
for i in ${args[@]}
do
case "$line" in
*"$i"*) echo "found: $line";;
esac
done
done <"file"
You can use some bash expansion magic to prefix each element with -e and pass each element of the array as a separate pattern. This may avoid some precedence issues where your patterns may interact badly with the | operator:
$ grep ${args[@]/#/-e } file_name
The downside to this is that you cannot have any spaces in your patterns because that will split the arguments to grep. You cannot put quotes around the above expansion, otherwise you get "-e pattern" as a single argument to grep.
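One sketch that sidesteps the word-splitting problem is to build the option list in a second array, so each pattern stays one argument even if it contains spaces (grep_args is a hypothetical name):
args=("key1" "key two" "key3")
grep_args=()
for pat in "${args[@]}"; do
grep_args+=(-e "$pat")
done
grep -w "${grep_args[@]}" file_name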
This is one way:
args=("key1" "key2" "key3")
keys=${args[@]/%/\\|} # result: key1\| key2\| key3\|
keys=${keys// } # result: key1\|key2\|key3\|
grep "${keys}" file_name
Edit:
Based on Pavel Shved's suggestion:
( IFS="|"; keys="${args[*]}"; keys="${keys//|/\\|}"; grep "${keys}" file_name )
The first version as a one-liner:
keys=${args[@]/%/\\|}; keys=${keys// }; grep "${keys}" file_name
Edit2:
Even better than the version using IFS:
printf -v keys "%s\\|" "${args[@]}"; grep "${keys}" file_name
I tend to use process substitution for everything. It's convenient when combined with grep's -f option:
Obtain patterns from FILE, one per line.
(Depending on the context, you might even want to combine that with -F, -x or -w, etc., for awesome effects.)
So:
#! /usr/bin/env bash
t=(8 12 24)
seq 30 | grep -f <(printf '%s\n' "${t[@]}")
and I get:
8
12
18
24
28
I basically write a pseudo-file with one item of the array per line, and then tell grep to use each of these lines as a pattern.
The command
( IFS="|" ; grep --perl-regexp "${args[*]}" ) <file_name
searches the file for each keyword in an array. It does so by constructing regular expression word1|word2|word3 that matches any word from the alternatives given (in perl mode).
If there is a way to join array elements into a string, delimiting them with a sequence of characters (namely, \|), it could be done without perl regexp.
perhaps something like this;
cat file_name | while read line
do
for arg in ${args[@]}
do
echo $line | grep -q -w $arg
done
done
not tested!

Resources