To parse colon-delimited fields I can use read with a custom IFS:
$ echo 'foo.c:41:switch (color) {' | { IFS=: read file line text && echo "$file | $line | $text"; }
foo.c | 41 | switch (color) {
If the last field contains colons, no problem: the colons are retained.
$ echo 'foo.c:42:case RED: //alert' | { IFS=: read file line text && echo "$file | $line | $text"; }
foo.c | 42 | case RED: //alert
A trailing delimiter is also retained...
$ echo 'foo.c:42:case RED: //alert:' | { IFS=: read file line text && echo "$file | $line | $text"; }
foo.c | 42 | case RED: //alert:
...Unless it's the only extra delimiter. Then it's stripped. Wait, what?
$ echo 'foo.c:42:case RED:' | { IFS=: read file line text && echo "$file | $line | $text"; }
foo.c | 42 | case RED
Bash, ksh93, and dash all do this, so I'm guessing it is POSIX standard behavior.
Why does it happen?
What's the best alternative?
I want to parse the strings above into three variables and I don't want to mangle any text in the third field. I had thought read was the way to go but now I'm reconsidering.
Yes, that's standard behaviour (see the read specification and Field Splitting). A few shells (ash-based including dash, pdksh-based, zsh, yash at least) used not to do it, but most have since been updated for POSIX compliance; the exceptions are zsh (when not in POSIX mode) and busybox sh.
That's the same for:
$ var='a:b:c:' IFS=:
$ set -- $var; echo "$#"
3
(see how the POSIX specification for read actually defers to the Field Splitting mechanism where a:b:c: is split into 3 fields, and so with IFS=: read -r a b c, there are as many fields as variables).
The rationale is that in ksh (on which the POSIX spec is based), $IFS (initially, in the Bourne shell, the internal field separator) became a field delimiter, presumably so that any list of elements (not containing the delimiter) could be represented.
When $IFS is a separator, one can't represent a list of one empty element ("" is split into a list of 0 elements, ":" into a list of two empty elements¹). When it's a delimiter, you can express a list of zero elements with "", one empty element with ":", or two empty elements with "::".
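You can see the delimiter behaviour directly (a quick transcript; bash, ksh93, and dash all agree here):
$ IFS=:
$ v=; set -- $v; echo "$#"
0
$ v=:; set -- $v; echo "$#"
1
$ v=::; set -- $v; echo "$#"
2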
It's a bit unfortunate as one of the most common usages of $IFS is to split $PATH. And a $PATH like /bin:/usr/bin: is meant to be split into "/bin", "/usr/bin", "", not just "/bin" and "/usr/bin".
Now, with POSIX shells (but not all shells are compliant in that regard), for word splitting upon parameter expansion, that can be worked around with:
IFS=:; set -o noglob
for dir in $PATH""; do
  something with "${dir:-.}"
done
That trailing "" makes sure that if $PATH ends in a trailing :, an extra empty element is added. And also that an empty $PATH is treated as one empty element as it should be.
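For instance (a sketch using $p as a stand-in, so as not to clobber the real $PATH):
$ p=/bin:/usr/bin:
$ IFS=:; set -o noglob
$ for dir in $p""; do printf '<%s>\n' "${dir:-.}"; done
</bin>
</usr/bin>
<.>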
That approach can't be used for read though.
Short of switching to zsh, there's no easy workaround other than inserting an extra : and removing it afterwards, like:
echo a:b:c: | sed 's/:/::/2' | { IFS=: read -r x y z; z=${z#:}; echo "$z"; }
Or (less portable):
echo a:b:c: | paste -d: - /dev/null | { IFS=: read -r x y z; z=${z%:}; echo "$z"; }
I've also added -r, which you generally want when using read.
Most likely here you'd want to use a proper text processing utility like sed/awk/perl instead of writing convoluted and probably inefficient code around read, which was not designed for this.
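For example, a minimal awk sketch for the strings in the question, which keeps the third field verbatim (trailing colons included) by stripping the first two fields off a copy of the whole line:
$ echo 'foo.c:42:case RED: //alert:' | awk -F: '{ rest = $0; sub(/^[^:]*:[^:]*:/, "", rest); print $1 " | " $2 " | " rest }'
foo.c | 42 | case RED: //alert: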
¹ Though in the Bourne shell, that was still split into zero elements, as there was no distinction between IFS-whitespace and IFS-non-whitespace characters there; that distinction was also added by ksh
One "feature" of read is that it strips leading and trailing whitespace separators in the variables it populates; this is explained in much more detail in the linked answer. It lets read do what beginners expect with, for example, read first rest <<< ' foo bar ' (note the extra spaces).
The take-away? It is hard to do accurate text processing using Bash and shell tools. If you want full control, it's probably better to use a "stricter" language such as Python, where split() will do what you want, though you may have to dig much deeper into string handling to explicitly remove newline separators or handle encoding.
Related
I'm writing a shell script that should be somewhat secure, i.e., does not pass secure data through parameters of commands and preferably does not use temporary files. How can I pass a variable to the standard input of a command?
Or, if it's not possible, how can I correctly use temporary files for such a task?
Passing a value to standard input in Bash is as simple as:
your-command <<< "$your_variable"
Always make sure you put quotes around variable expressions!
Be cautious: this works in bash (and in ksh93 and zsh), but not in plain POSIX sh.
Simple, but error-prone: using echo
Something as simple as this will do the trick:
echo "$blah" | my_cmd
Do note that this may not work correctly if $blah contains -n, -e, -E, etc., or if it contains backslashes (bash's copy of echo preserves literal backslashes in the absence of -e by default, but will treat them as escape sequences and replace them with the corresponding characters, even without -e, if the optional XSI extensions are enabled).
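A quick way to see the first pitfall (bash; wc -c counts the bytes that reach the pipe):
blah=-n
echo "$blah" | wc -c    # 0: bash's echo treats "-n" as an option and prints nothing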
More sophisticated approach: using printf
printf '%s\n' "$blah" | my_cmd
This does not have the disadvantages listed above: all possible C strings (strings not containing NULs) are printed unchanged.
(cat <<END
$passwd
END
) | command
The cat is not really needed, but it helps to structure the code better and allows you to use more commands in parentheses as input to your command.
Note that the echo "$var" | command operations mean that standard input is limited to the line(s) echoed. If you also want the terminal to be connected, then you'll need to be fancier:
{ echo "$var"; cat - ; } | command
( echo "$var"; cat - ) | command
This means that the first line(s) will be the contents of $var but the rest will come from cat reading its standard input. If the command does not do anything too fancy (try to turn on command line editing, or run like vim does) then it will be fine. Otherwise, you need to get really fancy - I think expect or one of its derivatives is likely to be appropriate.
The command line notations are practically identical - but the second semi-colon is necessary with the braces whereas it is not with parentheses.
This robust and portable way has already appeared in comments. It should be a standalone answer.
printf '%s' "$var" | my_cmd
or
printf '%s\n' "$var" | my_cmd
Notes:
It's better than echo; the reasons are here: Why is printf better than echo?
printf "$var" is wrong. The first argument is the format string, in which sequences like %s or \n are interpreted. To pass the variable's contents verbatim, it must not be used as the format.
Usually variables don't contain trailing newlines. The former command (with %s) passes the variable as it is. However, tools that work with text may ignore or complain about an incomplete last line (see Why should text files end with a newline?), so you may want the latter command (with %s\n), which appends a newline character to the content of the variable. Non-obvious facts:
A here string in Bash (<<<"$var" my_cmd) does append a newline.
Any method that appends a newline results in non-empty stdin of my_cmd, even if the variable is empty or undefined.
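Both facts are easy to verify with wc -c, which counts the bytes arriving on stdin:
var=
printf '%s' "$var" | wc -c    # 0: stdin is truly empty
printf '%s\n' "$var" | wc -c  # 1: just the appended newline
wc -c <<< "$var"              # 1: the here string appends a newline too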
I liked Martin's answer, but a caution about a variant you sometimes see:
your-command <<< """$your_variable"""
The extra quotes do nothing: """$your_variable""" is parsed as "" "$your_variable" "", which is exactly equivalent to plain "$your_variable". Double quotes are already sufficient when the variable contains " or !.
As per Martin's answer, there is a Bash feature called Here Strings (which itself is a variant of the more widely supported Here Documents feature):
3.6.7 Here Strings
A variant of here documents, the format is:
<<< word
The word is expanded and supplied to the command on its standard input.
Note that Here Strings are not POSIX (bash, ksh93, and zsh have them), so, for improved portability, you'd probably be better off with the original Here Documents feature, as per PoltoS's answer:
( cat <<EOF
$variable
EOF
) | cmd
Or, a simpler variant of the above:
(cmd <<EOF
$variable
EOF
)
You can omit ( and ), unless you want to have this redirected further into other commands.
Try this:
echo "$variable" | command
If you came here from a duplicate, you are probably a beginner who tried to do something like
"$variable" >file
or
"$variable" | wc -l
where you obviously meant something like
echo "$variable" >file
echo "$variable" | wc -l
(Real beginners also forget the quotes; as a rule, use quotes unless you have a specific reason to omit them, at least until you understand quoting.)
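A quick illustration of why the quotes matter, with a value containing a run of spaces:
variable='one   two'
echo $variable     # unquoted: word splitting collapses the run -> "one two"
echo "$variable"   # quoted: prints "one   two" unchanged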
I have a variable with some lines in it and I would like to pad it with a number of newlines defined in another variable. However, it seems that the subshell may be stripping the trailing newlines. I cannot just use '\n' with echo -e, as the lines may already contain escaped characters which need to be printed as-is.
I have found I can print an arbitrary number of newlines using this.
n=5
yes '' | sed -n "1,${n}p;${n}q"
But if I run this in a subshell to store it in the variable, the subshell appears to strip the trailing newlines.
I can approximate the functionality, but it's clumsy, and given how I am using it I would much rather be able to just call echo "$var", or even use $var itself for things like string concatenation. This approximation runs into the same subshell issue as soon as the last (filler) line of the variable is removed.
This is my approximation
n=5
var="test"
#I could also just set n=6
cmd="1,$((n+1))p;$((n+1))q"
var="$var$(yes '' | sed -n $cmd; echo .)"
#Now I can use it with
echo "$var" | head -n -1
Essentially I need a good way of appending a number of newlines to a variable which can then be printed with echo.
I would like to keep this POSIX-compliant if at all possible, but at this stage a bash solution would also be acceptable. I am also using this as part of a tool for which I have set myself the challenge of minimizing line and character count while maintaining readability, but I can work that out once I have a workable solution.
Command substitutions with either $( ) or backticks will trim trailing newlines. So don't use them; use the shell's built-in string manipulation:
n=5
var="test"
while [ "$n" -gt 0 ]; do
var="$var
"
n=$((n-1))
done
Note that there must be nothing after the var="$var (before the newline), and nothing before the " on the next line (no indentation!).
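To check that the padding survived, count the newline characters without going through a command substitution (wc -l counts newlines):
printf '%s' "$var" | wc -l    # 5: "test" followed by five newlines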
A sequence of n newlines:
printf -v spaces "%*s" $n ""
newlines=${spaces// /$'\n'}
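Putting that together for the original use case (bash-only: printf -v and the ${var//…} expansion are bashisms):
n=5
var="test"
printf -v spaces "%*s" "$n" ""    # n spaces, stored without a subshell
var="$var${spaces// /$'\n'}"      # turn each space into a newline
printf '%s' "$var" | wc -l        # 5: the trailing newlines are intact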
I am using the bash shell and want to execute a command that takes filenames as arguments; say the cat command. I need to provide the arguments sorted by modification time (oldest first) and unfortunately the filenames can contain spaces and a few other difficult characters such as "-", "[", "]". The files to be provided as arguments are all the *.txt files in my directory. I cannot find the right syntax. Here are my efforts.
Of course, cat *.txt fails; it does not give the desired order of the arguments.
cat `ls -rt *.txt`
The `ls -rt *.txt` gives the desired order, but now the blanks in the filenames cause confusion: the shell treats them as word separators, so cat receives fragments of filenames as separate arguments.
cat `ls -brt *.txt`
I tried -b to escape non-graphic characters, but the blanks are still seen as filename separators by cat.
cat `ls -Qrt *.txt`
I tried -Q to put entry names in double quotes.
cat `ls -rt --quoting-style=escape *.txt`
I tried this and other variants of the quoting style.
Nothing that I've tried works. Either the blanks are treated as filename separators by cat, or the entire list of filenames is treated as one (invalid) argument.
Please advise!
Using --quoting-style is a good start. The trick is in parsing the quoted file names. Backticks are simply not up to the job. We're going to have to be super explicit about parsing the escape sequences.
First, we need to pick a quoting style. Let's see how the various algorithms handle a crazy file name like "foo 'bar'\tbaz\nquux". That's a file name containing actual single and double quotes, plus a space, tab, and newline to boot. If you're wondering: yes, these are all legal, albeit unusual.
$ for style in literal shell shell-always shell-escape shell-escape-always c c-maybe escape locale clocale; do printf '%-20s <%s>\n' "$style" "$(ls --quoting-style="$style" '"foo '\''bar'\'''$'\t''baz '$'\n''quux"')"; done
literal <"foo 'bar' baz
quux">
shell <'"foo '\''bar'\'' baz
quux"'>
shell-always <'"foo '\''bar'\'' baz
quux"'>
shell-escape <'"foo '\''bar'\'''$'\t''baz '$'\n''quux"'>
shell-escape-always <'"foo '\''bar'\'''$'\t''baz '$'\n''quux"'>
c <"\"foo 'bar'\tbaz \nquux\"">
c-maybe <"\"foo 'bar'\tbaz \nquux\"">
escape <"foo\ 'bar'\tbaz\ \nquux">
locale <‘"foo 'bar'\tbaz \nquux"’>
clocale <‘"foo 'bar'\tbaz \nquux"’>
The ones that actually span two lines are no good, so literal, shell, and shell-always are out. Smart quotes aren't helpful, so locale and clocale are out. Here's what's left:
shell-escape <'"foo '\''bar'\'''$'\t''baz '$'\n''quux"'>
shell-escape-always <'"foo '\''bar'\'''$'\t''baz '$'\n''quux"'>
c <"\"foo 'bar'\tbaz \nquux\"">
c-maybe <"\"foo 'bar'\tbaz \nquux\"">
escape <"foo\ 'bar'\tbaz\ \nquux">
Which of these can we work with? Well, we're in a shell script. Let's use shell-escape.
There will be one file name per line. We can use a while read loop to read a line at a time. We'll also need IFS= and -r to disable any special character handling. A standard line processing loop looks like this:
while IFS= read -r line; do ... done < file
That "file" at the end is supposed to be a file name, but we don't want to read from a file, we want to read from the ls command. Let's use <(...) process substitution to swap in a command where a file name is expected.
while IFS= read -r line; do
    # process each line
done < <(ls -rt --quoting-style=shell-escape *.txt)
Now we need to convert each line with all the quoted characters into a usable file name. We can use eval to have the shell interpret all the escape sequences. (I almost always warn against using eval but this is a rare situation where it's okay.)
while IFS= read -r line; do
    eval "file=$line"
done < <(ls -rt --quoting-style=shell-escape *.txt)
If you wanted to work one file at a time we'd be done. But you want to pass all the file names at once to another command. To get to the finish line, the last step is to build an array with all the file names.
files=()
while IFS= read -r line; do
    eval "files+=($line)"
done < <(ls -rt --quoting-style=shell-escape *.txt)
cat "${files[@]}"
There we go. It's not pretty. It's not elegant. But it's safe.
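For what it's worth, with GNU find, sort, sed, and xargs you can avoid parsing ls altogether by keeping everything NUL-delimited; a sketch, not a drop-in (note -maxdepth 1, since find otherwise recurses):
find . -maxdepth 1 -name '*.txt' -printf '%T@ %p\0' |
  sort -zn |               # oldest first, NUL-terminated records
  sed -z 's/^[^ ]* //' |   # strip the "mtime " prefix
  xargs -0 cat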
Does this do what you want?
for i in $(ls -rt *.txt); do echo "FILE: $i"; cat "$i"; done
Note, though, that the unquoted $(ls -rt *.txt) still undergoes word splitting, so this breaks on exactly the space-containing filenames the question is about.
I am new to shell scripting. I want to iterate over the files in a directory that match the specific patterns below.
Ad_sf_03041500000.dat
SF_AD_0304150.DEL
SF_AD_0404141.EXP
The number of digits must exactly match these patterns.
I am using a ksh shell script. Could you please help me iterate over only those files in a for loop?
The patterns you are looking for are
Ad_sf_{11}([[:digit:]]).dat
SF_AD_{7}([[:digit:]]).DEL
SF_AD_{7}([[:digit:]]).EXP
Note that the {n}(...) pattern, to match exactly n occurrences of the following pattern, is an extension unique to ksh (as far as I know, not even zsh provides an equivalent).
To iterate over matching files, you can use
for f in Ad_sf_{11}(\d).dat SF_AD_{7}(\d).@(DEL|EXP); do
where I've used the "pick one" operator @(...) to combine the two shorter patterns into a single pattern, and \d, which ksh supports as a shorter version of [[:digit:]] when inside parentheses.
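A quick sanity check (ksh93; the file names are hypothetical):
touch Ad_sf_03041500000.dat Ad_sf_123.dat SF_AD_0304150.DEL
for f in Ad_sf_{11}(\d).dat SF_AD_{7}(\d).@(DEL|EXP); do
    print -r -- "$f"
done
# prints Ad_sf_03041500000.dat and SF_AD_0304150.DEL only;
# Ad_sf_123.dat has too few digits for {11}(\d)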
Automatic wildcard generation method. Print the filenames with leading text and line numbers...
POSIX shell:
2> /dev/null find \
    $(echo Ad_sf_03041500000.dat SF_AD_0304150.DEL SF_AD_0404141.EXP |
      sed 's/[0-9]/[0-9]/g' ) |
while read f ; do
    echo "Here's $f"
done | nl
ksh (with a spot borrowed from Chepner):
set - Ad_sf_03041500000.dat SF_AD_0304150.DEL SF_AD_0404141.EXP
for f in ${*//[0-9]/[0-9]} ; do
    [ -f "$f" ] || continue
    echo "Here's $f"
done | nl
Output of either method:
1 Here's Ad_sf_03041500000.dat
2 Here's SF_AD_0304150.DEL
3 Here's SF_AD_0404141.EXP
If the line numbers aren't wanted, omit the | nl. echo can be replaced with whatever command needs to be run on the files.
How the POSIX code works. The OP's spec is simple enough to churn out the correct wildcards with a little tweaking. Example:
echo Ad_sf_03041500000.dat SF_AD_0304150.DEL SF_AD_0404141.EXP |
sed 's/[0-9]/[0-9]/g'
Which outputs exactly the patterns needed (line feeds added for clarity):
Ad_sf_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat
SF_AD_[0-9][0-9][0-9][0-9][0-9][0-9][0-9].DEL
SF_AD_[0-9][0-9][0-9][0-9][0-9][0-9][0-9].EXP
The patterns above go to find, which prints only the matching filenames (not the pattern itself when there are no files); the filenames then go to a while loop.
(The ksh variant is the same method but uses pattern substitution, set, and test -f in place of sed and find.)
I have the STRING as given below. There is no specific separator between the keys. The only way to identify the keys is by the keyword "key_1", "key_2", etc.
All keys begin with "key_" and can never appear in the value of another:
STRING="key_1=mislanious_string1 key_2=miscellaneous_string2"
I want the output as below.
echo $STRING1 should print:
key_1=mislanious_string1
echo $STRING2 should print:
key_2=miscellaneous_string2
e.g:
If STRING="key_1=foobarzkey_2=bash" , then the output should look like , STRING1=key_1=foobarz and STRING2=key_2=bash.
There may be more keys, like key_1, key_2, key_3, etc. Each key starts with "key_" and can never appear in the value of another.
How can I do this in a UNIX bash shell?
Using grep -P (PCRE) to support multiple key-value pairs in input:
STRING="key_1=mislanious_string1key_2=miscellaneous_string2key_3=fookey_4=BASH"
grep -oP 'key_[^=]+=.*?(?=key_|$)' <<< "$STRING"
key_1=mislanious_string1
key_2=miscellaneous_string2
key_3=foo
key_4=BASH
To store them into BASH array you can use:
read -d '' -ra arr < <(grep -oP 'key_[^=]+=.*?(?=key_|$)' <<< "$STRING")
printf "%s\n" "${arr[@]}"
key_1=mislanious_string1
key_2=miscellaneous_string2
key_3=foo
key_4=BASH
declare -p arr
declare -a arr='([0]="key_1=mislanious_string1" [1]="key_2=miscellaneous_string2" [2]="key_3=foo" [3]="key_4=BASH")'
UPDATE: Here is a pure BASH (non-GNU) way of splitting these strings. We first insert an invisible character before every occurrence of the string key_ and then use that character for splitting:
STRING="key_1=mislanious_string1key_2=miscellaneous_string2key_3=fookey_4=BASH"
c=$'\x06'
s="${STRING//key_/${c}key_}"
arr=()
while [[ "$s" =~ ${c}(key_[^=]+=[^${c}]+)(.*) ]]; do
    arr+=( "${BASH_REMATCH[1]}" )
    s="${BASH_REMATCH[2]}"
done
Then to test:
printf "<%s>\n" "${arr[@]}"
<key_1=mislanious_string1>
<key_2=miscellaneous_string2>
<key_3=foo>
<key_4=BASH>
I like anubhava's grep -oP solution best. Here's an awk solution:
STRING="key_15=foobarzkey_3=bash"
awk -v RS="key_" 'NR>1{split($0, a, /=/); print "STRING" a[1] "=" RS $0}' <<< "$STRING"
STRING15=key_15=foobarz
STRING3=key_3=bash
So, to create that output as shell variables
eval $(awk -v RS="key_" 'NR>1{split($0, a, /=/); print "STRING" a[1] "=" RS $0}' <<< "$STRING")
echo $STRING3 # => key_3=bash
echo $STRING15 # => key_15=foobarz
This answer originally didn't recognize keys not preceded by whitespace. This has been fixed. In its current form this answer provides value as a portable solution. If you disagree, please let me know.
The answers provided by Glenn Jackman and anubhava are helpful, but use GNU extensions that are not available on all platforms (grep -P; awk with a multi-character RS value).
Here's a POSIX-compliant sed solution that should work on most platforms, using either bash, ksh, or zsh as the shell:
str='key_1=mislanious_string1 key_2=miscellaneous_string2key_3=last'
while read -r varDef; do
    [[ -n $varDef ]] && typeset "$varDef"
done < <(sed 's/\(key_\([0-9]\{1,\}\)=\)/\'$'\n''string\2=\1/g' <<<"$str")
# Print the variables created ($string1, $string2, $string3).
typeset -p ${!string@}
Note that lowercase variable names (string1, ...) are used so as to prevent potential conflicts with environment variables.
sed is used to split the string into key-value tokens each on their own line, preceded by the desired target variable name and =, effectively outputting shell variable assignments; e.g., for key_1, the sed command passes out:
string1=key_1=mislanious_string1
The while loop then reads each output line and uses typeset to declare and assign the variable (typeset was chosen for ksh compatibility; while typeset also works in bash and zsh, you'd typically use declare there); [[ -n $varDef ]] skips the empty line that the sed output starts with.
Note: This solution trims trailing whitespace from values, consistent with the example in the question. This trimming happens due to use of read with the default $IFS value (internal field separators) - to preserve trailing whitespace, simply use IFS= read instead of just read.
Also note that the use of process substitution to provide input (while ... done < <(sed ...)), as opposed to a pipeline (sed ... | while ...), is required to ensure that the variables are defined in the current shell rather than in a subshell (where they would not be visible to the current shell).
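The subshell pitfall is easy to demonstrate (bash):
unset x
printf 'x=42\n' | while read -r def; do typeset "$def"; done
echo "${x-unset}"    # prints "unset": the loop ran in a subshell
while read -r def; do typeset "$def"; done < <(printf 'x=42\n')
echo "${x-unset}"    # prints "42": the assignment happened in the current shell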
Some background info on what makes the above sed command POSIX-compliant:
POSIX only mandates basic regular expressions for sed, which takes away many features (e.g., quantifiers ? and +, alternation (|)) and makes escaping more cumbersome (e.g., ( and ) must be \-escaped).
POSIX sed also doesn't support escape sequences such as \n in replacement strings passed to s, so ANSI-C quoting is used to splice an \-escaped actual newline into the replacement string using $'\n'.
As an example of how useful the non-POSIX GNU sed extensions are, here's an equivalent command taking full advantage of GNU sed's features (extended regular expressions, support for \n), resulting in a shorter and more readable command:
sed -r 's/(key_([0-9]+)=)/\nstring\2=\1/g' <<<"$str"
Sometimes the simplest solution can be overlooked:
STRING="key_1=mislanious_string1key_2=miscellaneous_string2"
read STRING1 STRING2 <<< "${STRING//key_/ key_}"
echo $STRING1
echo $STRING2
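If the number of keys isn't fixed, the same substitution can feed an array instead (bash; this assumes, as the question states, that values contain neither whitespace nor the string key_):
read -ra STRINGS <<< "${STRING//key_/ key_}"
printf '%s\n' "${STRINGS[@]}"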