omit passing an empty quoted argument - bash

I have some variables in a bash script that may contain a file name or be unset. Their content should be passed as an additional argument to a program. But this leaves an empty argument when the variable is unset.
$ afile=/dev/null
$ anotherfile=/dev/null
$ unset empty
$ cat "$afile" "$empty" "$anotherfile"
cat: : No such file or directory
Without quotes, it works just fine as the additional argument is simply omitted. But as the variables may contain spaces, they have to be quoted here.
I understand that I could simply wrap the whole line in a test on emptiness.
if [ -z "$empty" ]; then
cat "$afile" "$anotherfile"
else
cat "$afile" "$empty" "$anotherfile"
fi
But one test for each variable would lead to a huge and convoluted decision tree.
Is there a more compact solution to this? Can bash be made to omit a quoted empty variable?

You can use an alternate value parameter expansion (${var+altvalue}) to include the quoted variable IF it's set:
cat ${afile+"$afile"} ${empty+"$empty"} ${anotherfile+"$anotherfile"}
Since the double-quotes are in the alternate value string (not around the entire parameter expression), they only take effect if the variable is set. Note that you can use either + (which uses the alternate value if the variable is set) or :+ (which uses the alternate value if the variable is set AND not empty).
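A quick illustration of the difference between the two forms (the variable names here are made up for the demo):
$ set_but_empty=""
$ unset never_set
$ echo "[${set_but_empty+present}]"    # + fires: the variable is set, even though empty
[present]
$ echo "[${set_but_empty:+present}]"   # :+ does not fire: it also requires non-empty
[]
$ echo "[${never_set+present}]"        # neither form fires for an unset variable
[]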

A pure bash solution is possible using arrays. While "$empty" will evaluate to an empty argument, "${empty[@]}" will expand to all the array fields, quoted, which are, in this case, none.
$ afile=(/dev/null)
$ unset empty
$ alsoempty=()
$ cat "${afile[#]}" "${empty[#]}" "${alsoempty[#]}"
In situations where arrays are not an option, refer to pasaba por aqui's more versatile answer.

Try with:
printf "%s\n%s\n%s\n" "$afile" "$empty" "$anotherfile" | egrep -v '^$' | tr '\n' '\0' | xargs -0 cat

In the case of a command like cat where you could replace an empty argument with an empty file, you can use the standard shell default replacement syntax:
cat "${file1:-/dev/null}" "${file2:-/dev/null}" "${file3:-/dev/null}"
Alternatively, you could create a concatenated output stream from the arguments which exist, either by piping (as shown below) or through process substitution:
{ [[ -n "$file1" ]] && cat "$file1";
[[ -n "$file2" ]] && cat "$file2";
[[ -n "$file3" ]] && cat "$file3"; } | awk ...
This could be simplified with a utility function:
cat_if_named() { [[ -n "$1" ]] && cat "$1"; }
In the particular case of cat to build up a new file, you could just do a series of appends:
# Start by emptying or creating the output file.
: > output_file
cat_if_named "$file1" >> output_file
cat_if_named "$file2" >> output_file
cat_if_named "$file3" >> output_file
If you need to retain the individual arguments -- for example, if you want to pass the list to grep, which will print the filename along with the matches -- you could build up an array of arguments, choosing only the arguments which exist:
args=()
[[ -n "$file1" ]] && args+=("$file1")
[[ -n "$file2" ]] && args+=("$file2")
[[ -n "$file3" ]] && args+=("$file3")
With bash 4.3 or better, you can use a nameref to make a utility function to do the above, which is almost certainly the most compact and general solution to the problem:
non_empty() {
    declare -n _args="$1"
    _args=()
    shift
    for arg; do [[ -n "$arg" ]] && _args+=("$arg"); done
}
e.g.:
non_empty my_args "$file1" "$file2" "$file3"
grep "$pattern" "${my_args[@]}"

Related

How can I add quotes around each word stored in a variable in shell script

I have a variable foo:
echo "$foo" ---> abc,bc,cde
I wanted to put quotes around each variable.
Expected result = 'abc','bc','cde'.
I have tried this way, but it's not working:
join_lines() {
    local IFS=${1:-,}
    set --
    while IFS= read -r line; do set -- "$@" "'$line'"; done
    echo "$*"
}
You could try the following, written and tested against the shown samples with GNU awk.
Without loop:
var="abc,bc,cde"
echo "$var" | awk -v s1="'" 'BEGIN{FS=",";OFS="\047,\047"} {$1=$1;$0=s1 $0 s1} 1'
With a loop, the usual way to go through all fields (comma separated):
var="abc,bc,cde"
echo "$var" | awk -v s1="'" 'BEGIN{FS=OFS=","} {for(i=1;i<=NF;i++){$i=s1 $i s1}} 1'
Output will be 'abc','bc','cde'.
As an alternative, using sed: replace every , with ',' and add a ' at the beginning and end of the line to wrap the first/last tokens.
sed -e "s/^/'/" -e "s/$/'/" -e "s/,/','/g"
On the surface, the question is how to convert a comma-separated list of values (stored in a shell variable) into a comma-separated list of quoted tokens. Extending the logic provided by the OP, but using shell arrays:
foo="abc,bc,cde"
IFS=, read -a items <<< "$foo"
result=
for r in "${items[#]}" ; do
[ "$result" ] && result+=","
result+="'$r'"
done
echo "RESULT=$result"
If needed, the logic can be placed into a function/filter:
function join_lines {
    local -a items
    local input result
    while IFS=, read -a items ; do
        result=
        for r in "${items[@]}" ; do
            [ "$result" ] && result+=","
            result+="'$r'"
        done
        echo "$result"
    done
}
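Used as a filter, for example:
$ echo "abc,bc,cde" | join_lines
'abc','bc','cde'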

bash for loop with same order as GNU "ls -v" ("version-number" sort)

In a bash script I want to do a typical "for file in somedir" but I want the files to be processed in the same order that "ls -v" returns them. I know the pitfalls of parsing the output of "ls". Is there some way to replicate "-v" without using "ls"? Thanks.
Assuming that this is "version number" sort order, this is also implemented by GNU sort. Thus, on a GNU platform:
somedir=/foo
while IFS= read -r -d '' filename; do
printf 'Processing file: %q\n' "$filename"
done < <(set -- "$somedir"/*; [[ -e $1 || -L $1 ]] && printf '%s\0' "$#" | sort -z -V)
If you really want to use a for loop rather than a while loop, parse into an array and iterate over that:
files=( )
while IFS= read -r -d '' filename; do
files+=( "$filename" )
done < <(set -- "$somedir"/*; [[ -e $1 || -L $1 ]] && printf '%s\0' "$#" | sort -z -V)
for filename in "${files[#]}"; do
printf 'Processing file: %q\n' "$filename"
done
To explain some of the magic above:
In < <(...), <(...) is a process substitution. It's replaced with a filename which, when read from, will return the output of the code enclosed. Thus, < <(...) will put that process substitution's output as the input to the while read loop. This loop form is described in BashFAQ #1. The reasons to use this kind of redirection instead of piping into the loop are given in BashFAQ #24.
set -- "$somedir"/* replaces the argument list within the current context (that context being the subshell running the process substitution!) with the results of "$somedir"/*; thus, (non-hidden, by default) contents of the directory named in the variable somedir.
[[ -e $1 || -L $1 ]] is true only if that glob expanded to at least one item; if it remained * (and no actual filesystem object exists by that name), gating output on this condition prevents the process substitution from emitting any output.
sort -z tells sort to delimit elements in both input and output with NULs -- a character that isn't allowed to exist in filenames.
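As a quick illustration of the version sort itself (file names made up for the demo):
$ printf '%s\n' file2.txt file10.txt file1.txt | sort -V
file1.txt
file2.txt
file10.txt
A plain lexical sort would put file10.txt before file2.txt.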

Loop through a comma-separated shell variable

Suppose I have a Unix shell variable as below
variable=abc,def,ghij
I want to extract all the values (abc, def and ghij) using a for loop and pass each value into a procedure.
The script should allow extracting arbitrary number of comma-separated values from $variable.
Not messing with IFS
Not calling external command
variable=abc,def,ghij
for i in ${variable//,/ }
do
# call your procedure/other scripts here below
echo "$i"
done
Using bash string manipulation http://www.tldp.org/LDP/abs/html/string-manipulation.html
You can use the following script to dynamically traverse through your variable, no matter how many fields it has as long as it is only comma separated.
variable=abc,def,ghij
for i in $(echo $variable | sed "s/,/ /g")
do
# call your procedure/other scripts here below
echo "$i"
done
Instead of the echo "$i" call above, between the do and done inside the for loop, you can invoke your procedure proc "$i".
Update: The above snippet works if the value of variable does not contain spaces. If you have such a requirement, please use one of the solutions that can change IFS and then parse your variable.
If you set a different field separator, you can directly use a for loop:
IFS=","
for v in $variable
do
# things with "$v" ...
done
You can also store the values in an array and then loop through it as indicated in How do I split a string on a delimiter in Bash?:
IFS=, read -ra values <<< "$variable"
for v in "${values[#]}"
do
# things with "$v"
done
Test
$ variable="abc,def,ghij"
$ IFS=","
$ for v in $variable
> do
> echo "var is $v"
> done
var is abc
var is def
var is ghij
You can find a broader approach in this solution to How to iterate through a comma-separated list and execute a command for each entry.
Examples on the second approach:
$ IFS=, read -ra vals <<< "abc,def,ghij"
$ printf "%s\n" "${vals[#]}"
abc
def
ghij
$ for v in "${vals[#]}"; do echo "$v --"; done
abc --
def --
ghij --
I think syntactically this is cleaner, and it also passes ShellCheck linting:
variable=abc,def,ghij
for i in ${variable//,/ }
do
# call your procedure/other scripts here below
echo "$i"
done
#!/bin/bash
TESTSTR="abc,def,ghij"
for i in $(echo $TESTSTR | tr ',' '\n')
do
echo $i
done
I prefer to use tr instead of sed, because sed has problems with special characters like \r and \n in some cases.
Another solution is to set IFS to the desired separator.
Another solution not using IFS and still preserving the spaces:
$ var="a bc,def,ghij"
$ while read line; do echo line="$line"; done < <(echo "$var" | tr ',' '\n')
line=a bc
line=def
line=ghij
Here is an alternative tr based solution that doesn't use echo, expressed as a one-liner.
for v in $(tr ',' '\n' <<< "$var") ; do something_with "$v" ; done
It feels tidier without echo but that is just my personal preference.
The following solution:
doesn't need to mess with IFS
doesn't need helper variables (like i in a for-loop)
should be easily extensible to work for multiple separators (with a bracket expression like [:,] in the patterns)
really splits only on the specified separator(s) and not - like some other solutions presented here - on e.g. spaces too.
is POSIX compatible
doesn't suffer from any subtle issues that might arise when bash’s nocasematch is on and a separator that has lower/upper case versions is used in a match like with ${parameter/pattern/string} or case
beware that:
it does however work on the variable itself and pops each element from it - if that is not desired, a helper variable is needed
it assumes var to be set and would fail if it's not and set -u is in effect
while true; do
    x="${var%%,*}"
    echo "$x"
    # x is not really needed here; one can of course directly use "${var%%,*}"
    if [ -z "${var##*,*}" ] && [ -n "${var}" ]; then
        var="${var#*,}"
    else
        break
    fi
done
Beware that separators that would be special characters in patterns (e.g. a literal *) would need to be quoted accordingly.
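For example, starting from var=abc,def,ghij, the loop above prints:
abc
def
ghij
and, as noted, leaves var holding the last element (ghij), since elements are popped as it goes.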
Here's my solution that doesn't change IFS and can take in a custom regex delimiter (it shells out to sed for the splitting).
loop_custom_delimited() {
    local list=$1
    local delimiter=$2
    local item
    if [[ $delimiter != ' ' ]]; then
        list=$(echo "$list" | sed 's/ /'`echo -e "\010"`'/g' | sed -E "s/$delimiter/ /g")
    fi
    for item in $list; do
        item=$(echo "$item" | sed 's/'`echo -e "\010"`'/ /g')
        echo "$item"
    done
}
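For example (note that the space inside the first field survives the split, which is the point of the \010 placeholder):
$ loop_custom_delimited "a bc,def,ghij" ","
a bc
def
ghij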
Try this one.
#!/bin/bash
testpid="abc,def,ghij"
count=`echo $testpid | grep -o ',' | wc -l`  # count the commas; not a robust way
count=`expr $count + 1`
while [ $count -gt 0 ] ; do
    echo $testpid | cut -d ',' -f $count     # note: this emits the fields in reverse order
    count=`expr $count - 1`
done

bash read inputs (1 mandatory and 1 optional) and grep these two variables then redirect result

read inputs (1 mandatory and 1 optional)
and grep these two variables from abc.txt
then redirect result to a new txt
read c d
while [ $# -ne 1 ]; do #why -ne not -ge as grep c when there is at least 1 argument
echo "Search result :"
grep "$c" abc.txt
else grep "$c" "$d" abc.txt
break
done
Tried lots of times, will either take c, d as one argument or just ignore my d argument. Do I need to use shift in this case?
$# is the number of command line arguments for your shell script. read doesn't change this value.
What you want is:
if [[ -z "$d" ]]; then
# one argument
else
# two or more arguments
fi
Alternatively, you can call your script with the arguments on the command line (i.e. ./script c d).
For this, replace read c d with:
c="$1"
shift
d="$*"
You can use ${d+value_if_set} to fill in a value to use when $d is present.
grep -e "$c" ${d+-e "$d"} abc.txt >new.txt
This adds a second -e argument after the first if $d is set. So you end up running grep -e "$c" -e "$d" abc.txt in this scenario.
I have read (long ago) some advice against using multiple -e arguments to grep but it works with GNU grep and OSX (*BSD) grep at least.
Alternatively, you could use grep -E and modify the regex:
grep -E "$c${d+|$d}" abc.txt >new.txt
Here, the regex is either $c or $c|$d. But you should note that the -E syntax also changes the semantics of what you put in $c and $d.
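With, say, c=foo and d=bar (illustrative values), the two variants expand to:
grep -e "foo" -e "bar" abc.txt >new.txt
grep -E "foo|bar" abc.txt >new.txt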

Search and replace variables in a file using bash/sed

I am trying to write a bash script (script.sh) to search and replace some variables in the input.sh file. But I need to modify only the variables which are present in the variable_list file and leave the others as they are.
variable_list
${user}
${dbname}
input.sh
username=${user}
password=${password}
dbname=${dbname}
Expected output file
username=oracle
password=${password}    # unchanged: ${password} is not in the variable_list file
dbname=oracle
Following is the script I am trying to use, but I am not able to find the correct sed expression:
script.sh
export user=oracle
export password=oracle123
export dbname=oracle
variable='variable_list'
while read line ;
do
    if [[ -n $line ]]
    then
        sed -i 's/$line/$line/g' input.sh > output.sh
    fi
done < "$variable"
This could work:
#!/bin/bash
export user=oracle
export password=oracle123
export dbname=oracle
variable='variable_list'
while read line ;
do
    if [[ -n $line ]]
    then
        exp=$(sed -e 's/\$/\\&/g' <<< "$line")
        var=$(sed -e 's/\${\([^}]\+\)}/\1/' <<< "$line")
        sed -i "s/$exp/${!var}/g" input.sh
    fi
done < "$variable"
The first sed expression escapes the $ which is a regex metacharacter. The second extracts just the variable name, then we use indirection to get the value in our current shell and use it in the sed expression.
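To see the two transformations in isolation, using the ${user} entry as a sample line:
$ line='${user}'
$ sed -e 's/\$/\\&/g' <<< "$line"    # escape $ so it is literal in the regex
\${user}
$ sed -e 's/\${\([^}]\+\)}/\1/' <<< "$line"    # strip ${ } to get the bare name
user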
Edit
Rather than rewriting the file so many times, it's probably more efficient to do it like this, building the arguments list for sed:
#!/bin/bash
export user=oracle
export password=oracle123
export dbname=oracle
while read var
do
exp=$(sed -e 's/\$/\\&/g' <<< "$var")
var=$(sed -e 's/\${\([^}]\+\)}/\1/' <<< "$var")
args+=("-e s/$exp/${!var}/g")
done < "variable_list"
sed "${args[#]}" input.sh > output.sh
user=oracle
password=oracle123
dbname=oracle
variable_list=( '${user}' '${dbname}' )
while IFS="=$IFS" read variable value; do
for subst_var in "${variable_list[#]}"; do
if [[ $subst_var = $value ]]; then
eval "value=$subst_var"
break
fi
done
printf "%s=%s\n" "$variable" "$value"
done < input.sh > output.sh
Here is a script.sh that works:
#!/bin/bash
user=oracle
password=oracle123
dbname=oracle
variable='variable_list'
text=$(cat input.sh)
while read line
do
value=$(eval echo $line)
text=$(sed "s/$line/$value/g" <<< "$text")
done < "$variable"
echo "$text" > output.sh
Note that your original version contains single quotes around the sed string, which prevents $line from being expanded: sed ends up looking for the literal text line after an end-of-line anchor $, which will never match anything.
Since you are looking for the value of the variable in $line, you need to do an eval to get this.
Also, since there are multiple variables you are looping over, the intermediate text variable stores the result as it loops.
The export keyword is also unnecessary in this script, unless it is being used in some sub-process not shown.
TXR solution. Build a filter dynamically. The filter is implemented internally as a trie data structure, which gives us a lex-like state machine which matches the entire dictionary at once as the input is scanned. For simplicity, we include the ${ and } as part of the variable name.
#(bind vars (("${user}" "oracle")
("${dbname}" "oracle")
("${password}" "letme1n")))
#(next "variable_list")
#(collect)
#entries
#(end)
#(deffilter subst . #(mapcar (op list #1 (second [find vars #1 equal first]))
entries))
#(next "input.sh")
#(collect)
#line
# (output :filter subst)
#line
# (end)
#(end)
Run:
$ txr subst.txr
username=oracle
password=${password}
dbname=oracle
input.sh: (as given)
username=${user}
password=${password}
dbname=${dbname}
variable_list: (as given)
${user}
${dbname}
