How do I store a bash command as a string for multiple substitutions?

I'm trying to clean up this script I have and this piece of code is annoying me because I know it can be more DRY:
if grep --version | grep "GNU" > /dev/null
then
grep -P -r -l "\x0d" "$dir" | grep "${fileRegex}"
else
grep -r -l "\x0d" "$dir" | grep "${fileRegex}"
fi
My thought is to conditionally set a string variable to either "grep -P" or "egrep" and then do something like this in a single line:
$(cmdString) -r -l "\x0d" $dir | grep "${fileRegex}"
Or something like that, but it doesn't work.

Are you worried about a host which has GNU grep but not egrep? Do such hosts exist?
If not, why not just always use egrep? (Though -P and egrep are not the same thing.)
That being said, you don't use strings for this (see BashFAQ #050).
You use arrays: grepcmd=(egrep) or grepcmd=(grep -P) and then "${grepcmd[@]}" ....
You can also avoid needing perl mode entirely if you use $'\r' or similar (assuming your shell understands that quoting method).
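For example, a minimal sketch of that last suggestion, assuming a shell with ANSI-C quoting (bash, ksh, zsh):
# $'\r' expands to a literal carriage return, which plain grep matches literally
grep -r -l $'\r' "$dir" | grep "${fileRegex}"
This sidesteps the GNU-vs-non-GNU question entirely for this particular pattern.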

You can do this:
if grep --version | grep "GNU" > /dev/null
then
cmdString=(grep -P)
else
cmdString=(egrep)
fi
"${cmdString[#]}" -r -l "\x0d" "$dir" | grep "{$fileRegex}"

@Etan Reisner's suggestion worked well. For those who are interested in the final code (this case is for tabs, not Windows line endings, but it is similar):
fileRegex=${1:-".*\.java"}
if grep --version | grep "GNU" > /dev/null ;
then
cmdString=(grep -P)
else
cmdString=(grep)
fi
arr=$("${cmdString[@]}" -r -l "\x09" . | grep "${fileRegex}")
if [ -n "$dryRun" ]; then
for i in $arr; do echo "$i"; done
else
for i in $arr; do expand -t 7 "$i" > /tmp/e && mv /tmp/e "$i"; done
fi
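If the matched filenames can contain spaces, a word-splitting-safe variant of the same loop would stream the grep output instead of storing it in $arr (a sketch, assuming filenames without newlines):
"${cmdString[@]}" -r -l "\x09" . | grep "${fileRegex}" | while IFS= read -r i; do
if [ -n "$dryRun" ]; then echo "$i"; else expand -t 7 "$i" > /tmp/e && mv /tmp/e "$i"; fi
done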

Related

Bash: get the command that was used before the pipe symbol

For a half-finished script that already consumes the output of a program, I also need the name and parameters of the program whose output is piped into my script.
So I run it like this:
yay something | ./myscript
Now I need to store "yay something" into a variable.
There is a way to get previously run commands, or the current one, using set -o history -o histexpand and echo !! or echo $0, but that doesn't include what I wrote right before the pipe.
Maybe you would suggest passing the name of the program and its parameters to my script as arguments and then running it there, but I don't want that (pass a command as an argument to bash script).
UPDATED SOLUTION (old below):
#!/bin/bash -i
#get processes
processes=$(> >(ps -f))
echo beginning:
echo "$processes"
#filter bin/bash -i
pac=$(echo "$processes" | sed '1,/bin\/bash -i/!d')
pac=$(echo "$pac" | tail -2 | head -1)
#kill
delete=$(echo $pac | grep -oP "(?<=$USER\s)\w+")
pac=$(echo "$pac" | grep -o -P '(?<=00:00:00).*(?=)')
echo "$delete"
kill -9 "$delete"
#print
echo " "
echo end:
echo "${pac:1}"
Note: when the command on the left side of the pipe is echo, man or cat, $pac will be empty.
OLD Text:
Thanks to Charles for his enormous effort and his link that finally led me to processes=$(> >(ps -f)).
Here is a working example. You can e.g. use it with vi test | ./testprocesses (or with nano, or package helpers like yay or trizen, but it won't work with echo, man or cat):
#!/bin/bash -i
#get processes
processes=$(> >(ps -f))
echo beginning:
echo $processes
#filter
pac=$(echo $processes | grep -o -P '(?<=CM).*(?=testprocesses)' | grep -o -P '(?<=D).*(?=testprocesses)' | grep -o -P "(?<=00:00:00).*(?=$USER)")
#kill
delete=$(echo $pac | grep -oP "(?<=$USER\s)\w+")
pac=$(echo $pac | grep -o -P '(?<=00:00:00).*(?=)')
kill -9 $delete
#print
echo " "
echo end:
echo $pac
The kill part is necessary to kill the vi instance; otherwise it will still be running and will eventually interfere with future executions of the script.

Bash - how do I output a line and then pipe it to another command side by side? [duplicate]

cat a.txt | xargs -I % echo %
In the example above, xargs takes echo % as the command argument. But in some cases, I need multiple commands to process the argument instead of one. For example:
cat a.txt | xargs -I % {command1; command2; ... }
But xargs doesn't accept this form. One solution I know is that I can define a function to wrap the commands, but I want to avoid that because it is complex. Is there a better solution?
cat a.txt | xargs -d $'\n' sh -c 'for arg do command1 "$arg"; command2 "$arg"; ...; done' _
...or, without a Useless Use Of cat:
<a.txt xargs -d $'\n' sh -c 'for arg do command1 "$arg"; command2 "$arg"; ...; done' _
To explain some of the finer points:
The use of "$arg" instead of % (and the absence of -I in the xargs command line) is for security reasons: Passing data on sh's command-line argument list instead of substituting it into code prevents content that data might contain (such as $(rm -rf ~), to take a particularly malicious example) from being executed as code.
Similarly, the use of -d $'\n' is a GNU extension which causes xargs to treat each line of the input file as a separate data item. Either this or -0 (which expects NULs instead of newlines) is necessary to prevent xargs from trying to apply shell-like (but not quite shell-compatible) parsing to the stream it reads. (If you don't have GNU xargs, you can use tr '\n' '\0' <a.txt | xargs -0 ... to get line-oriented reading without -d).
The _ is a placeholder for $0, such that other data values added by xargs become $1 and onward, which happens to be the default set of values a for loop iterates over.
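To make the security point concrete, here is a hypothetical demo; the input line contains a command substitution, which stays inert data instead of being executed:
printf '%s\n' 'hello $(rm -rf ~)' | xargs -d $'\n' sh -c 'for arg do printf "got: %s\n" "$arg"; done' _
# prints: got: hello $(rm -rf ~)   -- nothing is executed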
You can use
cat file.txt | xargs -i sh -c 'command {} | command2 {} && command3 {}'
{} = placeholder that is replaced by each line of the text file
With GNU Parallel you can do:
cat a.txt | parallel 'command1 {}; command2 {}; ...; '
For security reasons it is recommended that you use your package manager to
install. But if you cannot do that, then you can use this 10-second
installation.
The 10-second installation will try to do a full installation; if
that fails, a personal installation; if that fails, a minimal
installation.
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep da012ec113b49a54e705f86d51e784ebced224fdf
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
I prefer a style that allows a dry-run mode (just leave off the | sh):
cat a.txt | xargs -I % echo "command1; command2; ... " | sh
Works with pipes too:
cat a.txt | xargs -I % echo "echo % | cat " | sh
This is just another approach, without xargs or cat:
while IFS= read -r stuff; do
command1 "$stuff"
command2 "$stuff"
...
done < a.txt
This seems to be the safest version.
tr '[\n]' '[\0]' < a.txt | xargs -r0 /bin/bash -c 'command1 "$@"; command2 "$@";' ''
(-0 can be removed and the tr replaced with a redirect, or the file can be replaced with a null-separated file instead. It is mainly there because I mostly use xargs together with find and its -print0 output. It might also be relevant on xargs versions without the -0 extension.)
It is safe, because xargs passes the parameters to the shell as an array when executing it. The shell (at least bash) then passes them as an unaltered array to the other processes when they are all referenced as "$@".
If you use ...| xargs -r0 -I{} bash -c 'f="{}"; command "$f";' '', the assignment will fail if the string contains double quotes. This is true for every variant using -i or -I. (Because the data is substituted into a string of code, you can always inject commands by inserting unexpected characters such as quotes, backticks or dollar signs into the input data.)
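A hypothetical demonstration of that injection, assuming GNU xargs (-d '\n' just keeps the line intact; the embedded date call ends up executed as code):
printf '%s\n' 'a"; date; echo "' | xargs -d '\n' -I{} bash -c 'f="{}"; echo "$f"' ''
# bash receives: f="a"; date; echo ""; echo "$f"   -- date runs!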
If the commands can only take one parameter at a time:
tr '[\n]' '[\0]' < a.txt | xargs -r0 -n1 /bin/bash -c 'command1 "$@"; command2 "$@";' ''
Or with somewhat less processes:
tr '[\n]' '[\0]' < a.txt | xargs -r0 /bin/bash -c 'for f in "$@"; do command1 "$f"; command2 "$f"; done;' ''
If you have GNU xargs or another with the -P extension and you want to run 32 processes in parallel, each with not more than 10 parameters for each command:
tr '[\n]' '[\0]' < a.txt | xargs -r0 -n10 -P32 /bin/bash -c 'command1 "$@"; command2 "$@";' ''
This should be robust against any special characters in the input. (If the input is null separated.) The tr version will get some invalid input if some of the lines contain newlines, but that is unavoidable with a newline separated file.
The blank first parameter for bash -c is due to this (from the bash man page; thanks @clacke):
-c   If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages.
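A tiny demo of that $0 behavior, with a made-up placeholder name:
bash -c 'echo "name=$0 first=$1 second=$2"' myplaceholder one two
# prints: name=myplaceholder first=one second=two
That is why the examples above pass '' or _ before the real data: it fills the $0 slot so the first input item lands in $1.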
One thing I do is to add to .bashrc/.profile this function:
function each() {
while read -r line; do
for f in "$@"; do
$f $line
done
done
}
then you can do things like
... | each command1 command2 "command3 has spaces"
which is less verbose than xargs or -exec. You could also modify the function to insert the value from the read at an arbitrary location in the commands to each, if you needed that behavior also.
Another possible solution that works for me is something like -
cat a.txt | xargs bash -c 'command1 "$@"; command2 "$@"' bash
Note the 'bash' at the end: it is passed as argv[0] to bash. Without it, in this syntax, the first parameter to each command would be lost. It may be any word.
Example:
cat a.txt | xargs -n 5 bash -c 'echo -n `date +%Y%m%d-%H%M%S:` ; echo " data: " "$@"; echo "data again: " "$@"' bash
My current BKM (best-known method) for this is
... | xargs -n1 -I % perl -e 'system("echo 1 %"); system("echo 2 %");'
It is unfortunate that this uses perl, which is less likely to be installed than bash; but it handles more input than the accepted answer does. (I would welcome a ubiquitous version that does not rely on perl.)
@KeithThompson's suggestion of
... | xargs -I % sh -c 'command1; command2; ...'
is great - unless you have the shell comment character # in your input, in which case part of the first command and all of the second command will be truncated.
Hashes # can be quite common, if the input is derived from a filesystem listing, such as ls or find, and your editor creates temporary files with # in their name.
Example of the problem:
$ bash 1366 $> /bin/ls | cat
#Makefile#
#README#
Makefile
README
Oops, here is the problem:
$ bash 1367 $> ls | xargs -n1 -I % sh -i -c 'echo 1 %; echo 2 %'
1
1
1
1 Makefile
2 Makefile
1 README
2 README
Ahh, that's better:
$ bash 1368 $> ls | xargs -n1 -I % perl -e 'system("echo 1 %"); system("echo 2 %");'
1 #Makefile#
2 #Makefile#
1 #README#
2 #README#
1 Makefile
2 Makefile
1 README
2 README
$ bash 1369 $>
Try this:
git config --global alias.all '!f() { find . -d -name ".git" | sed s/\\/\.git//g | xargs -P10 -I{} git --git-dir={}/.git --work-tree={} $1; }; f'
It runs ten threads in parallel and runs whatever git command you want on all repos in the folder structure, no matter whether a repo is one or n levels deep.
E.g: git all pull
I have a good idea for solving the problem: write a small command, mcmd, and then you can do
find . -type f | xargs -i mcmd echo {} ## cat {} #pipe sed -n '1,3p'
The content of mcmd is as follows:
echo $* | sed -e 's/##/\n/g' -e 's/#pipe/|/g' | csh
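So in the example above, mcmd receives its whole argument list with {} already substituted and rewrites it into two csh command lines (a sketch of the expansion, with FILE standing in for the substituted {}; note that in bash the ## and #pipe tokens would need quoting to survive comment handling):
echo FILE
cat FILE | sed -n '1,3p'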

Pass command via variable in shell

I have following code in my build script:
if [ -z "$1" ]; then
make -j10 $1 2>&1 | tee log.txt && notify-send -u critical -t 7 "BUILD DONE"
else
make -j10 $1 2>&1 | tee log.txt | grep -i --color "Error" && notify-send -u critical -t 7 "BUILD DONE"
fi
I tried to optimize it to:
local GREP=""
[[ ! -z "$1" ]] && GREP="| grep -i --color Error" && echo "Grepping for ERRORS"
make -j10 $1 2>&1 | tee log.txt "$GREP" && notify-send -u critical -t 7 "BUILD DONE"
But an error is thrown on the make line if $1 isn't empty. I just can't figure out how to pass a command with a grep pipe through the variable.
Like others have already pointed out, you cannot, in general, expect a command in a variable to work. This is a FAQ.
What you can do is execute commands conditionally. Like this, for example:
( make -j10 $1 2>&1 && notify-send -u critical -t 7 "BUILD DONE" ) |
tee log.txt |
if [ -z "$1" ]; then
grep -i --color "Error"
else
cat
fi
This has the additional unexpected benefit that the notify-send is actually conditioned on the exit code of make (which is probably what you intended) rather than that of tee (which I would expect to succeed unless you run out of disk space or something).
(Or if you want the notification regardless of the success status, change && to just ; -- I think this probably makes more sense.)
This is one of those rare Useful Uses of cat (although I still feel the urge to try to get rid of it!)
You can't put pipes in command variables:
$ foo='| cat'
$ echo bar $foo
bar | cat
The linked article explains how to do such things very well.
As mentioned in @l0b0's answer, the | will not be interpreted as you are hoping.
If you wanted to cut down on repetition, you could do something like this:
if [ $(make -j10 "$1" 2>&1 > log.txt) ]; then
[ "$1" ] && grep -i --color "error" log.txt
notify-send -u critical -t 7 "BUILD DONE"
fi
The inside of the test is common to both branches. Instead of using tee so that the output can be piped, you can just indirect the output to log.txt. If "$1" isn't empty, grep for any errors in log.txt. Either way, do the notify-send.

Unable to use wildcard for SSH command

There are a number of files whose existence in a directory I have to check. They follow a standard naming convention aside from the file extension, so I want to use a wildcard, e.g.:
YYYYMM=201403
FILE_LIST=`cat config.txt`
for file in $FILE_LIST
do
FILE=`echo $file | cut -f1 -d"~"`
SEARCH_NAME=$FILE$YYYYMM
ANSWER=`ssh -q userID@servername 'ls /home/to/some/directory/$SEARCH_NAME* | wc -l'`
returnStatus=$?
if [ $ANSWER=1 ]; then
echo "FILE FOUND"
else
echo "FILE NOT FOUND"
fi
done
The wildcard is not working; any ideas on how to make it visible to the shell?
I had much the same question just now. In despair, I just gave up and used pipes with grep and xargs to get wildcard-like functionality.
Was (none of these worked, and I tried others too):
ssh -t r@host "rm /path/to/folder/alpha*"
ssh -t r@host "rm \"/path/to/folder/alpha*\" "
ssh -t r@host "rm \"/path/to/folder/alpha\*\" "
Is:
ssh -t r@host "cd /path/to/folder/ && ls | grep alpha | xargs rm"
Note: I did much of my troubleshooting with ls instead of rm, just in case I surprised myself.
It's way better to use stdin:
echo "rm /path/to/folder/alpha*" | ssh r@host sh
This way you can still use shell variables to construct the command, e.g.:
echo "rm -r $oldbackup/*" | ssh r@host sh

What is the equivalent to xargs -r under OS X

Is there any equivalent under OS X to xargs -r under Linux? I'm trying to find a way to interrupt a pipe if there's no data.
For instance, imagine you do the following:
touch test
cat test | xargs -r echo "content: "
That yields no output because xargs -r stops the pipeline when the input is empty.
Is there either some hidden xargs option or something else to achieve the same result under OS X?
The POSIX standard for xargs mandates that the command be executed once, even if there are no arguments. This is a nuisance, which is why GNU xargs has the -r option. Unfortunately, neither BSD (MacOS X) nor the other mainstream Unix versions (AIX, HP-UX, Solaris) support it.
If it is crucial to you, obtain and install GNU xargs somewhere that your environment will find it, without affecting the system (so don't replace /usr/bin/xargs unless you're a braver man than I am — but /usr/local/bin/xargs might be OK, or $HOME/bin/xargs, or …).
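For example, on macOS with Homebrew (assuming the findutils formula, which installs the GNU tools with a g prefix by default so they don't shadow the system ones):
brew install findutils
find . -print0 | gxargs -r -0 echo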
You can use test or [:
if [ -s test ] ; then cat test | xargs echo content: ; fi
There is no standard way to determine if the xargs you are running is GNU or not. I set $gnuxargs to either "true" or "false" and then have a function that replaces xargs and does the right thing.
On Linux, FreeBSD and MacOS this script works for me. The POSIX standard for xargs mandates that the command be executed once, even if there are no arguments. FreeBSD and MacOS X violate this rule, and thus don't need -r. GNU finds the POSIX behaviour annoying, and adds -r. This script does the right thing and can be enhanced if you find a version of Unix that behaves some other way.
#!/bin/bash
gnuxargs=$(xargs --version 2>&1 |grep -s GNU >/dev/null && echo true || echo false)
function portable_xargs_r() {
if $gnuxargs ; then
cat - | xargs -r "$@"
else
cat - | xargs "$@"
fi
}
echo 'this' > foo
echo '=== Expect one line'
portable_xargs_r <foo echo "content: "
echo '=== DONE.'
cat </dev/null > foo
echo '=== Expect zero lines'
portable_xargs_r <foo echo "content: "
echo '=== DONE.'
Here's a quick and dirty xargs-r using a temporary file.
#!/bin/sh
t=$(mktemp -t xargsrXXXXXXXXX) || exit
trap 'rm -f "$t"' EXIT HUP INT TERM
cat >"$t"
test -s "$t" || exit
exec xargs "$#" <"$t"
With POSIX xargs¹, to avoid running the-command when the input is empty, you could use moreutils's ifne (for "if not empty"):
... | ifne xargs ... the-command ...
Or use a sh wrapper that checks the number of arguments:
... | xargs ... sh -c '[ "$#" -eq 0 ] || exec the-command ... "$@"' sh
¹ Though one can hardly use xargs POSIXly, as it doesn't support -0, has unspecified behaviour when the input is non-text (as with filenames, which on most systems are not guaranteed to be text except in the POSIX locale), parses its input in a very arcane, locale-dependent way, and gives no guarantee if any word is more than 255 bytes long!
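For example (a sketch, with echo standing in for the-command):
printf '' | xargs sh -c '[ "$#" -eq 0 ] || exec echo got: "$@"' sh          # prints nothing
printf '%s\n' a b | xargs sh -c '[ "$#" -eq 0 ] || exec echo got: "$@"' sh  # prints: got: a b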
You could make sure that the input always has at least one line. This may not always be possible, but you'd be surprised how many creative ways this can be done.
A typical use case looks like:
find . -print0 | xargs -r -0 grep PATTERN
Some versions of xargs do not have an -r flag. In that case, you can supply /dev/null as the first filename so that grep is never handed an empty list of filenames. Since the pattern will never be found in /dev/null, this won't affect the output:
find . -print0 | xargs -0 grep PATTERN /dev/null
You can test if the stream has any content:
cat test | { if IFS= read -r tmp; then { printf "%s\n" "$tmp"; cat; } | xargs echo "content: "; fi; }
Reading it from the inside out: read tries to grab a first line; if that succeeds, printf re-emits that line, cat appends the rest of the input, and the combined stream goes to xargs; otherwise nothing is run at all.
Or the same thing as a function (one could even add the check from portable_xargs_r in the other answer and call xargs -r when it is available):
xargs_r() {
if IFS= read -r tmp; then
{ printf "%s\n" "$tmp"; cat; } | xargs "$#"
fi
}
cat test | xargs_r echo "content: "
This method runs the check inside the pipe's subshell, so it can be used even in a complicated pipe setup.
