How to correctly wrap multiple command calls in bash?

My problem can be summed up by making this simple command work:
nice -n 10 "ls|xargs -I% echo \"%\""
Which fails with:
nice: ls|xargs -I% echo "%": No such file or directory
Removing the quotes makes it work, but my point is to wrap multiple quoted commands into one, to do something more complex like:
ftphost="192.168.1.1"
dirinputtopush="/tmp/archivedir/"
ftpoutputdir="mydir/"
nice -n 19 ls $dirinputtopush | xargs -I% "lftp $ftphost -e \"mirror -R $dirinputtopush% $ftpoutputdirrecent ;quit\"; sleep 10"

Try using nice -n 10 bash -c 'your; commands | or_complex pipelines' as the command. This way bash is the binary, and the string after -c is a sequence interpreted by bash, so it can contain pipelines, loops, etc. Watch out for proper quoting. You need to do it this way because nice expects a binary, not an expression interpreted by the shell. In contrast, shell builtins such as time (but not /usr/bin/time, which is a separate binary) will accept shell expressions as the command to execute; they can because they are built into the shell. nice is not, so it requires a binary to execute.

Children inherit the nice value:
nice -n 10 bash -c 'ls | xargs -I% echo %'
Nice each command separately:
nice -n 10 ls | nice -n 10 xargs -I% echo %
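Applied to the lftp example from the question, the bash -c approach might look like this (a sketch: it assumes the variables defined in the question, exports them so the child bash can see them, and uses a while read loop in place of xargs to avoid another layer of nested quoting):
export ftphost dirinputtopush ftpoutputdir
nice -n 19 bash -c '
    ls "$dirinputtopush" | while read -r entry; do
        lftp "$ftphost" -e "mirror -R $dirinputtopush$entry $ftpoutputdir$entry; quit"
        sleep 10
    done
'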

Related

xargs output buffering -P parallel

I have a bash function that I call in parallel using xargs -P, like so:
echo ${list} | xargs -n 1 -P 24 -I# bash -l -c 'myAwesomeShellFunction #'
Everything works fine, but the output is messed up for obvious reasons (no buffering).
I'm trying to figure out a way to buffer the output effectively. I was thinking I could use awk, but I'm not good enough to write such a script and I can't find anything worthwhile on Google. Can someone help me write this "output buffer" in sed or awk? Nothing fancy, just accumulate output and spit it out after the process terminates. I don't care about the order in which the shell functions execute, I just need their output buffered... Something like:
echo ${list} | xargs -n 1 -P 24 -I# bash -l -c 'myAwesomeShellFunction # | sed -u ""'
P.S. I tried to use stdbuf as per
https://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe but it did not work; I specified buffering on o and e but the output is still unbuffered:
echo ${list} | xargs -n 1 -P 24 -I# stdbuf -i0 -oL -eL bash -l -c 'myAwesomeShellFunction #'
Here's my first attempt, this only captures first line of output:
$ bash -c "echo stuff;sleep 3; echo more stuff" | awk '{while (( getline line) > 0 )print "got ",$line;}'
got stuff
This isn't quite atomic if your output is longer than a page (4kb typically), but for most cases it'll do:
xargs -P 24 bash -c 'for arg; do printf "%s\n" "$(myAwesomeShellFunction "$arg")"; done' _
The magic here is the command substitution: $(...) creates a subshell (a fork()ed-off copy of your shell), runs the code ... in it, and then reads that in to be substituted into the relevant position in the outer script.
Note that we don't need -n 1 if you're dealing with a large number of arguments (for a small number it may improve parallelization), since we're iterating over however many arguments each of your 24 parallel bash instances is passed.
If you want to make it truly atomic, you can do that with a lockfile:
# generate a lockfile, arrange for it to be deleted when this shell exits
lockfile=$(mktemp -t lock.XXXXXX); export lockfile
trap 'rm -f "$lockfile"' 0
xargs -P 24 bash -c '
    for arg; do
        {
            output=$(myAwesomeShellFunction "$arg")
            flock -x 99
            printf "%s\n" "$output"
        } 99>"$lockfile"
    done
' _
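For either variant, the function must be exported so the bash instances xargs spawns can see it; a minimal usage sketch with a stand-in function:
# hypothetical stand-in for your real function
myAwesomeShellFunction() { sleep 1; echo "done with $1"; }
export -f myAwesomeShellFunction
printf '%s\n' a b c d | xargs -P 24 bash -c 'for arg; do printf "%s\n" "$(myAwesomeShellFunction "$arg")"; done' _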

Running programs in parallel using xargs

I currently have the following script.
#!/bin/bash
# script.sh
for i in {0..99}; do
    script-to-run.sh input/ output/ $i
done
I wish to run it in parallel using xargs. I have tried
script.sh | xargs -P8
But doing the above only executes one at a time. No luck with -n8 either.
Adding & at the end of the line to be executed in the script's for loop would try to run the script 99 times at once. How do I execute only 8 at a time, up to 100 total?
From the xargs man page:
This manual page documents the GNU version of xargs. xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input. Blank lines on the standard input are ignored.
Which means that for your example xargs is waiting and collecting all of the output from your script and then running echo <that output>. Not exactly all that useful nor what you wanted.
The -n argument is how many items from the input to use with each command that gets run (nothing, by itself, about parallelism here).
To do what you want with xargs you would need to do something more like this (untested):
printf %s\\n {0..99} | xargs -n 1 -P 8 script-to-run.sh input/ output/
Which breaks down like this:
printf %s\\n {0..99}: print one number per line, from 0 to 99.
xargs: run xargs,
-n 1: taking at most one argument per command invocation,
-P 8: and running up to eight processes at a time.
With GNU Parallel you would do:
parallel script-to-run.sh input/ output/ {} ::: {0..99}
Add -P8 if you do not want to run one job per CPU core.
Unlike xargs, it will do The Right Thing even if the input contains spaces, ', or " (not the case here, though). It also makes sure the output from different jobs is not mixed together, so if you use the output you are guaranteed that you will not get half a line from two different jobs.
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new job as soon as one finishes, keeping the CPUs active and thus saving time.
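For instance, to run 32 jobs at most 4 at a time (a sketch; -j sets the number of simultaneous jobs):
parallel -j4 script-to-run.sh input/ output/ {} ::: {1..32}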
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep da012ec113b49a54e705f86d51e784ebced224fdf
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
You can use this simple one-line command:
seq 1 500 | xargs -n 1 -P 8 script-to-run.sh input/ output/
Here's an example of running commands in parallel in conjunction with find:
find -name "*.wav" -print0 | xargs -0 -t -I % -P $(nproc) flac %
-print0 terminates filenames with a null byte rather than a newline, so we can use -0 in xargs to prevent filenames with spaces from being treated as two separate arguments.
-t means verbose; it makes xargs print every command it executes, which can be useful. Remove it if not needed.
-I % means replace occurrences of % in the command with arguments read from standard input.
-P $(nproc) means run a maximum of nproc instances of our command in parallel (nproc prints the number of available processing units).
flac % is our command, the -I % from earlier means this will become flac foo.wav
See also: Manual for xargs(1)

Makefile - "$" not taken into account for shell command

I have a Makefile with the following rule:
bash -c "find . |grep -E '\.c$|\.h$|\.cpp$|\.hpp$|Makefile' | xargs cat | wc -l"
I'm expecting make to run the quoted bash script and return the number of lines in my project.
Running the command directly in a terminal works, but it doesn't work in the makefile.
If I remove $ from the script, it does work... but not as expected (since I only want *.{c,cpp,h,hpp,Makefile}).
Why doesn't bash -c run my script correctly?
If you write the rule like the following, it should produce the result you want:
target:
	@echo $(shell find . | grep -E '\.c$$|\.h$$|\.cpp$$|\.hpp$$|Makefile' | xargs cat | wc -l)
In Makefiles, $ is used for make variables such as $(HEADERS), where HEADERS would have been defined previously using =.
To use a literal $ in the shell part of a rule, you have to double it to escape it from make: $$VAR will refer to a shell variable, and .c$$ and so on pass the $ through for the regex you're working with.
The following should suffice in escaping the $'s for what you're trying to accomplish
bash -c "find . |grep -E '\.c$$|\.h$$|\.cpp$$|\.hpp$$|Makefile' | xargs cat | wc -l"
Additionally, you can use bash globally in your Makefile as opposed to the default /bin/sh if you add this declaration:
SHELL = /bin/bash
With the above, you should be able to use the find command without needing the bash -c and quotes. The following should work if SHELL is defined as above:
find . |grep -E '\.c$$|\.h$$|\.cpp$$|\.hpp$$|Makefile' | xargs cat | wc -l
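Putting it together, a minimal sketch of such a Makefile (the target name count is hypothetical, and the recipe line must be indented with a tab):
SHELL = /bin/bash

count:
	find . | grep -E '\.c$$|\.h$$|\.cpp$$|\.hpp$$|Makefile' | xargs cat | wc -l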
Also, note that you can, and often will see, subshells used for this purpose. These are created with (). Any variables defined by the inner shell are local to that shell and its group of commands.
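A quick illustration of that locality:
x=outer
( x=inner; echo "$x" )    # prints: inner
echo "$x"                 # prints: outer; the subshell's assignment did not escape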

Shell Scripting: Using xargs to execute parallel instances of a shell function

I'm trying to use xargs in a shell script to run parallel instances of a function I've defined in the same script. The function times the fetching of a page, and so it's important that the pages are actually fetched concurrently in parallel processes, and not in background processes (if my understanding of this is wrong and there's negligible difference between the two, just let me know).
The function is:
function time_a_url ()
{
    oneurltime=$($time_command -p wget -p $1 -O /dev/null 2>&1 1>/dev/null | grep real | cut -d" " -f2)
    echo "Fetching $1 took $oneurltime seconds."
}
How does one do this with an xargs pipe, in a form that can take the number of parallel instances of time_a_url as an argument? And yes, I know about GNU parallel, I just don't have the privilege to install software where I'm writing this.
Here's a demo of how you might be able to get your function to work:
$ f() { echo "[$@]"; }
$ export -f f
$ echo -e "b 1\nc 2\nd 3 4" | xargs -P 0 -n 1 -I{} bash -c f\ \{\}
[b 1]
[d 3 4]
[c 2]
The keys to making this work are to export the function, so the bash that xargs spawns will see it, and to escape the space between the function name and the escaped braces. You should be able to adapt this to your situation. You'll need to adjust the arguments for -P and -n (or remove them) to suit your needs.
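Adapted to time_a_url, that might look like the following (a sketch; urls.txt is a hypothetical file of one URL per line, and -P 8 is an arbitrary degree of parallelism):
export -f time_a_url
xargs -P 8 -n 1 -I{} bash -c 'time_a_url "$@"' _ {} < urls.txt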
You can probably get rid of the grep and cut. If you're using the Bash builtin time, you can specify an output format using the TIMEFORMAT variable. If you're using GNU /usr/bin/time, you can use the --format argument. Either of these will allow you to drop the -p also.
You can replace this part of your wget command: 2>&1 1>/dev/null with -q. In any case, you have those reversed. The correct order would be >/dev/null 2>&1.
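Putting those two suggestions together, the function might look like this (a sketch assuming the bash builtin time; TIMEFORMAT=%R prints just elapsed real time in seconds):
time_a_url ()
{
    local oneurltime
    # time writes to stderr, so redirect it into the command substitution
    oneurltime=$({ TIMEFORMAT=%R; time wget -q -p "$1" -O /dev/null; } 2>&1)
    echo "Fetching $1 took $oneurltime seconds."
}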
On Mac OS X, -P 0 is rejected with xargs: max. processes must be >0 (for: xargs -P [>0]), so pass an explicit positive count:
f() { echo "[$@]"; }
export -f f
echo -e "b 1\nc 2\nd 3 4" | sed 's/ /\\ /g' | xargs -P 10 -n 1 -I{} bash -c f\ \{\}
echo -e "b 1\nc 2\nd 3 4" | xargs -P 10 -I '{}' bash -c 'f "$@"' arg0 '{}'
If you install GNU Parallel on another system, you will see the functionality is in a single file (called parallel).
You should be able to simply copy that file to your own ~/bin.

Using xargs to assign stdin to a variable

All that I really want to do is make sure everything in a pipeline succeeded and assign the last stdin to a variable. Consider the following dumbed down scenario:
x=`exit 1|cat`
When I run declare -a, I see this:
declare -a PIPESTATUS='([0]="0")'
I need some way to notice the exit 1, so I converted it to this:
exit 1|cat|xargs -I {} x={}
And declare -a gave me:
declare -a PIPESTATUS='([0]="1" [1]="0" [2]="0")'
That is what I wanted, so I tried to see what would happen if the exit 1 didn't happen:
echo 1|cat|xargs -I {} x={}
But it fails with:
xargs: x={}: No such file or directory
Is there any way to have xargs assign {} to x? What about other methods of having PIPESTATUS work and assigning the stdin to a variable?
Note: these examples are dumbed down. I'm not really doing an exit 1, echo 1 or a cat, but used these commands to simplify so we can focus on my particular issue.
When you use backticks (or the preferred $()) you're running those commands in a subshell. The PIPESTATUS you're getting is for the assignment rather than the piped commands in the subshell.
When you use xargs, it knows nothing about the shell so it can't make variable assignments.
Try set -o pipefail; then you can get the status from $?.
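For the example from the question, a quick sketch:
set -o pipefail
x=$(exit 1 | cat)
echo $?    # prints 1 with pipefail; without it, 0 (cat's status)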
xargs runs in a child process, as do all the commands you call, so they can't affect the environment of your shell.
You might be able to do something with named pipes (mkfifo), or possibly bash's read builtin.
EDIT:
Maybe just redirect the output to a file, then you can use PIPESTATUS:
command1 | command2 | command3 >/tmp/tmpfile
## Examine PIPESTATUS
X=$(cat /tmp/tmpfile)
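The examine step might look like this (a sketch; note that PIPESTATUS has to be copied immediately, before another command overwrites it):
status=("${PIPESTATUS[@]}")    # copy right away; the next command clobbers PIPESTATUS
for s in "${status[@]}"; do
    [ "$s" -eq 0 ] || echo "a pipeline stage exited with status $s" >&2
done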
How about ...
read x <<<"$(echo 1)"
read x < <(echo 1)
echo "$x"
Why not just populate a new array?
IFS=$'\n' read -r -d '' -a result < <(echo a | cat | cat; echo "PIPESTATUS='${PIPESTATUS[*]}'" )
IFS=$'\n' read -r -d '' -a result < <(echo a | exit 1 | cat; echo "PIPESTATUS='${PIPESTATUS[*]}'" )
echo "${#result[@]}"
echo "${result[@]}"
echo "${result[0]}"
echo "${result[1]}"
There are already a few helpful solutions. It turns out that I actually had an example that matches the question as framed above; close enough, anyway.
Consider this:
XX=$(ls -l *.cpp | wc -l | xargs -I{} echo {})
echo $XX
3
Meaning that I had 3 .cpp files in my working directory. Now $XX is 3 and I can make use of that result in my script. It is contrived, because I don't actually need the xargs in this example; it works, though.
In the example from the question ...
x=`exit 1|cat`
I don't think that will give you what was specified: exit will quit the subshell before cat gets a mention. Also on that note,
I might start with something like:
declare -a PIPESTATUS='([0]="0")'
x=$?
x now has the status from the last command.
Assign each line of input to an array, e.g. all Python files in a directory:
declare -a pyFiles=($(ls -l *.py | awk '{print $9}'))
where $9 is the ninth field in ls -l, corresponding to the filename.
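Note that parsing ls -l breaks on filenames containing spaces; a more robust sketch uses a glob instead of ls:
declare -a pyFiles=( *.py )    # glob expansion handles whitespace in names safely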
