Shell script evaluated command vs command difference: multiple $(uuidgen) calls give me the same result - shell

uuidgen is supposed to generate a random UUID for every call. Consider the command below:
╰─○ cat a.txt | xargs -I {} -L 1 sh -c "uuidgen"
54693322-1ABF-4FCB-96E5-90EC0F4AC33E
9F1BA4CF-5612-46D7-90E9-EE653F0396FE
25F5D853-03BA-42F7-9FF8-1D3E124D09B3
046A348E-3FC0-414A-8469-21A016147245
This is good, but the command below gives me the same UUID every time:
╰─○ cat a.txt | xargs -I {} -L 1 sh -c "echo $(uuidgen)"
7477A621-331C-4727-8471-677528BC79AC
7477A621-331C-4727-8471-677528BC79AC
7477A621-331C-4727-8471-677528BC79AC
7477A621-331C-4727-8471-677528BC79AC

The command substitution $(..) inside double quotes (") is expanded by the calling shell before xargs ever runs, so you are effectively running:
xargs -I {} -L 1 sh -c "echo 7477A621-331C-4727-8471-677528BC79AC"
and it will print the same value each time. You can debug scripts with set -x, inspect what xargs actually runs with its -t option, and check your scripts with shellcheck.net.
If you want to pass the string echo $(uuidgen) as-is to the shell spawned by xargs, you have to quote it:
xargs sh -c "echo \$(uuidgen)"
# or
xargs sh -c 'echo $(uuidgen)'
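As a quick check (a sketch, assuming the same a.txt), xargs -t prints each command before running it, which makes the difference visible:
# Double quotes: the substitution has already happened, so -t shows one fixed UUID
cat a.txt | xargs -I {} -L 1 -t sh -c "echo $(uuidgen)"
# Single quotes: -t shows the literal $(uuidgen), which each spawned sh expands anew
cat a.txt | xargs -I {} -L 1 -t sh -c 'echo $(uuidgen)'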

Related

xargs output buffering -P parallel

I have a bash function that I call in parallel using xargs -P, like so:
echo ${list} | xargs -n 1 -P 24 -I# bash -l -c 'myAwesomeShellFunction #'
Everything works fine, but the output is messed up for obvious reasons (no buffering).
I'm trying to figure out a way to buffer the output effectively. I was thinking I could use awk, but I'm not good enough to write such a script and I can't find anything worthwhile on Google. Can someone help me write this "output buffer" in sed or awk? Nothing fancy, just accumulate the output and spit it out after the process terminates. I don't care about the order in which the shell functions execute, I just need their output buffered... Something like:
echo ${list} | xargs -n 1 -P 24 -I# bash -l -c 'myAwesomeShellFunction # | sed -u ""'
P.S. I tried to use stdbuf as per
https://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe, but it did not work; I specified buffering on stdout and stderr, but the output is still unbuffered:
echo ${list} | xargs -n 1 -P 24 -I# stdbuf -i0 -oL -eL bash -l -c 'myAwesomeShellFunction #'
Here's my first attempt; this only captures the first line of output:
$ bash -c "echo stuff;sleep 3; echo more stuff" | awk '{while (( getline line) > 0 )print "got ",$line;}'
$ got stuff
This isn't quite atomic if your output is longer than a page (4kb typically), but for most cases it'll do:
xargs -P 24 bash -c 'for arg; do printf "%s\n" "$(myAwesomeShellFunction "$arg")"; done' _
The magic here is the command substitution: $(...) creates a subshell (a fork()ed-off copy of your shell), runs the code ... in it, and then reads its output to be substituted into the relevant position in the outer script.
Note that we don't need -n 1 (if you're dealing with a large number of arguments -- for a small number it may improve parallelization), since we're iterating over as many arguments as each of your 24 parallel bash instances is passed.
If you want to make it truly atomic, you can do that with a lockfile:
# generate a lockfile, arrange for it to be deleted when this shell exits
lockfile=$(mktemp -t lock.XXXXXX); export lockfile
trap 'rm -f "$lockfile"' 0
xargs -P 24 bash -c '
  for arg; do
    {
      output=$(myAwesomeShellFunction "$arg")
      flock -x 99
      printf "%s\n" "$output"
    } 99>"$lockfile"
  done
' _
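One caveat worth noting (not stated in the answer itself): the bash processes spawned by xargs can only see myAwesomeShellFunction if it has been exported first, e.g.:
# bash-specific: export the function so the child bash processes started by xargs can see it
export -f myAwesomeShellFunction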

Bash - how do I output a line and then pipe the line to another command, side by side? [duplicate]

cat a.txt | xargs -I % echo %
In the example above, xargs takes echo % as the command argument. But in some cases, I need multiple commands to process the argument instead of one. For example:
cat a.txt | xargs -I % {command1; command2; ... }
But xargs doesn't accept this form. One solution I know is that I can define a function to wrap the commands, but I want to avoid that because it is complex. Is there a better solution?
cat a.txt | xargs -d $'\n' sh -c 'for arg do command1 "$arg"; command2 "$arg"; ...; done' _
...or, without a Useless Use Of cat:
<a.txt xargs -d $'\n' sh -c 'for arg do command1 "$arg"; command2 "$arg"; ...; done' _
To explain some of the finer points:
The use of "$arg" instead of % (and the absence of -I in the xargs command line) is for security reasons: Passing data on sh's command-line argument list instead of substituting it into code prevents content that data might contain (such as $(rm -rf ~), to take a particularly malicious example) from being executed as code.
Similarly, the use of -d $'\n' is a GNU extension which causes xargs to treat each line of the input file as a separate data item. Either this or -0 (which expects NULs instead of newlines) is necessary to prevent xargs from trying to apply shell-like (but not quite shell-compatible) parsing to the stream it reads. (If you don't have GNU xargs, you can use tr '\n' '\0' <a.txt | xargs -0 ... to get line-oriented reading without -d).
The _ is a placeholder for $0, such that other data values added by xargs become $1 and onward, which happens to be the default set of values a for loop iterates over.
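A small demonstration of why this matters (a hypothetical example with a throwaway a.txt, using $(date) as a stand-in for something nastier):
# a.txt contains a line with a command substitution in it
printf '%s\n' 'hello $(date)' > a.txt
# Unsafe: the data is pasted into the code string, so $(date) gets executed
<a.txt xargs -I % sh -c 'echo %'
# Safe: the data arrives as a positional parameter and is printed literally
<a.txt xargs -d $'\n' sh -c 'for arg do echo "$arg"; done' _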
You can use
cat file.txt | xargs -i sh -c 'command {} | command2 {} && command3 {}'
Here {} is a placeholder that is replaced with each line of the text file.
With GNU Parallel you can do:
cat a.txt | parallel 'command1 {}; command2 {}; ...; '
For security reasons it is recommended that you use your package manager to install it, but if you cannot do that, you can use this 10-second installation.
The 10-second installation will try to do a full installation; if that fails, a personal installation; if that fails, a minimal installation.
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep da012ec113b49a54e705f86d51e784ebced224fdf
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
I prefer a style which allows a dry-run mode (without the | sh):
cat a.txt | xargs -I % echo "command1; command2; ... " | sh
Works with pipes too:
cat a.txt | xargs -I % echo "echo % | cat " | sh
This is just another approach, without xargs or cat:
while IFS= read -r stuff; do
  command1 "$stuff"
  command2 "$stuff"
  ...
done < a.txt
This seems to be the safest version.
tr '[\n]' '[\0]' < a.txt | xargs -r0 /bin/bash -c 'command1 "$@"; command2 "$@";' ''
(The -0 can be removed and the tr replaced with a redirect, or the file can be replaced with a null-separated file instead. It is mainly there because I mostly use xargs together with find and its -print0 output. This might also be relevant on xargs versions without the -0 extension.)
It is safe, since xargs will pass the parameters to the shell as an array when executing it. The shell (at least bash) will then pass them on as an unaltered array to the other processes when they are all obtained using "$@".
If you use ...| xargs -r0 -I{} bash -c 'f="{}"; command "$f";' '', the assignment will fail if the string contains double quotes. This is true for every variant using -i or -I. (Because the data is replaced into a string, you can always inject commands by inserting unexpected characters (like quotes, backticks or dollar signs) into the input data.)
If the commands can only take one parameter at a time:
tr '[\n]' '[\0]' < a.txt | xargs -r0 -n1 /bin/bash -c 'command1 "$@"; command2 "$@";' ''
Or with somewhat less processes:
tr '[\n]' '[\0]' < a.txt | xargs -r0 /bin/bash -c 'for f in "$@"; do command1 "$f"; command2 "$f"; done;' ''
If you have GNU xargs or another with the -P extension and you want to run 32 processes in parallel, each with not more than 10 parameters for each command:
tr '[\n]' '[\0]' < a.txt | xargs -r0 -n10 -P32 /bin/bash -c 'command1 "$@"; command2 "$@";' ''
This should be robust against any special characters in the input. (If the input is null separated.) The tr version will get some invalid input if some of the lines contain newlines, but that is unavoidable with a newline separated file.
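If the input comes from find anyway, as mentioned above, a null-separated stream avoids the tr step and the newline caveat entirely (a sketch; the name pattern is just an example):
find . -type f -name '*.log' -print0 | xargs -r0 /bin/bash -c 'command1 "$@"; command2 "$@";' ''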
The blank first parameter for bash -c is due to this (from the bash man page; thanks @clacke):
-c      If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages.
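A tiny illustration of that behaviour (not from the original answer):
bash -c 'echo "\$0 is $0, \$1 is $1, \$2 is $2"' myname first second
# prints: $0 is myname, $1 is first, $2 is second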
One thing I do is to add to .bashrc/.profile this function:
function each() {
  while read line; do
    for f in "$@"; do
      $f $line
    done
  done
}
then you can do things like
... | each command1 command2 "command3 has spaces"
which is less verbose than xargs or -exec. You could also modify the function to insert the value from the read at an arbitrary location in the commands to each, if you needed that behavior also.
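A possible variant along those lines (a hypothetical sketch, not part of the original answer): pass the line as a positional parameter so each command string can place it wherever $1 appears:
function each_at() {
  while IFS= read -r line; do
    for cmd in "$@"; do
      # run each command string in a child bash with the current line as $1
      bash -c "$cmd" _ "$line"
    done
  done
}
# usage: ... | each_at 'echo "before: $1"' 'cp "$1" /tmp/'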
Another possible solution that works for me is something like -
cat a.txt | xargs bash -c 'command1 "$@"; command2 "$@"' bash
Note the 'bash' at the end - I assume it is passed as argv[0] to bash. Without it in this syntax the first parameter to each command is lost. It may be any word.
Example:
cat a.txt | xargs -n 5 bash -c 'echo -n `date +%Y%m%d-%H%M%S:` ; echo " data: " "$@"; echo "data again: " "$@"' bash
My current BKM (best known method) for this is
... | xargs -n1 -I % perl -e 'system("echo 1 %"); system("echo 2 %");'
It is unfortunate that this uses perl, which is less likely to be installed than bash; but it handles more inputs than the accepted answer. (I welcome a ubiquitous version that does not rely on perl.)
@KeithThompson's suggestion of
... | xargs -I % sh -c 'command1; command2; ...'
is great - unless you have the shell comment character # in your input, in which case part of the first command and all of the second command will be truncated.
Hashes (#) can be quite common if the input is derived from a filesystem listing, such as ls or find, and your editor creates temporary files with # in their names.
Example of the problem:
$ bash 1366 $> /bin/ls | cat
#Makefile#
#README#
Makefile
README
Oops, here is the problem:
$ bash 1367 $> ls | xargs -n1 -I % sh -i -c 'echo 1 %; echo 2 %'
1
1
1
1 Makefile
2 Makefile
1 README
2 README
Ahh, that's better:
$ bash 1368 $> ls | xargs -n1 -I % perl -e 'system("echo 1 %"); system("echo 2 %");'
1 #Makefile#
2 #Makefile#
1 #README#
2 #README#
1 Makefile
2 Makefile
1 README
2 README
$ bash 1369 $>
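For what it's worth, a shell-only variant (a sketch, not from the original answers) that survives the # characters is to let -I substitute into an argument position rather than into the code string:
ls | xargs -I % sh -c 'echo 1 "$1"; echo 2 "$1"' _ %
Here % only ever reaches sh as the value of $1, so a # in a filename is never parsed as a comment character.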
Try this:
git config --global alias.all '!f() { find . -d -name ".git" | sed s/\\/\.git//g | xargs -P10 -I{} git --git-dir={}/.git --work-tree={} $1; }; f'
It runs ten processes in parallel and applies whatever git command you want to all repos in the folder structure, no matter whether a repo is one or n levels deep.
E.g: git all pull
I have a good idea for solving the problem: just write a command mcmd, and then you can do
find . -type f | xargs -i mcmd echo {} ## cat {} #pipe sed -n '1,3p'
The content of mcmd is as follows:
echo $* | sed -e 's/##/\n/g' -e 's/#pipe/|/g' | csh

running a pipe command with variable substitution on remote host

I'd like to run a piped command with variable substitution on a remote host and redirect the output. Given that the login shell is csh, I have to use "bash -c". With help from users nlrc and jerdiggity, a command with no variable substitution can be formulated as:
localhost$ ssh -f -q remotehost 'bash -c "ls /var/tmp/ora_flist.sh|xargs -L1 cat >/var/tmp/1"'
but the single quotes above preclude using variable substitution, say, replacing ora_flist.sh with a variable like $filename. How can I accomplish that?
Thanks.
Something like this should work:
ssh -f -q remotehost 'bash -c "ls /var/tmp/ora_flist.sh|xargs -L1 cat >/var/tmp/1"'
So your problem was that you want the shell variable to be expanded locally. Just leave it outside the single quotes, e.g.
ssh -f -q remotehost 'bash -c "ls '$filename' | xargs ..."'
Also, a very useful trick to avoid quoting hell is to use a heredoc, e.g.
ssh -f -q remotehost <<EOF
bash -c "ls $filename | xargs ... "
EOF

Pass contents of LaTeX statement to command

I can extract the contents of all the \include statements from a LaTeX file (and append ".tex" to each one) with
grep -P "\\\\include{" Thesis_master.tex |sed -n -e"s/\\\\include{/$1/" -e" s/}.*$/.tex/p"
(I couldn't get lookbehinds working in grep, hence the pipe through sed.) That gives me a list of filenames, one per line. I'd now like to pass those files to aspell, but aspell only accepts one filename as an argument, so I can't just tack | xargs aspell -c on the end.
I've read this related question but that reads from a file line by line through xargs. So how do I get it to read from the output of sed line by line?
I think xargs -L 1 should do what you need:
grep -P "\\\\include{" Thesis_master.tex | \
sed -n -e"s/\\\\include{/$1/" -e" s/}.*$/.tex/p" | \
xargs -L 1 aspell -c
(Backslash line continuation added for readability)
This will cause xargs to call aspell exactly once per line from the sed pipe.
Since your aspell commands appear to exit with a 255 code, xargs stops. You could trick xargs into not exiting by doing something like:
grep -P "\\\\include{" Thesis_master.tex | \
sed -n -e"s/\\\\include{/$1/" -e" s/}.*$/.tex/p" | \
xargs -L 1 -I % bash -c "aspell -c %; true"
This will run aspell in a subshell, followed by the true command, which always returns a 0 exit code to xargs.
The grep recipe is:
grep -oP '\\include{\K.+?(?=})' latex.file | xargs aspell ...

Shell Scripting: Using xargs to execute parallel instances of a shell function

I'm trying to use xargs in a shell script to run parallel instances of a function I've defined in the same script. The function times the fetching of a page, and so it's important that the pages are actually fetched concurrently in parallel processes, and not in background processes (if my understanding of this is wrong and there's negligible difference between the two, just let me know).
The function is:
function time_a_url ()
{
  oneurltime=$($time_command -p wget -p $1 -O /dev/null 2>&1 1>/dev/null | grep real | cut -d" " -f2)
  echo "Fetching $1 took $oneurltime seconds."
}
How does one do this with an xargs pipe, in a form that can take the number of parallel instances of time_a_url as an argument? And yes, I know about GNU Parallel; I just don't have the privileges to install software where I'm writing this.
Here's a demo of how you might be able to get your function to work:
$ f() { echo "[$@]"; }
$ export -f f
$ echo -e "b 1\nc 2\nd 3 4" | xargs -P 0 -n 1 -I{} bash -c f\ \{\}
[b 1]
[d 3 4]
[c 2]
The keys to making this work are to export the function so the bash that xargs spawns will see it and to escape the space between the function name and the escaped braces. You should be able to adapt this to work in your situation. You'll need to adjust the arguments for -P and -n (or remove them) to suit your needs.
You can probably get rid of the grep and cut. If you're using the Bash builtin time, you can specify an output format using the TIMEFORMAT variable. If you're using GNU /usr/bin/time, you can use the --format argument. Either of these will allow you to drop the -p also.
You can replace this part of your wget command: 2>&1 1>/dev/null with -q. In any case, you have those reversed. The correct order would be >/dev/null 2>&1.
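Putting those suggestions together, the function might look roughly like this (a sketch assuming the bash builtin time; untested):
function time_a_url ()
{
  # TIMEFORMAT controls the output of the bash `time` keyword; %R is real (wall-clock) time
  local TIMEFORMAT="Fetching $1 took %R seconds."
  time wget -q -p "$1" -O /dev/null
}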
On Mac OS X, xargs -P 0 is rejected:
xargs: max. processes must be >0 (for: xargs -P [>0])
f() { echo "[$@]"; }
export -f f
echo -e "b 1\nc 2\nd 3 4" | sed 's/ /\\ /g' | xargs -P 10 -n 1 -I{} bash -c f\ \{\}
echo -e "b 1\nc 2\nd 3 4" | xargs -P 10 -I '{}' bash -c 'f "$@"' arg0 '{}'
If you install GNU Parallel on another system, you will see the functionality is in a single file (called parallel).
You should be able to simply copy that file to your own ~/bin.
