xargs with variable number of arguments - shell

I am trying to create a command to show logs for all running allocations in a Nomad cluster. I can get the allocation IDs with this command:
curl -s $NOMAD_ADDR/v1/allocations | jq -r '.[] | select(.JobID=="MY_JOB_NAME") | "\(.ID)"'
From there I would like to run multitail with nomad logs <allocation> -tail -f for each allocation so that I can watch all of the logs at once. The syntax of a multitail call looks like this:
multitail [options] -l "shell 1 command" -l "shell 2 command" -l...
If you open 5 shells, then you need 5 -l arguments.
I do not see this functionality in the xargs manual, but I need something like xargs multitail --arg-for-each "-l my shell command {}". Is it possible to use xargs to construct commands with variable numbers of arguments in this way? If not, is there any alternative that I can use to do so?

Your input is a list of ids. For the purposes of this answer, let's say that looks like:
foo
bar
baz
You want to transform this into:
multitail \
-l "nomad logs foo -tail -f" \
-l "nomad logs bar -tail -f" \
-l "nomad logs baz -tail -f"
Perhaps something like this would work:
eval multitail $(command_that_generates_ids | xargs -IID echo "-l 'nomad logs ID -tail -f'")
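If you'd rather avoid eval and its quoting pitfalls, you can build the same argument list in a bash array; a minimal sketch, with command_that_generates_ids again standing in for the curl | jq pipeline above:

args=()
while IFS= read -r id; do
    # one -l "shell command" pair per allocation
    args+=(-l "nomad logs $id -tail -f")
done < <(command_that_generates_ids)
multitail "${args[@]}"

Each array element survives word splitting intact, so every per-allocation command reaches multitail as a single -l argument without eval.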

Related

How to run a command like xargs on a grep output of a pipe of a previous xargs from a command in Bash

I'm trying to understand what's happening here out of curiosity, even though I can just copy and paste the output of the terminal to do what I need to do. The following command does not print anything.
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u | parallel "sudo port -N install" {} 2>&1 | grep -Po "Use '\K.*(?=')" | parallel "{}"
The directory I call ls on contains a bunch of filenames starting with the string I want to extract, which ends at the first dash (so stringexample-4.2009 becomes stringexample); each extracted string is piped into parallel (like xargs, but it runs each line as a separate command). After running sudo port install <stringexample>, I get error outputs like so:
Unable to activate port <stringexample>. Use 'port -f activate <stringexample>' to force the activation.
Now, I wish to run port -f activate <stringexample>. However, I cannot seem to do anything with the output port -f activate gettext that is printed to the terminal.
I cannot even do ... | grep -Po "Use '\K.*(?=')" | xargs echo or ... | grep -Po "Use '\K.*(?=')" >> commands_to_run.txt (the redirection only creates an empty file), despite the shorter part of the command:
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u | parallel "sudo port -N install {}" 2>&1 | grep -Po "Use '\K.*(?=')"
printing the commands to the terminal. Why does the pipe operator not work here? If the commands I wish to run are outputting to the terminal, surely there's got to be a way to capture them.

How do I pass multiple query parameters by xargs into httpie?

I tried the following to pass parameters to httpie, and the request unexpectedly turned into a POST.
1)
$ echo "a1 b1" | xargs -t -n2 bash -c 'http -v https://httpbin.org/anything arg1==$0 arg2==$1'
bash -c http -v https://httpbin.org/anything arg1==$0 arg2==$1 a1 b1
2)
$ echo "arg1==a1 arg2==b1" | xargs -t -n2 bash -c 'http -v https://httpbin.org/anything'
bash -c http -v https://httpbin.org/anything arg1==a1 arg2==b1
The first one prints the trace below, and it seems like the additional "a1 b1" prevents a proper request:
bash -c http -v https://httpbin.org/anything arg1==$0 arg2==$1 a1 b1
The second one looks closer, but the actual method turns into POST.
Is there any way to pass multiple parameters to httpie?
Here is a way to accomplish your goal:
echo "a1 b1" |
awk '{print "http -v https://httpbin.org/anything arg1=="$1" arg2=="$2}' |
bash
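The awk stage just prints one complete command per input line, so you can sanity-check what will be executed by dropping the final | bash:

$ echo "a1 b1" |
  awk '{print "http -v https://httpbin.org/anything arg1=="$1" arg2=="$2}'
http -v https://httpbin.org/anything arg1==a1 arg2==b1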
Even inserting the string manually, like this:
$ echo 'http -v https://httpbin.org/anything arg1==a1 arg2==b2' | bash
does not work the same as:
$ http -v https://httpbin.org/anything arg1==a1 arg2==b2
I didn't understand the cause at first, but simply specifying the method made it work:
$ echo "a1 b1" | xargs -t -n2 bash -c 'http -v GET https://httpbin.org/anything arg1==$0 arg2==$1
^^^
I think I've now found the cause: it's due to stdin, so it can also be avoided with the --ignore-stdin option.
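For example (a sketch based on the first attempt above; --ignore-stdin stops httpie from reading the piped stdin as a request body, which is what was flipping the default method to POST):

$ echo "a1 b1" | xargs -t -n2 bash -c 'http -v --ignore-stdin https://httpbin.org/anything arg1==$0 arg2==$1'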

Fish shell input redirection from subshell output

When I want to run Wireshark locally to display a packet capture running on another machine, this works on bash, using input redirection from the output of a subshell:
wireshark -k -i <(ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0")
From what I could find, the syntax for similar behavior on the fish shell is the same, but when I run that command in fish, I get the Wireshark output on the terminal but can't see the Wireshark window.
Is there something I'm missing?
What you're using there in bash is process substitution (the <() syntax). It is a bash specific syntax (although zsh adopted this same syntax along with its own =()).
fish does have process substitution under a different syntax ((process | psub)). For example:
wireshark -k -i (ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0" | psub)
bash        | equivalent in fish
----------- | ------------------
cat <(ls)   | cat (ls | psub)
ls > >(cat) | N/A (need to find a way to use a pipe, e.g. ls | cat)
The fish equivalent of <() isn't well suited to this use case. Is there some reason you can't use this simpler and more portable formulation?
ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0" | wireshark -k -i -

Command composition in bash

So I have the equivalent of a list of files being output by another command, and it looks something like this:
http://somewhere.com/foo1.xml.gz
http://somewhere.com/foo2.xml.gz
...
I need to run the XML in each file through xmlstarlet, so I'm doing ... | xargs gzip -d | xmlstarlet ..., except I want xmlstarlet to be called once for each line going into gzip, not on all of the XML documents appended to each other. Is it possible to compose 'gzip -d' and 'xmlstarlet ...' so that xargs will supply one argument to each of their composite functions?
Why not read your file and process each line separately in the shell? i.e.
fileList=/path/to/my/xmlFileList.txt
cat "${fileList}" \
| while read -r fName ; do
    # gzip -c writes to stdout and leaves the original file intact
    gzip -dc "${fName}" | xmlstarlet ... > "${fName}.new"
  done
I hope this helps.
Although the right answer is the one suggested by shelter (+1), here is a one-liner "divertimento", provided that the input is the one proposed by Andrey (a command that generates the list of URLs) :-)
~$ eval $(command | awk '{a=a "wget -O - "$0" | gzip -d | xmlstarlet > $(basename "$0" .gz ).new; " } END {print a}')
It just generates one long command line that runs wget -O - http://foo.xml.gz | gzip -d | xmlstarlet > $(basename foo.xml.gz .gz).new for each of the URLs in the input; the resulting command line is then evaluated.
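To see what would be evaluated, you can print the generated command line instead of eval-ing it; with one of the example URLs from the question:

~$ echo http://somewhere.com/foo1.xml.gz | awk '{a=a "wget -O - "$0" | gzip -d | xmlstarlet > $(basename "$0" .gz ).new; " } END {print a}'
wget -O - http://somewhere.com/foo1.xml.gz | gzip -d | xmlstarlet > $(basename http://somewhere.com/foo1.xml.gz .gz ).new;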
Use GNU Parallel:
cat filelist | parallel 'zcat {} | xmlstarlet >{.}.out'
or if you want to include the fetching of urls:
cat urls | parallel 'wget -O - {} | zcat | xmlstarlet >{.}.out'
It is easy to read, and you get the added benefit of running one job per CPU in parallel ({.} is the input line with its last extension removed, so foo1.xml.gz becomes foo1.xml). Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ
If xmlstarlet can operate on stdin instead of having to pass it a filename, then:
some command | xargs -i sh -c 'zcat "{}" | xmlstarlet options ...'
The xargs option -i means you can use the "{}" placeholder to indicate where the filename should go; it also makes xargs run the command once per input line, so a separate -n 1 is not needed.
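If the filenames can contain characters that are special to the shell, substituting {} directly into the sh -c script is fragile; a slightly safer sketch passes each name as a positional parameter instead (the trailing sh fills $0, and each filename arrives as $1):

some command | xargs -n1 sh -c 'zcat "$1" | xmlstarlet options ...' sh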

bash: comment a long pipeline

I've found that it's quite powerful to create long pipelines in bash scripts, but the main drawback that I see is that there doesn't seem to be a way to insert comments.
As an example, is there a good way to add comments to this script?
#find all my VNC sessions
ls -t $HOME/.vnc/*.pid \
| xargs -n1 \
| sed 's|\.pid$||; s|^.*\.vnc/||g' \
| xargs -P50 --replace vncconfig -display {} -get desktop \
| grep "($USER)" \
| awk '{print $1}' \
| xargs -n1 xdpyinfo -display \
| egrep "^name|dimensions|depths"
Let the pipe be the last character of each line and use # instead of \, like this:
ls -t $HOME/.vnc/*.pid | #comment here
xargs -n1 | #another comment
...
This works too:
# comment here
ls -t $HOME/.vnc/*.pid |
#comment here
xargs -n1 |
#another comment
...
Based on https://stackoverflow.com/a/5100821/1019205.
It comes down to s/|//; s!\!|! — drop the pipe from the start of each line and turn the trailing backslash into a pipe.
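For reference, here is the full pipeline from the question rewritten in that style (a mechanical transformation; the comments are placeholders):

ls -t $HOME/.vnc/*.pid |                                  # find all my VNC sessions
xargs -n1 |                                               # one pid file per line
sed 's|\.pid$||; s|^.*\.vnc/||g' |                        # strip the path and .pid suffix
xargs -P50 --replace vncconfig -display {} -get desktop | # query each display
grep "($USER)" |                                          # keep only my sessions
awk '{print $1}' |                                        # take the display name
xargs -n1 xdpyinfo -display |                             # dump info for each display
egrep "^name|dimensions|depths"                           # show name, dimensions, depths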
Unless they're spectacularly long pipelines, you don't have to comment inline, just comment at the top:
# Find all my VNC sessions.
# xargs does something.
# sed does something else
# the second xargs destroys the universe.
# :
# and so on.
ls -t $HOME/.vnc/*.pid \
| xargs -n1 \
| sed 's|\.pid$||; s|^.*\.vnc/||g' \
| xargs -P50 --replace /opt/tools/bin/restrict_resources -T1 \
-- vncconfig -display {} -get desktop 2>/dev/null \
| grep "($USER)" \
| awk '{print $1}' \
| xargs -n1 xdpyinfo -display \
| egrep "^name|dimensions|depths"
As long as comments are relatively localised, it's fine. So I wouldn't put them at the top of the file (unless your pipeline was the first thing in the file, of course) or scribbled down on toilet paper and locked in your desk at work.
But the first thing I do when looking at a block is to look for comments immediately preceding the block. Even in C code, I don't comment every line, since the intent of comments is to mostly show the why and a high-level how.
#!/bin/bash
for pid in $HOME/.vnc/*.pid; do
    tmp=${pid##*/}   # strip the directory
    disp=${tmp%.*}   # strip the .pid extension
    xdpyinfo -display "$disp" | # comment here
        egrep "^name|dimensions|depths"
done
I don't understand the need for vncconfig if all it does is append '(user)', which you subsequently remove for the call to xdpyinfo. Also, all those pipes add quite a bit of overhead; if you time your script against mine, I think you'll find the performance comparable, if not faster.
