Is "xargs" on MacOS not the same as linux? - bash

For the following command:
docker ps -a -q | xargs -r docker kill
I get this error:
xargs: illegal option -- r
What would be the MacOS equivalent of the above command?

The equivalent is simply docker ps -a -q | xargs docker kill.
-r (a.k.a. --no-run-if-empty) is only necessary with GNU xargs, which by default runs the command at least once even if there is no input; -r disables this. BSD xargs (the one shipped on macOS) doesn't have this behavior, so there's nothing to disable.
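The difference is easy to see with empty input (GNU xargs shown; on macOS both pipelines print nothing):

```shell
# GNU xargs runs its command once even when stdin is empty:
printf '' | xargs echo ran        # prints "ran"
# -r suppresses that empty run (a GNU extension):
printf '' | xargs -r echo ran     # prints nothing
# BSD/macOS xargs skips the empty run by default, so -r does not exist there.
```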

How to run a command like xargs on a grep output of a pipe of a previous xargs from a command in Bash

I'm trying to understand what's happening here out of curiosity, even though I can just copy and paste the output of the terminal to do what I need to do. The following command does not print anything.
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u | parallel "sudo port -N install" {} 2>&1 | grep -Po "Use '\K.*(?=')" | parallel "{}"
The directory I call ls on contains a bunch of filenames starting with the string I want to extract, which ends at the first dash (so stringexample-4.2009 yields stringexample), piped into parallel (like xargs, but running the command once per input line). After running the command sudo port install <stringexample>, I get error outputs like so:
Unable to activate port <stringexample>. Use 'port -f activate <stringexample>' to force the activation.
Now, I wish to run port -f activate <stringexample>. However, I cannot seem to do anything with the output port -f activate gettext that I get to the terminal.
I cannot even do ... | grep -Po "Use '\K.*(?=')" | xargs echo or ... | grep -Po "Use '\K.*(?=')" >> commands_to_run.txt (the redirection only creates an empty file), even though the shorter part of the command:
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u | parallel "sudo port -N install {}" 2>&1 | grep -Po "Use '\K.*(?=')"
prints the commands to the terminal. Why does the pipe operator not work here? If the commands I wish to run are printed to the terminal, surely there has to be a way to capture them.
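For what it's worth, the grep extraction itself behaves as intended on a canned error line (simulated input, with gettext standing in for a port name), so whatever breaks must be elsewhere in the pipeline:

```shell
# \K discards everything matched so far; (?=') stops before the closing quote:
printf "Unable to activate port gettext. Use 'port -f activate gettext' to force the activation.\n" \
  | grep -Po "Use '\K.*(?=')"
# prints: port -f activate gettext
```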

How do I not show the processes that I can't kill with 'kill + [pid number]' command?

I was working on a project ("make a task manager in Linux") at school.
I used the ps -u [username] -o stat,cmd --sort=-pid | awk '{print $2$3$4}' command to get command names from ps.
If I use this command, part of the result looks like this:
awk{print$2$3$4}
ps-u[username]
When I try to terminate those processes using their PIDs, it doesn't work, because those PIDs no longer exist.
How can I avoid showing those awk{print$2$3$4} and ps-u[username] entries?
I couldn't think of any idea
ps -u [username] -o stat,cmd --sort=-pid | awk '{print $2$3$4}'
You can't kill them because they were only alive while the pipeline was running: they are the very commands you used to generate that output.
There are a few ways you can suppress these. I think the easiest is to filter them out in your awk script:
ps -u [username] -o stat,cmd --sort=-pid | awk '$2!="awk" && $2!="ps"{print $2$3$4}'
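With canned input standing in for the ps output (the sample lines below are made up), the filter drops every line whose second field, the command name under -o stat,cmd, is awk or ps:

```shell
# Simulated 'ps -u user -o stat,cmd' output piped through the filter:
printf 'S bash -l\nR ps -u user -o stat,cmd\nR awk {print}\nS vim notes.txt\n' \
  | awk '$2 != "awk" && $2 != "ps" { print $2 $3 $4 }'
# prints:
# bash-l
# vimnotes.txt
```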
JNevill's solution excludes every running awk or ps process. I think it's better to exclude processes by their tty. Also, you aren't getting complete commands the way you use awk; I (sort of) solved that with sed.
$ ps -u $USER -o stat,tty,cmd --sort=-pid | grep -v `ps -h -o tty $$` | sed -r 's/.* (.*)$/\1/'
You can test it with the following command. I opened man ps in another terminal.
$ ps -u $USER -o stat,tty,cmd --sort=-pid | grep -v `ps -h -o tty $$` | grep -E '(ps|grep)'
S+ pts/14 man ps
The downside is that, besides excluding ps and grep, it excludes your application as well.

gitlab-ci: remote shell execution with variable expansion

Within a gitlab-ci job, I want to create a directory (on a remote server) with the job id and perform some actions more or less as follows:
- ssh user@server mkdir -p /user/$CI_JOB_ID
- ssh user@server <perform_some_action_that_creates_several_zip_files>
- LAST_MODIFIED_FILE=$(ssh user@server bash -c 'find /user/$CI_JOB_ID -iname "*.zip" | tail -n 1 | xargs readlink -f')
The directory does get created and the action that creates several zips works out.
However, the last command that I use for getting the last modified/created .zip does not work, because $CI_JOB_ID does not seem to get expanded.
Any suggestions?
This issue is due to your ssh call. The way you do it now, you are mixing contexts:
ssh user@server bash -c 'find /user/$CI_JOB_ID -iname "*.zip" | tail -n 1 | xargs readlink -f'
bash -c 'my_commands': with single quotes, the string is passed to the remote side untouched, so the remote shell tries to expand $CI_JOB_ID there, where it is empty, instead of using the local value.
ssh user@server bash -c "my_commands": with ssh, the commands to execute on the remote shell are sent as a single string. With this quotation, the remote side would effectively run bash -c find ... | tail ... | xargs ...; only find is run through bash -c, while the rest of the pipeline is interpreted by the remote login shell.
In your case, simply writing the following statement directly should do the trick:
ssh user@server "find /user/$CI_JOB_ID -iname '*.zip' | tail -n 1 | xargs readlink -f"
Otherwise, if you want to keep the bash -c syntax, you would have to escape the quotes so that they are propagated to the remote machine:
ssh user@server bash -c \"find /user/$CI_JOB_ID -iname '*.zip' | tail -n 1 | xargs readlink -f\"
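The quoting difference can be reproduced without ssh at all; in this sketch bash -c plays the role of the remote shell, and CI_JOB_ID is set only in the outer (local) shell:

```shell
unset CI_JOB_ID            # make sure nothing is inherited from the environment
CI_JOB_ID=12345            # set only in the local shell (not exported)
# Single quotes: the child shell receives $CI_JOB_ID literally and expands
# it itself, where it is empty:
bash -c 'echo "/user/$CI_JOB_ID"'    # prints /user/
# Double quotes: the local shell substitutes the value before bash -c runs:
bash -c "echo '/user/$CI_JOB_ID'"    # prints /user/12345
```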

Fish shell input redirection from subshell output

When I want to run Wireshark locally to display a packet capture running on another machine, this works on bash, using input redirection from the output of a subshell:
wireshark -k -i <(ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0")
From what I could find, the syntax for similar behavior in the fish shell is the same, but when I run that command in fish, I get the Wireshark output in the terminal and no Wireshark window.
Is there something I'm missing?
What you're using there in bash is process substitution (the <() syntax). It is bash-specific (although zsh adopted the same syntax, along with its own =()).
fish does have process substitution, under a different syntax: (process | psub). For example:
wireshark -k -i (ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0" | psub)
bash        | equivalent in fish
------------|------------------------------------------------------
cat <(ls)   | cat (ls | psub)
ls > >(cat) | N/A (need to find a way to use a pipe, e.g. ls | cat)
The fish equivalent of <() isn't well suited to this use case. Is there some reason you can't use this simpler, more portable formulation?
ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0" | wireshark -k -i -
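What <() (and fish's psub) actually hand to the command is a file path to read from, which is why the substitution works anywhere a filename is expected; a quick bash illustration:

```shell
# echo shows the path bash substitutes (a /dev/fd entry or a FIFO):
echo <(true)
# the command opens that path like an ordinary file:
cat <(echo hello)    # prints hello
```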

Why does sudo change a blocking command to a non-blocking command when used in a while-loop?

Or: how do I prevent a sudo'ed rsync from firing infinitely in a while loop?
Because that is what (it feels like) is happening, and I don't get it.
I am trying to set up a watch for syncing modified files, and it works fine. However, once I introduce the required sudo to the rsync command, a single inotify event causes the rsync command to fire indefinitely.
#!/usr/bin/env bash
inotifywait -m -r --format '%w%f' -e modify -e move -e create -e delete /var/test | while read line; do
    sudo rsync -ah --del --progress --stats --update "$line" "/home/test/"
done
When you edit a file, rsync goes into rapid-fire mode. But drop the sudo (and use folders you have permissions for, of course) and the script works as expected.
Why is this?
How do I make this work correctly with the sudo command?
I have the answer; I found it by experimenting. But I have no idea why it works. Please, someone tell me why sudo in this loop breaks the expected blocking behavior.
Since sudo breaks the script, we can distance ourselves from sudo by using a wrapper:
This is correct:
inotifywait -m -r --format '%w%f' -e modify /var/test | while read line; do
    sh -c 'sudo rsync -ah "$line" "/home/test/"'
done
The weird thing is: pull the sudo out of the wrapper and the old faulty behavior is back. Very strange.
This is wrong:
inotifywait -m -r --format '%w%f' -e modify /var/test | while read line; do
    sudo sh -c 'rsync -ah "$line" "/home/test/"'
done
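I can't say for certain that this is what sudo triggers here, but the classic way a while read loop loses its blocking behavior is a command in the body reading from the loop's shared stdin; the usual guard is redirecting that command from /dev/null. A minimal, self-contained demonstration of the failure mode:

```shell
printf 'a\nb\nc\n' | while read -r line; do
    echo "got: $line"
    head -n 1 > /dev/null   # reads from the same stdin and swallows b and c
done
# prints only: got: a
```

Note also that in the "correct" version above, $line sits inside single quotes, so the outer shell never expands it; the inner sh sees a literal $line, which is empty there.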
