Why does sudo change a blocking command to a non-blocking command when used in a while-loop? - bash

Or: How do I prevent a sudo'ed rsync from firing indefinitely in a while-loop?
Because that's what it feels like is happening, and I don't get it.
I am trying to set up a watch for syncing modified files, and it works fine. However, once I introduce the required sudo to the rsync command, a single inotify event causes the rsync command to fire indefinitely.
#!/usr/bin/env bash
inotifywait -m -r --format '%w%f' -e modify -e move -e create -e delete /var/test | while read line; do
    sudo rsync -ah --del --progress --stats --update "$line" "/home/test/"
done
When you edit a file, rsync goes into rapid-fire mode. But lose the sudo (and use folders you have permissions for, of course) and the script works as expected.
Why is this?
How do I make this work correctly with the sudo command?

I have the answer; I found it by experimenting. But I have no idea why it works. Can someone please tell me why sudo in this loop breaks the expected blocking behavior?
Since sudo breaks the script, we can distance ourselves from sudo by using a wrapper:
This is correct:
inotifywait -m -r --format '%w%f' -e modify /var/test | while read line; do
    sh -c 'sudo rsync -ah "$line" "/home/test/"'
done
The weird thing is: pull the sudo out of the wrapper and the old faulty behavior is back. Very strange.
This is wrong:
inotifywait -m -r --format '%w%f' -e modify /var/test | while read line; do
    sudo sh -c 'rsync -ah "$line" "/home/test/"'
done
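For what it's worth, the usual explanation for this class of problem (compare the inotifywait/ssh question further down) is that the command inside the loop inherits the pipe from inotifywait as its stdin and drains or disturbs it, so read no longer gets clean, complete lines. A minimal sketch of the standard defensive fix, assuming that is what is happening here, is to give the sudo'ed command its own stdin:
# A sketch, not a verified diagnosis: redirect stdin away from the pipe so
# sudo/rsync cannot touch the lines that inotifywait is feeding the loop.
inotifywait -m -r --format '%w%f' -e modify -e move -e create -e delete /var/test | while read -r line; do
    sudo rsync -ah --del --progress --stats --update "$line" "/home/test/" < /dev/null
done
Note also that in the "correct" wrapper above, "$line" sits inside single quotes, so it is expanded by the inner sh, where the variable is unset; if you keep the wrapper, pass the value in as an argument instead, e.g. sh -c 'sudo rsync -ah "$1" "/home/test/"' _ "$line".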

Related

Bash cron job on hpanel not locating directory

I have the following code in a cron job. It runs, but the code does not really do what it is supposed to: it does not create the directory, and it does not do anything else in the script either. Please help me check whether the way I pointed to the directory is wrong.
#!/bin/bash
NAMEDATE=`date +%F_%H-%M`_`whoami`
NAMEDATE2=`date `
mkdir ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
mysqldump -u u3811*****_boss -p"*******" u3811*****_data | gzip ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz
echo "This is the database backup for website.com on $NAMEDATE2" |
mailx -a ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz -s "website.com Database attached" -- mail@gmail.com
chmod -R 0644 ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
exit 0
Your NAMEDATE variable needs to be modified a bit, as shown below:
NAMEDATE=$(date +%F_%H-%M"_"$(whoami))
When you issue the mkdir command you will need to pass the -p option to create the complete directory structure if it doesn't exist.
mkdir -p ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
Also, the ~ character on Linux-based distributions is a shortcut for the home directory of the user that invokes it, so a path like ~/home/u3811*****/... expands to /home/<user>/home/u3811*****/... (in your cron environment it came out as /home//home/u3811*****/domains/website.com/public_html/cron/backup/files/2020-09-04_23-13_). Drop the ~ and use the absolute /home/... path instead.
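A quick way to see the double-prefix problem (u3811xxxxx stands in for the masked account name):
# ~ expands to the invoking user's home directory, so an absolute-looking
# path prefixed with ~ gets /home/<user> prepended a second time:
echo ~/home/u3811xxxxx/domains   # -> /home/<user>/home/u3811xxxxx/domains
echo ~/domains                   # -> /home/<user>/domains (probably what was meant)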
In your last command before the exit, you need the wildcard (*) at the end of the path, so the recursive chmod doesn't also strip the executable bit from the directory itself; see below.
chmod -R 0644 /home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
The final version of your script will look something like this.
#!/bin/bash
NAMEDATE=$(date +%F_%H-%M"_"$(whoami))
NAMEDATE2=$(date)
mkdir -p /home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
mysqldump -u u3811*****_boss -p"******" u3811*****_data | gzip > /home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz
echo "This is the database backup for website.com on $NAMEDATE2" | mailx -a /home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz -s "website.com Database attached" -- mail@gmail.com
chmod -R 0644 /home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
To debug a bash script you can always run it with the -x flag (for example, bash -x script.sh) to trace each command as it executes.

Some Output Lost in Command Passed to SSH

I'm trying to use an ssh command to connect to a server and run the useradd command I pass to it. It seems to run OK for the most part (no errors produced), but the hashed password in the /etc/shadow file is missing the salt (I believe that's the portion that's missing).
I'm not sure if the quoting is incorrect or not. But running this command manually on the server works fine, so I'm assuming it's the expansion that's messed up?
The command below is running inside a Bash script...
Command:
ssh user@$host "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios"
When I escape the double quotes inside the perl one-liner, I get this error:
Can't find string terminator '"' anywhere before EOF at -e line 1.
Usage: useradd [options] LOGIN
Any idea what I'm doing wrong here?
Instead of enclosing the entire command in double quotes and making sure to correctly escape everything in it, it is more robust to use single quotes and handle embedded single quotes as necessary. In fact, here there are no embedded single quotes to handle, only the embedded literal $ in $6$salt.
ssh "user#$host" 'useradd -d /usr/local/nagios -p $(perl -e "print crypt(q{mypassword}, q{\$6\$salt});") -g nagios nagios && chown -R nagios:nagios /usr/local/nagios'
echo "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios" > /tmp/tempcommand && scp /tmp/tempcommand root#server1:/tmp && ssh server1 "sh -x /tmp/tempcommand && finger nagios && rm /tmp/tempcommand"
In such cases I always prefer to have a file on the local/remote server from which I execute the command set; it saves a lot of quote-debugging time. What I am doing above is first saving the long one-liner to a file locally, "as is" and "as it works" locally, then copying it over with scp to the remote server and executing it there with the shell.
A more secure way (no need to copy over the file): again, save it locally and pass it to the remote bash with the -s option:
echo "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios" > /tmp/tempcommand && echo finger nagios >> /tmp/tempcommand && ssh server1 'bash -s' < /tmp/tempcommand

Update root crontab remotely for many systems by script

I am trying to update the crontab file of 1000+ systems using a for loop from jump host.
The below doesn't work.
echo -e 'pass365\!\n' | sudo -S echo 'hello' >> /var/spool/cron/root
-bash: /var/spool/cron/root: Permission denied
I do have (ALL) ALL in the sudoers file.
This is another solution:
echo 'pass365\!' | sudo -S bash -c 'echo "hello">> /var/spool/cron/root'
The below worked for me.
echo 'pass365\!' | sudo -S echo 'hello' | sudo -S tee -a /var/spool/cron/root > /dev/null
Problem 1: You are trying to send the password via echo to sudo.
Problem 2: You can't use shell redirection in a sudo command like that.
Between the two of these, consider setting up ssh public key authorization and doing
ssh root@host "echo 'hello' >> /var/spool/cron/root"
You may eventually get sudo working but it will be so much more pain than this.
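If you do take the ssh route, it is arguably safer to let crontab(1) rewrite root's crontab than to append to the spool file directly, since crontab validates the entry and makes sure cron picks up the change. A sketch, assuming key-based root access; hosts.txt and the job line are placeholders:
# -n keeps ssh from draining the host list via the loop's stdin.
while read -r host; do
    ssh -n "root@$host" 'crontab -l 2>/dev/null | { cat; echo "0 2 * * * /usr/local/bin/job.sh"; } | crontab -'
done < hosts.txt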

Script for directory mirroring with inotifywait and ssh

I have a script that tries to mirror a specific directory from a local server to a remote one. It looks like this:
inotifywait -mr --format '%w%f' -e close_write -e moved_to -e delete /mydir | \
while read FILECHANGE
do
    if [ -f $FILECHANGE ]
    then
        rsync --bwlimit=4096 --progress --relative -vrae 'ssh -p 22' $FILECHANGE $REMOTEHOST:/
    else
        ssh -p 22 $REMOTEHOST "rm $FILECHANGE"
    fi
done
In the case of several files being created at once, for example with a touch command:
touch 1 2 3
All three files are transferred correctly.
But if I delete several files at once:
rm -f 1 2 3
Only the first one is deleted.
If I replace the ssh command with just an echo $FILECHANGE, all three files are displayed in the console. So the problem seems to come from the ssh command, but I can't explain why, or how to solve it.
Does anyone have an idea?
Well, I found the issue: it seems the ssh command was eating the output of the inotifywait command when it ran. So, to prevent that, I added the 0<&- redirection after the ssh command to close stdin.
inotifywait -mr --format '%w%f' -e close_write -e moved_to -e delete /mydir | \
while read FILECHANGE
do
    if [ -f $FILECHANGE ]
    then
        rsync --bwlimit=4096 --progress --relative -vrae 'ssh -p 22' $FILECHANGE $REMOTEHOST:/
    else
        ssh -p 22 $REMOTEHOST "rm $FILECHANGE" 0<&-
    fi
done
Now it works.
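For what it's worth, ssh has a flag that does exactly this: -n redirects stdin from /dev/null, so the deletion branch could equally be written as:
# -n gives ssh /dev/null as stdin, so it cannot consume inotifywait's output.
ssh -n -p 22 $REMOTEHOST "rm $FILECHANGE"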

wget and run/remove bash script in one line

wget http://sitehere.com/install.sh -v -O install.sh; rm -rf install.sh
Does that run the script right after download, and then remove it?
I like to pipe it into sh. No need to create and remove a file locally.
wget http://sitehere.com/install.sh -O - | sh
I think you might need to actually execute it:
wget http://sitehere.com/install.sh -v -O install.sh; ./install.sh; rm -rf install.sh
Also, if you want a little more robustness, you can use && to separate commands, which will only attempt to execute the next command if the previous one succeeds:
wget http://sitehere.com/install.sh -v -O install.sh && ./install.sh; rm -rf install.sh
I think this is the best way to do it:
wget -Nnv http://sitehere.com/install.sh && bash install.sh; rm -f install.sh
Breakdown:
-N or --timestamping will only download the file if it is newer on the server
-nv or --no-verbose minimizes output, or -q / --quiet for no "wget" output at all
&& will only execute the second command if the first succeeds
use bash (or sh) to execute the script assuming it is a script (or shell script); no need to chmod +x
rm -f (or --force) the file regardless of what happens (even if it's not there)
It's not necessary to use the -O option with wget in this scenario; it is redundant unless you would like to use a different temporary file name than install.sh.
You are downloading in the first statement and removing in the last statement, but you never run the script. You need to add a line to execute the file:
./install.sh
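One caveat about that last suggestion: a freshly downloaded file has no execute bit set, so ./install.sh will fail with "Permission denied" unless you chmod +x it first (or invoke it via sh/bash as the other answers do):
# chmod +x is required before the file can be run directly with ./
wget http://sitehere.com/install.sh -v -O install.sh && chmod +x install.sh && ./install.sh; rm -f install.sh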
