Script for directory mirroring with inotifywait and ssh - bash

I have a script that tries to mirror a specific directory from a local server to a remote one. It looks like this:
inotifywait -mr --format '%w%f' -e close_write -e moved_to -e delete /mydir | \
while read -r FILECHANGE
do
    if [ -f "$FILECHANGE" ]
    then
        rsync --bwlimit=4096 --progress --relative -vrae 'ssh -p 22' "$FILECHANGE" "$REMOTEHOST":/
    else
        ssh -p 22 "$REMOTEHOST" "rm '$FILECHANGE'"
    fi
done
When multiple files are created at once, for example with a touch command:
touch 1 2 3
all three files are transferred correctly.
But if I delete several files at once:
rm -f 1 2 3
only the first one is deleted on the remote host.
If I replace the ssh command with just echo "$FILECHANGE", all three filenames are printed to the console. So the problem seems to come from the ssh command, but I can't explain why or how to solve it.
Does anyone have an idea?

Well, I found the issue: the ssh command was eating the rest of the inotifywait output when it ran. To prevent that, I added the 0<&- redirection after the ssh command to close its stdin.
inotifywait -mr --format '%w%f' -e close_write -e moved_to -e delete /mydir | \
while read -r FILECHANGE
do
    if [ -f "$FILECHANGE" ]
    then
        rsync --bwlimit=4096 --progress --relative -vrae 'ssh -p 22' "$FILECHANGE" "$REMOTEHOST":/
    else
        ssh -p 22 "$REMOTEHOST" "rm '$FILECHANGE'" 0<&-
    fi
done
Now it works.
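An equivalent fix, assuming a stock OpenSSH client, is ssh's -n flag, which redirects stdin from /dev/null so the remote command cannot drain the pipe feeding the loop:
ssh -n -p 22 "$REMOTEHOST" "rm '$FILECHANGE'"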

Related

Weird output observed on executing ssh commands remotely over ProxyCommand

Team, I have two steps to perform:
SCP a shell script file to a remote Ubuntu Linux machine.
Execute the uploaded file on the remote machine over an SSH session, using ProxyCommand because I have a bastion server in front.
Code:
scp -i /home/dtlu/.ssh/key.key -o "ProxyCommand ssh -i /home/dtlu/.ssh/key.key lab@api.dev.test.com -W %h:%p" /home/dtlu/backup/test.sh lab@$k8s_node_ip:/tmp/
ssh -o StrictHostKeyChecking=no -i /home/dtlu/.ssh/key.key -o 'ProxyCommand ssh -i /home/dtlu/.ssh/key.key -W %h:%p lab@api.dev.test.com' lab@$k8s_node_ip "uname -a; date;echo "Dummy123!" | sudo -S bash -c 'echo 127.0.1.1 \`hostname\` >> /etc/hosts'; cd /tmp; pwd; systemctl status cachefilesd | grep Active; ls -ltr /tmp/test.sh; echo "Dummy123!" | sudo -Sv && bash -s < test.sh"
Both calls above work fine. I am able to upload test.sh and it runs, but what bothers me is the weird output being thrown out during the process.
output:
/tmp. <<< expected
[sudo] password for lab: Showing one
Sent message type=method_call sender=n/a destination=org.freedesktop.DBus object=/org/freedesktop/DBus interface=org.freedesktop.DBus member=Hello cookie=1 reply_cookie=0 error=n/a
Root directory /run/log/journal added.
Considering /run/log/journal/df22e14b1f83428292fe17f518feaebb.
Directory /run/log/journal/df22e14b1f83428292fe17f518feaebb added.
File /run/log/journal/df22e14b1f83428292fe17f518feaebb/system.journal added.
So, I don't want /run/log/journal and the other lines that don't correspond to the commands in my script.
Consider adding -q to the scp and ssh commands to reduce the output they might produce. You can also redirect stderr and stdout to /dev/null as appropriate.
For example:
{
scp -q -i /home/dtlu/.ssh/key.key -o "ProxyCommand ssh -i /home/dtlu/.ssh/key.key lab@api.dev.test.com -W %h:%p" /home/dtlu/backup/test.sh lab@$k8s_node_ip:/tmp/
ssh -q -o StrictHostKeyChecking=no -i /home/dtlu/.ssh/key.key -o 'ProxyCommand ssh -i /home/dtlu/.ssh/key.key -W %h:%p lab@api.dev.test.com' lab@$k8s_node_ip "uname -a; date;echo "Dummy123!" | sudo -S bash -c 'echo 127.0.1.1 \`hostname\` >> /etc/hosts'; cd /tmp; pwd; systemctl status cachefilesd | grep Active; ls -ltr /tmp/test.sh; echo "Dummy123!" | sudo -Sv && bash -s < test.sh"
} >&/dev/null
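If you still want the remote commands' own stdout (pwd, ls -ltr, systemctl), a lighter-handed variant discards only stderr. A sketch; whether the journal chatter above actually arrives on stderr is an assumption to verify on your setup:
ssh -q -i /home/dtlu/.ssh/key.key -o 'ProxyCommand ssh -i /home/dtlu/.ssh/key.key -W %h:%p lab@api.dev.test.com' lab@$k8s_node_ip "cd /tmp; pwd; ls -ltr /tmp/test.sh" 2>/dev/null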

rsync with ssh without using credentials stored in ~/.ssh/config

I have a script that transfers files. Every time I run it, it needs to connect to a different host; that's why I pass the host as a parameter.
The script is executed as: ./transfer.sh <hostname>
#!/bin/bash -evx
SSH="ssh \
-o UseRoaming=no \
-o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
-i ~/.ssh/privateKey.pem \
-l ec2-user \
${1}"
files=(
file1
file2
)
files="${files[#]}"
# this works
$SSH
# this does not work
rsync -avzh --stats --progress $files -e $SSH:/home/ec2-user/
# also this does not work
rsync -avzh --stats --progress $files -e $SSH ec2-user@$1:/home/ec2-user/
I can properly connect with the ssh command stored in $SSH, but the rsync connection attempts fail because of the wrong key:
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.2]
What would be the correct syntax for the rsync connection?
Put set -x before the rsync line and watch how the arguments are expanded; I believe it will be wrong. You need to enclose the ssh command and its arguments (without the hostname) in quotes, otherwise the arguments get passed to the rsync command rather than to ssh.
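Roughly what the unquoted expansion does (illustrative, not verbatim set -x output):
# $SSH undergoes word splitting, so rsync parses something like:
#   rsync ... -e ssh -o UseRoaming=no -o UserKnownHostsFile=/dev/null ...
# only the first word, "ssh", binds to -e; the remaining words are consumed
# by rsync itself, so ssh never receives -i privateKey.pem and offers the
# default key instead, hence "Permission denied (publickey)".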
My solution after Jakuje pointed me in the right direction:
#!/bin/bash -evx
host=$1
SSH="ssh \
-o UseRoaming=no \
-o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
-i ~/.ssh/privateKey.pem \
-l ec2-user"
files=(
file1
file2
)
files="${files[#]}"
# transfer all in one rsync connection
rsync -avzh --stats --progress $files -e "$SSH" $host:/home/ec2-user/
# launch setup script
$SSH $host ./setup.sh
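A sketch of a slightly sturdier variant, keeping the options in a bash array so nothing depends on word splitting (same hypothetical key, user and files as above):
#!/bin/bash -evx
host=$1
ssh_opts=(
    -o UseRoaming=no
    -o UserKnownHostsFile=/dev/null
    -o StrictHostKeyChecking=no
    -i ~/.ssh/privateKey.pem
    -l ec2-user
)
files=(file1 file2)
# the array expands to one word per option for ssh, and joins cleanly for -e
rsync -avzh --stats --progress "${files[@]}" -e "ssh ${ssh_opts[*]}" "$host":/home/ec2-user/
ssh "${ssh_opts[@]}" "$host" ./setup.sh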

inotifywait adding multiple files at once

This shell script should add everything put into the folder to Transmission. With one folder it works fine, but when I add more than one folder at the same moment, it ignores the second one.
while true; do
    file=$(inotifywait -e moved_to --format %f /srv/watchfolderfilme)
    file="/srv/watchfolderfilme/$file"
    transmission-create -o "$file.torrent" -s 16384 -t http://0.0.0.0:6969/announce "$file"
    mv "$file" /srv/downloads
    chmod 0777 "$file.torrent"
    cp "$file.torrent" /srv/newtorrentfiles
    mv "$file.torrent" /srv/watchfoldertorrents
done
I rethought my solution and found a better one that works fine for multiple adds: the original loop restarts inotifywait after every event, so anything arriving while a torrent is being created is missed, whereas a single inotifywait in monitor mode (-m) piped into one read loop catches every event.
inotifywait -m /srv/watchfolderfilme -e create -e moved_to |
while read -r path action file; do
    # echo "The file '$file' appeared in directory '$path' via '$action'"
    chmod 0777 "$path$file"
    transmission-create -o "/srv/newtorrentfiles/$file.torrent" -s 16384 -t http://0.0.0.0:6969/announce "$path$file"
    mv "$path$file" /srv/downloads
    chmod 0777 "/srv/newtorrentfiles/$file.torrent"
    cp "/srv/newtorrentfiles/$file.torrent" /srv/watchfoldertorrents
done
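Since the loop never uses $action, a sketch of a simpler variant: have inotifywait emit the full path with --format '%w%f' so nothing needs reassembling, which also keeps filenames with unusual whitespace intact:
inotifywait -m /srv/watchfolderfilme -e create -e moved_to --format '%w%f' |
while IFS= read -r filepath; do
    name=${filepath##*/}   # bare filename for the .torrent outputs
    chmod 0777 "$filepath"
    transmission-create -o "/srv/newtorrentfiles/$name.torrent" -s 16384 -t http://0.0.0.0:6969/announce "$filepath"
    mv "$filepath" /srv/downloads
    chmod 0777 "/srv/newtorrentfiles/$name.torrent"
    cp "/srv/newtorrentfiles/$name.torrent" /srv/watchfoldertorrents
done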

How can I know ssh is disconnected and retry with a bash script

I'm using reverse ssh to connect to a remote client. The operator runs the reverse tunnel once and then leaves the client system.
How can I write a bash script that reconnects to the server when the reverse ssh is disconnected?
Use autossh. Autossh "automatically restart[s] SSH sessions and tunnels":
sudo apt-get install autossh
I use autossh to keep open a reverse tunnel that I depend on. It works very well, even with long periods of lost connection.
Here is the script I use to create the tunnel:
#!/bin/bash
AUTOSSH_GATETIME=0
export AUTOSSH_GATETIME
autossh -f -N -R 8022:localhost:22 username@host -o "ServerAliveInterval 45" -o "ServerAliveCountMax 2"
I execute this script at boot with this cronjob:
@reboot /home/scripts/./persistent-tunnel.sh
If you simply want to retry a command until it succeeds, you can use this pattern:
while ! ssh [...]
do
echo "Command failed, retrying..." >&2
done
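A sketch of the same pattern with a pause between attempts, so a dead server isn't hammered (hypothetical tunnel and host; ConnectTimeout stops a hung attempt from blocking the loop forever):
until ssh -o ConnectTimeout=10 -N -R 8022:localhost:22 username@host; do
    echo "Connection lost, retrying in 5 seconds..." >&2
    sleep 5
done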
I have a slightly different method. My method always tries to reconnect you after a dirty disconnection ('~.' or 'Connection closed by remote host.'), but if you disconnect with CTRL+D or with 'exit' it just disconnects and shows you some info about the connection.
#!/bin/bash
if [ -z "$1" ]
then
    echo "Please also provide ssh connection details."
    exit 1
fi
retries=0
repeat=true
today=$(date)
while "$repeat"
do
    ((retries+=1)) &&
        echo "Try number $retries..." &&
        today=$(date) &&
        ssh "$@" &&
        repeat=false
    sleep 5
done
echo """
Disconnected sshx after a successful login.
Total number of tries = $retries
Connected at:
$today
"""
You might want to take a look at the ssh options ServerAliveInterval, ServerAliveCountMax and TCPKeepAlive, because sometimes your line dies without it being obvious. Let me demonstrate:
#!/bin/sh
while true; do
ssh -T user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
pkill -f "^sshd:\ user\ \ \ \ $" # needs to be edited for nearly every case
sleep 2
ssh -T -N user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
-o BatchMode=yes \
-o ExitOnForwardFailure=yes \
-o ServerAliveCountMax=1 \
-o ServerAliveInterval=60 \
-o LocalForward=127.0.0.1:2501=127.0.0.1:25 \
-o RemoteForward=127.0.0.1:2501=127.0.0.1:25
sleep 60
done
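With ServerAliveInterval=60 and ServerAliveCountMax=1, the client notices a dead line roughly a minute after it goes quiet. The same keep-alive settings can also live in ~/.ssh/config instead of on the command line; a sketch with a hypothetical host alias:
Host tunnel-host
    HostName host
    User user
    ServerAliveInterval 60
    ServerAliveCountMax 1
    ExitOnForwardFailure yes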
You can use netstat -ntp | grep ":22" or ss -ntp | grep ":22" to see established connections to the ssh port, then grep for the IP address you're looking for. If you don't find a connection, reconnect the tunnel.
Use autossh if it works on your version of Linux. It did not on mine, an outdated Linux distribution on a custom NAS box.
The alternative is a simple bash script in crontab like this:
maintain_reverse_ssh_tunnel.sh
if ! netstat -planet | grep myserver_ip_or_name | grep ESTABLISHED > /dev/null; then
    echo "REVERSE SSH DOWN - Restarting the tunnels"
    ssh -fN -R 32999:localhost:22 -R 28080:localhost:80 myusername@myserver_ip_or_name
fi
Replace myusername and myserver_ip_or_name with those of your user and server.
Then add an entry to crontab by typing crontab -e and adding the following line, which runs the check at one minute past every hour:
1 * * * * /path_to_my_script/maintain_reverse_ssh_tunnel.sh
Make sure to have the execute permissions on the script:
chmod 755 maintain_reverse_ssh_tunnel.sh

Why does sudo change a blocking command to a non-blocking command when used in a while-loop?

Or: how do I prevent a sudo'ed rsync from firing infinitely in a while-loop? Because that is what seems to be happening, and I don't get it.
I am trying to set up a watch for syncing modified files, and it works fine. However, once I introduce the required sudo to the rsync command, a single inotify event causes the rsync command to fire indefinitely.
#!/usr/bin/env bash
inotifywait -m -r --format '%w%f' -e modify -e move -e create -e delete /var/test | while read -r line; do
    sudo rsync -ah --del --progress --stats --update "$line" "/home/test/"
done
When you edit a file, rsync goes into rapid-fire mode. But drop the sudo (and use folders you have permission for, of course) and the script works as expected.
Why is this?
How do I make this work correctly with the sudo command?
I have the answer; I found it by experimenting. But I have no idea why it works, so please, someone tell me why sudo in this loop breaks the expected blocking behavior.
Since sudo breaks the script, we can distance ourselves from it by using a wrapper:
This is correct:
inotifywait -m -r --format '%w%f' -e modify /var/test | while read -r line; do
    # pass $line in as a positional parameter; inside the single quotes the
    # inner sh would otherwise see an unset $line
    sh -c 'sudo rsync -ah "$1" "/home/test/"' sh "$line"
done
The weird thing is: pull the sudo out of the wrapper and the old faulty behavior is back. Very strange.
This is wrong:
inotifywait -m -r --format '%w%f' -e modify /var/test | while read -r line; do
    sudo sh -c 'rsync -ah "$1" "/home/test/"' sh "$line"
done
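In light of the first question above, a plausible explanation (an assumption, not something verified here) is that sudo, or the process it spawns, reads from the pipe feeding the loop and drains inotifywait's output. If that is the cause, detaching stdin should work without any wrapper; a sketch:
inotifywait -m -r --format '%w%f' -e modify /var/test | while read -r line; do
    # give the sudo'ed rsync its own stdin so it cannot consume the pipe
    sudo rsync -ah "$line" "/home/test/" < /dev/null
done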
