Copy file from remote machine when it changes - bash

I'm using this script to detect when a file changes on a remote machine and copy it when it does:
interval=5
target_file=/home/sergioro/file
last_modified=0
while :; do
    # ask the remote host for the file's mtime in epoch seconds
    last_modified_new=$(ssh jh 2>/dev/null stat -c %Y "$target_file")
    # copy only when the remote mtime is newer than the last one we saw
    # (default to 0 so a failed ssh call doesn't break the comparison)
    if [ "${last_modified_new:-0}" -gt "$last_modified" ]; then
        last_modified=$last_modified_new
        rsync -az jh:"$target_file" log/ 2>/dev/null
    fi
    sleep "$interval"
done &
This works, but I would prefer a persistent SSH connection instead of making a new SSH call on every poll. Is there a tool or SSH option that keeps the connection open and copies the file when it changes? Thanks.
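One option worth trying (my suggestion, not part of the original question) is OpenSSH connection multiplexing: a single master connection is opened once, and every later ssh or rsync call to the same host reuses it, so the polling loop above works unchanged but without per-call connection setup. A sketch for ~/.ssh/config:
# Reuse one SSH session for all connections to the host "jh".
Host jh
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    # keep the master connection open for 10 minutes after last use
    ControlPersist 10m
With this in place, the first ssh jh establishes the master connection and each subsequent ssh or rsync in the loop is tunnelled through it.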

Related

Retrieve files from remote server using scp in crontab

I am trying to retrieve files from a remote server to my local PC using a cron job. However, my script has to wait until the files are available on the remote server. From pieces of code I gathered here and there, I came up with the script below:
#!/bin/bash
year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)
hour="00"
ssh-keygen # I suspect this line and the one below should be done once and not in the script.
ssh-copy-id _lms_2023
cd ${HOME}/ModelOutput_LMS/WRF_OUTPUT/tmp
cd ${HOME}/ModelOutput_LMS/WRF_OUTPUT/tmp
goto GOTO_1
if ssh lmshpc@41.203.191.69 "test -e /${HOME}/DA/OUTPUT/"$year$month$day$hour"/noda/graphics/*.png"; then
scp lmshpc@41.203.191.69:${HOME}/DA/OUTPUT/"$year$month$day$hour"/noda/graphics/*.png .
if [ $? -eq 0 ]; then
exit
fi
else
sleep 30
GOTO_1
fi
I want the script to keep checking until the files are available and downloaded. The above script gives the errors below:
/usr/bin/ssh-copy-id: ERROR: ssh: Could not resolve hostname xxxxxxxxxx: Name or service not known
./cplocalfromRemote2.sh: line 14: goto: command not found
host's password:
Any assistance will be appreciated.
A simpler way to do this is to use rsync instead of scp.
rsync compares source and destination automatically and copies only the files that are new, modified, or missing from the destination.
You can use rsync in place of scp and run the script from cron every minute, so it picks up files as they are added to the remote server.
The script:
#!/bin/bash
year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)
hour="00"
cd ${HOME}/ModelOutput_LMS/WRF_OUTPUT/tmp
# quote the remote path so the *.png glob is expanded on the remote side;
# note that ${HOME} expands to your *local* home directory here
sshpass -p'your_password' rsync -avh --progress "lmshpc@41.203.191.69:${HOME}/DA/OUTPUT/${year}${month}${day}${hour}/noda/graphics/*.png" .
Set up a cronjob that runs the script every minute.
* * * * * /path/to/above/script.sh
EDIT: I have changed the script to add host password for rsync in non-interactive mode.
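A side note beyond the original answer: embedding the password with sshpass is fragile and insecure. If you can log in to the remote host once, key-based authentication lets you drop sshpass entirely, which is presumably what the ssh-keygen/ssh-copy-id lines in the question were aiming at. Run these once, interactively, not inside the script:
ssh-keygen -t ed25519               # generate a key pair; accept the defaults
ssh-copy-id lmshpc@41.203.191.69    # install the public key on the remote host
After that the rsync line runs non-interactively without sshpass.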
Hope this helps!

SFTP remove files from remote with local script

I have a local script, and I want to remove a file on the remote host from it.
I tried the approach below, but it does not work. I do have permission to delete files on the remote host.
#!/bin/sh
USER=test_user
HOST=xx.xx.xxx.xx
REMOTE_DIR=/somedirectoryinremote
while true
do
    sftp $USER@$HOST:$REMOTE_DIR "rm -f $REMOTE_DIR/*.txt"
    # sftp rm $USER@$HOST:$REMOTE_DIR/*.txt <- tried this but does not work too.
    sleep 1800
done
Try:
echo "rm $REMOTE_DIR/*.txt" | sftp $USER@$HOST:$REMOTE_DIR
If you can use sftp, can you not run ssh? ssh would be simpler:
ssh $USER@$HOST "rm -f $REMOTE_DIR/*.txt"
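Another variant of the sftp route (my addition, assuming OpenSSH's sftp client and key-based authentication, since batch mode refuses to prompt for a password): -b - reads batch commands from stdin and makes sftp exit non-zero if a command fails, which is handy inside a loop:
echo "rm $REMOTE_DIR/*.txt" | sftp -b - $USER@$HOST:$REMOTE_DIR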

Unix commands not working on remote server when executed from local server?

I have a file test2.txt which contains the names of files present on a remote server. The file and the script below are both on my local server.
I want to execute the script on the local server to find out whether the files listed in test2.txt exist on the remote server. If they exist, I want to move them into a specific directory on the remote server itself. However, the commands do not run on the remote server once the connection is established, i.e. everything after
for i in $(ssh -q -o "StrictHostKeyChecking no" username@hostname cat apps/dir/test2.txt);
Below is the snippet:
#!/bin/bash
for i in $(ssh -q -o "StrictHostKeyChecking no" username@hostname cat apps/dir/test2.txt);
do
if [ ! -d $BACKUP ]; then
mkdir BACKUP
fi
file_exists=find /apps/dir/ -type f -name "$i" -print
cp apps/dir/$file_exists BACKUP
echo "The path is: $file_exists"
done
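A sketch of one possible fix (my addition, untested), assuming the checks and moves are meant to happen on the remote host: feed the whole loop to a single ssh invocation so that everything inside it runs remotely. The paths are taken from the question; the BACKUP location is hypothetical, and mv replaces cp to match the stated goal of moving the files:
#!/bin/bash
ssh -q -o "StrictHostKeyChecking no" username@hostname 'bash -s' <<'EOF'
BACKUP=/apps/dir/BACKUP   # hypothetical destination directory; adjust as needed
mkdir -p "$BACKUP"
while IFS= read -r name; do
    # look the listed file up on the remote filesystem (assumes one match per name)
    path=$(find /apps/dir/ -type f -name "$name" -print)
    if [ -n "$path" ]; then
        echo "The path is: $path"
        mv "$path" "$BACKUP"/
    fi
done < apps/dir/test2.txt
EOF
The quoted 'EOF' stops the local shell from expanding anything, so $name and $path are evaluated on the remote side.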

Getting continue behavior (not redownloading files already present) using lftp

So, I have a script which downloads stuff from a seedbox. It works great for new files which appear on the remote server and are then mirrored on my local server. The problem is that when I, for example, remove unnecessary files locally, running the script again re-downloads them. I tried going through the man page for mirror, but it wasn't helpful. Here is the script which mirrors the files:
#!/bin/bash
login=XXXX
pass=XXXXXX
host=XXXXX
remote_dir=/files/
local_dir=/home/XXX/XXX
trap "rm -f /tmp/seedroots.lock" SIGINT SIGTERM
if [ -e /tmp/seedroots.lock ]; then
echo "Synctorrent is running already."
exit 1
else
touch /tmp/seedroots.lock
lftp -p 21 -u $login,$pass $host << EOF
set ftp:ssl-allow no
set mirror:use-pget-n 5
mirror -c -P5 --log=synctorrents.log $remote_dir $local_dir
EOF
rm -f /tmp/seedroots.lock
exit 0
fi
Is there an option for mirror which I am missing that doesn't re-download the locally deleted file(s) again?
The mirror command in lftp has a --continue flag which will result in the behavior you want.
You could give my version of your script a try (not tested):
#!/bin/bash
login=XXXX
pass=XXXXXX
host=XXXXX
remote_dir=/files/
local_dir=/home/XXX/XXX
files=$local_dir/*
if [[ -d /tmp/seedroots.lock ]]; then
    echo "Synctorrent is running already."
    exit 1
else
    mkdir /tmp/seedroots.lock
    # set the trap only once we own the lock, so exiting because another
    # instance is running does not remove that instance's lock
    trap "rmdir /tmp/seedroots.lock" 0 1 2 3 15
    lftp -p 21 -u $login,$pass $host << EOF
set ftp:ssl-allow no
set mirror:use-pget-n 5
mget $files
EOF
fi
What it does:
It builds a local list of files and then mgets all of those files from the ftp server via the variable $files.
I replaced the lock file with a directory: search the web about atomicity. Checking for and then creating a file is two steps, whereas mkdir tests and creates in a single atomic operation.
The trap runs on normal exit and on the usual signals, and is now installed only after the lock is acquired.
If you are using bash, [[ ]] tests are more powerful.
Indentation is not just an option ;)
If you are just leeching files (not seeding), you can use lftp's mirror with the --Remove-source-files option to delete files at the source after transfer, so nothing is left behind to be re-downloaded.
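Adapted to the script above, that could look like this (my adaptation, untested):
lftp -p 21 -u $login,$pass $host << EOF
set ftp:ssl-allow no
set mirror:use-pget-n 5
mirror --Remove-source-files --log=synctorrents.log $remote_dir $local_dir
EOF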

bash: check if remote file exists using scp

I am writing a bash script to copy a file from a remote server, to my local machine. I need to check to see if the file is available, so I can take an alternative action if it is not there.
I know how to test if a local file exists, however, using scp complicates things a bit. Common sense tells me that one way would be to try to scp the file anyhow, and check the return code from the scp command. Is this the correct way to go about it?
If yes, how do I test the return code from the scp invocation?
Using ssh plus some shell code embedded in the command line; use this method when you need to make a decision before the file transfer fails:
ssh remote-host 'sh -c "if [ -f ~/myfile ] ; then gzip -c ~/myfile ; fi" ' | gzip -dc > /tmp/pkparse.py
If you want to transfer directories, you may want to tar them first.
If you want to use scp, you can check the return code like this:
if scp remote-host:~/myfile ./ >&/dev/null ; then echo "transfer OK" ; else echo "transfer failed" ; fi
It really depends on when it is important for you to know whether the file is there: before the transfer starts (use ssh + sh) or after it has finished.
Well, since you can use scp, you can try using ssh to check whether the file is there before proceeding.
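For instance, a minimal sketch of that check (my illustration, reusing the remote-host and ~/myfile names from the answer above), relying on ssh propagating the exit status of the remote test:
if ssh remote-host 'test -f ~/myfile'; then
    scp remote-host:~/myfile ./
else
    echo "remote file missing, taking the alternative action"
fi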
