Recursively list all files on sftp - bash

I'd like to write a bash script that recursively lists all files (with full paths) on an SFTP server so I can work with the paths locally afterwards (SFTP is only needed to obtain the paths). Unfortunately, "ls -R" doesn't work there.
Any idea how to do that, ideally with a basic proof of concept, would be really appreciated.
Available commands:
bye Quit sftp
cd path Change remote directory to 'path'
chgrp grp path Change group of file 'path' to 'grp'
chmod mode path Change permissions of file 'path' to 'mode'
chown own path Change owner of file 'path' to 'own'
df [-hi] [path] Display statistics for current directory or
filesystem containing 'path'
exit Quit sftp
get [-Ppr] remote [local] Download file
help Display this help text
lcd path Change local directory to 'path'
lls [ls-options [path]] Display local directory listing
lmkdir path Create local directory
ln [-s] oldpath newpath Link remote file (-s for symlink)
lpwd Print local working directory
ls [-1afhlnrSt] [path] Display remote directory listing
lumask umask Set local umask to 'umask'
mkdir path Create remote directory
progress Toggle display of progress meter
put [-Ppr] local [remote] Upload file
pwd Display remote working directory
quit Quit sftp
rename oldpath newpath Rename remote file
rm path Delete remote file
rmdir path Remove remote directory
symlink oldpath newpath Symlink remote file
version Show SFTP version
!command Execute 'command' in local shell
! Escape to local shell
? Synonym for help

This recursive script does the job:
#!/bin/bash
#
URL=user@XXX.XXX.XXX.XXX   # SFTP user and host
TMPFILE=/tmp/ls.sftp       # batch file holding the single "ls -1l" command
echo 'ls -1l' > "$TMPFILE"
function handle_dir {
    echo "====== $1 ========="
    local dir=$1
    # Run the batched "ls -1l" in the given remote directory; tail skips the echoed command line.
    sftp -b "$TMPFILE" "$URL:$dir" | tail -n +2 | while read -r info; do
        echo "$info"
        # A leading 'd' in the long listing marks a directory, so recurse into it.
        if grep -q '^d' <<< "$info"; then
            info=$(echo $info)                     # deliberately unquoted: collapses repeated spaces
            subdir=$(cut -d ' ' -f9- <<< "$info")  # the name starts at field 9 of the long listing
            handle_dir "$dir/$subdir"
        fi
    done
}
handle_dir "."
Fill URL with your SFTP user name and server address.
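A quick usage sketch, assuming the script above is saved as list_sftp.sh (a file name of my choosing): capture the whole listing locally once, then work with the paths offline.
chmod +x list_sftp.sh
./list_sftp.sh > remote_listing.txt   # "====== <dir> =========" headers mark each remote directory visited
grep '^======' remote_listing.txt     # show just the directory paths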

Scanning the whole Internet, I found a great tool: sshfs. Mount the remote directory tree through SSHFS. SSHFS is a remote filesystem that uses the SFTP protocol to access remote files.
Once you've mounted the filesystem, you can use all the usual commands without having to care that the files are actually remote.
sshfs helped me a lot and may help you, too.
mkdir localdir
sshfs user@host:/dir localdir
cd localdir
find . -name '*'
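As a small follow-up sketch using the mount from above: once the share is mounted, find can write the full remote paths to a local file, and the mount can be released when you are done.
find . -type f > ~/remote_paths.txt   # paths of every remote file, relative to the mount point
cd ..
fusermount -u localdir                # Linux; on macOS use: umount localdir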

Related

Create local directory when dealing with remote servers

I am copying files from a FTP server to the file system of my cluster. The script is executed on the cluster.
#!/bin/sh
HOST='0.0.0.0' # Host IP
ftp -i -nv $HOST <<END_SCRIPT
quote USER $FTPUSER
quote PASS $FTPPASS
binary
cd /FTPDIR/path/to/data/
mkdir -p /home/admin/path/to/data/
lcd /home/admin/path/to/data/
I added the mkdir -p /home/admin/path/to/data/ command in order to create the directory in my cluster in case it does not exist. However, the script created a directory named -p in the FTP /FTPDIR/path/to/data/ dir.
What would be the command to create it in the cluster?
The mkdir you are calling here is not the mkdir that exists on your system (or on the target system either). It is the mkdir that the FTP client provides, and that mkdir does not have a -p option.
You have to create each directory one by one; there is no shortcut.
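One possible workaround, sketched under the assumption that the directory only needs to exist on the cluster where the script runs, using the paths from the question: create it with the shell's own mkdir -p before starting ftp, so the FTP session never has to create it.
#!/bin/sh
HOST='0.0.0.0' # Host IP
mkdir -p /home/admin/path/to/data/   # the local shell's mkdir understands -p
ftp -i -nv $HOST <<END_SCRIPT
quote USER $FTPUSER
quote PASS $FTPPASS
binary
cd /FTPDIR/path/to/data/
lcd /home/admin/path/to/data/
END_SCRIPT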

Bash Script that works with Jenkins to move into a specific directory and remove the oldest folder in that directory

I have a devbox that I ssh into as the Jenkins user, and as the title says, I want to run a bash script that will move to a specific directory and remove the oldest directory. I know the location of the specific directory.
For example,
ssh server [move/find/whatever into home/deploy and find the oldest directory in deploy and delete it and everything inside it]
Ideally this is a one-liner. I'm not sure how to run multiple lines while sshing as part of a Jenkins task. I read some Stack Overflow posts on them, but don't understand it, specifically 'here documents'.
The file structure would look like home/deploy, and the deploy directory contains 3 folders: oldest, new, and newest. It should pick out the oldest one (because of its creation date) and rm -rf it.
I know this task removes the oldest directory:
rm -R $(ls -lt | grep '^d' | tail -1 | tr " " "\n" | tail -1)
Is there any way I can adjust the above code to remove a directory inside of a directory that I know?
You could pass a script to ssh. Save the script below as delete_oldest.sh:
#!/bin/bash
cd ~/deploy
rm -R "$( ls -td */ | tail -n 1 )"
and pass it to ssh like below:
ssh server -your-arguments-here < delete_oldest.sh
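Since the question asked for a one-liner, a sketch of the same command inlined into the ssh call (no separate script file) could look like this:
ssh server 'cd ~/deploy && rm -R "$(ls -td */ | tail -n 1)"'
The single quotes stop the command substitution from running locally, so ls and rm execute on the remote machine.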
Edit:
If you wish to place the script on the remote machine instead, first copy it from the local machine into your home folder on the remote machine using scp, like this:
scp delete_oldest.sh your_user_name@remotemachine:~
Then you can do something like:
ssh your_user_name@remotemachine './delete_oldest.sh'
'./delete_oldest.sh' assumes that you're currently in your home folder on the remote machine, which will be the case when you use ssh, as the default landing folder is always the home folder.
Please try it with a test folder before you proceed.

rsync files from remote server to local, and also create local directories

I have a server that has files on /files/category/specific/*.
On my local machine I have a ~/files directory. What I want to do is pull /files/category/specific/* onto my local machine into ~/files/something/whatever. But something/whatever doesn't exist yet; I want rsync to create these local directories as well.
How can I do this with rsync?
Try this:
rsync -a --relative user@remote:/files/category/specific/* ~/files
Or you can use ssh to create the directory beforehand, or chain a mkdir into the rsync call like this:
rsync -a --rsync-path="mkdir -p ~/files/whatever/whatever/ && rsync" user@remote:/files/category/specific/* $source
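For creating the missing local directories specifically, a simpler sketch (something/whatever is the hypothetical path from the question) is to make the local target first and then rsync into it:
mkdir -p ~/files/something/whatever
rsync -a user@remote:/files/category/specific/ ~/files/something/whatever/
The trailing slash on the source copies the directory's contents rather than the directory itself.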

Bash script to copy source catalog to two destination catalogs, verify and delete source if successful

OS: OS X Mountain Lion
I am trying to write a script that does the following
Check if file1 exists on destination 1 (bitcasa)
if it exists, copy the source folder to destination 1
if the file does not exist, find the bitcasa process and kill it, then wait 60 sec and start bitcasa again.
try again (loop?) #bitcasa sometimes stops working and has to be restarted.
Check if file2 exists on destination 2 (nfs share)
if it exists, copy the source folder to destination 2
if the file does not exist, try to mount the nfs share.
try again (loop?)
verify copied files
if the files copied successfully, delete the source files
I only want the script to try a few times; if it can't ping the NAS host it should give up and try again the next time the script runs. I want to run the script every 2h; crontab seems to have been removed in Mountain Lion.
When I write this down I realize it is a bit more complicated than I first thought.
First, regarding mounting an NFS share: in OS X, if you eject a mounted NFS share, the folder in /Volumes gets removed. What is the best way to make sure an NFS share is always mounted when the NAS is available? This might be handled outside the script?
If I manually mount the NFS share I will need to create /Volumes/media, and then mounting the share from the GUI will use /Volumes/media-1/ since /Volumes/media will already exist.
Regarding killing a process by name (since I can't know the PID), I tried a Linux command I found:
kill `ps -ef | grep bitcasa | grep -v grep | awk '{print $2}'`
but this did not work.
I have no idea how to check if all files were successfully copied, maybe rsync can take care of this?
I have started with this (not tested):
#check if bitcasa is running (if file exists)
if [ -f "/Volumes/Bitcasa Infinite Drive/file.ext" ]
then
    rsync -avz /Users/username/source "/Volumes/Bitcasa Infinite Drive/destination/"
else
    : #Bitcasa might have stopped; check if the process is running, kill it if it is, then start bitcasa
fi
#Check if nfs share is mounted (if file exists)
if [ -f /Volumes/media/file.ext ]
then
    rsync -avz /Users/username/source /Volumes/media/
else
    : #nfs share (192.168.1.106:/media/) needs to be mounted on /Volumes/media
fi
I will do some more work on it myself, but I know I will need help. Or am I making this way too complicated? Maybe a backup program can do this?
For your kill ... ps problem, you can use killall, which kills all processes with a given name:
killall bitcasa
Or see man ps and use a user-defined output format, which simplifies the selection:
ps -o pid,comm | awk '/bitcasa/ { print $1; }' | xargs kill
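A sketch of the restart step described in the question (the 60-second wait is the question's own figure; the relaunch command is a placeholder, since the real app name or path may differ):
killall bitcasa 2>/dev/null   # stop any running instance; ignore the error if none is running
sleep 60                      # wait as described in the question
open -a Bitcasa               # hypothetical relaunch on OS X; adjust to the actual app name/path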
For the NAS, if you can log into it and install rsync and ssh (or have them already installed), you don't need to mount anything. You can just give 192.168.1.106:/media/ as the destination to rsync and rsync will do everything necessary.
In any case, first check and mount if necessary, and only start rsync once everything is set up properly, not the other way round:
if [ ! -f "/Volumes/Bitcasa Infinite Drive/file.ext" ]; then
# kill bitcasa, restart bitcasa
fi
rsync -avz /Users/username/source "/Volumes/Bitcasa Infinite Drive/destination/"
Same for the NAS:
if [ ! -f "/Volumes/media/file.ext" ]; then
# mount nas nfs share
fi
rsync -avz /Users/username/source "/Volumes/media/"
Or, if you have rsync and ssh on your NAS, just:
rsync -avz /Users/username/source 192.168.1.106:/media/
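To cover the "try a few times" and "delete source if successful" steps from the question, here is a minimal sketch under the same assumptions as above (paths and the retry count are illustrative); it relies on rsync's exit status, since rsync only removes source files it has successfully duplicated on the receiver:
SRC=/Users/username/source
NAS=192.168.1.106
ping -c 1 "$NAS" > /dev/null || exit 0   # NAS unreachable: give up until the next scheduled run
for attempt in 1 2 3; do                 # only try a few times
    if rsync -avz --remove-source-files "$SRC" "$NAS:/media/"; then
        break                            # copy reported clean, copied source files already removed
    fi
    sleep 60                             # wait a bit before retrying
done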

Bash script to scp newest file in a directory on a remote server

Ok so I kinda know how to do this locally with a find then cp command, but don't know how to do the same remotely with scp.
So I know this:
scp -vp me@server:/target/location/ /destination/dir/.
That target directory is going to be full of database backups, how can I tell it to find the latest backup, and scp that locally?
remote_dir=/what/ever
dst=remote-system.host.name.com
scp $dst:`ssh $dst ls -1td $remote_dir/\* | head -1` /tmp/lastmod
Write a script on the remote side that uses find to find it and then cat to send it to stdout, then run:
ssh me@server runscript.sh > localcopy
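A minimal sketch of such a remote script (runscript.sh is the name used above; the directory and the use of modification time to mean "latest" are assumptions):
#!/bin/sh
# Print the newest file in the backup directory to stdout.
cd /target/location || exit 1
latest=$(ls -t | head -n 1)   # newest first by modification time
cat "$latest"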
