I'm trying to rsync a large directory of around 200 GB from a server to my local external hard drive. I can ssh onto the server and see the directory fine. I can also cd into the external hard drive fine. When I try to rsync the files across, I don't get an error, but the last line of the rsync output is 'total size is 0 speedup is 0.00', and there are no files in the destination directory.
Here's how I ssh onto the server successfully:
ssh skgtmdf@live.rd.ucl.ac.uk
Here's my rsync command:
rsync -avrt -progress -e "ssh skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
And here's the rsync output:
sending incremental file list
drwxrwxrwx 65,536 2022/08/10 21:32:06 .
sent 57 bytes received 64 bytes 242.00 bytes/sec
total size is 0 speedup is 0.00
What am I doing wrong?
The way you have it quoted, the source path is part of the remote-shell option (the -e value) rather than a separate argument, as it should be.
rsync -avrt -progress -e "ssh skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is all part of the `-e` option value
This means rsync doesn't see that as a sync source at all, but just as part of the command it'll use to connect to the remote system. It also explains why there's no error: rsync was left with a single path argument (your destination), and when given one path and no destination, rsync simply lists the files in an ls -l style format; the listing of your empty destination directory is exactly what you see in the output above. In any case, the fix is simple: don't include ssh with the source path.
As I noticed later (see comments), the --progress option needs a double dash or it'll be wildly misparsed: with a single dash, rsync reads -progress as the bundled short options -p -r -o -g -r -e followed by the value ss. Fixing both of these things gives:
rsync -avrt --progress -e ssh "skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
In fact, since ssh is the default command for making a remote connection, you can leave off -e ssh entirely:
rsync -avrt --progress "skgtmdf#live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
rsync -azve ssh user@host:/src/ target/
Normally you don't need to wrap the -e value in quotes; the quoting is probably mangling the connection string.
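Quotes around the -e value are only needed when the remote-shell command itself contains spaces, for example to pass a non-default port (the port here is made up):
rsync -az -e "ssh -p 2222" user@host:/src/ target/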
Related
I am intending to send a huge file, around 1+ GB, over to the remote side using SFTP. It seems to work fine in interactive mode (when I run sftp user@xx.xx.xx.xx, enter the password manually, and then key in the put command), but when I run it from a shell script it always times out.
I have set the client and server ClientAliveTimeout settings in /etc/ssh/sshd_config, but it still occurs.
Below is the Linux script code:
sshpass -p "password" sftp user@xx.xx.xx.xx << END
put <local file path> <remote file path>
exit
END
The transfer takes about 10 minutes in interactive mode; when run from the script, the file is incomplete, judging by its size.
Update: the current transfer in interactive mode shows the small files went through, but the big file stalled halfway.
I prefer lftp for such things:
lftp -u user,passwd domain.tld -e "put /path/file; quit"
lftp can handle sftp too:
open sftp://username:password@server.address.com
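Putting the two together, a non-interactive sftp upload in a single lftp call could look like this (credentials and paths are placeholders):
lftp -u username,password sftp://server.address.com -e "put /path/file; bye"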
I'm trying to automate downloading the subdirectories of a directory. However, when executing the script, the last one or two directories cannot be found by the script ("No such file or directory"). All the others are fine and can be downloaded. This happens for every directory I've tried it on, which is strange to me. Why would it always fail to find the last two?
Can anyone help with this? Is it due to the loop? I've tried changing it to loop over only the last ones, and that doesn't help. Or maybe it's due to the conversion array=($l)?
Here's my script:
dirServer=/dir/to/location/in/server
dirLocal=/dir/to/location/in/local/pc
l=`ssh -t username@server 'ls' ${dirServer}`
#array of folders that should be copied to local machine
array=($l)
for folder in ${array[@]}
do
echo ${folder}
#if directory doesn't exist, create it
mkdir -p ${dirLocal}${folder}
scp -r username@server:${dirServer}${folder}/analysis/ ${dirLocal}${folder}
done
Try changing the following line to include "-o LogLevel=QUIET":
l=`ssh -o LogLevel=QUIET -t username@server 'ls' ${dirServer}`
Explanation:
That text is coming from SSH itself. You see it because you gave the -t switch, which forces SSH to allocate a pseudo-terminal for the connection. Traditionally, SSH displays its closing message (e.g. "Connection to server closed.") to make it clear that you are no longer interacting with the shell on the remote host, which normally only matters when SSH has a pseudo-terminal allocated. Because your script captures everything with backticks, those words get appended to l and become bogus entries in the array, which is why the loop's last one or two "directories" can't be found.
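If the script doesn't actually need a pseudo-terminal (it only captures the output of ls), another option is to drop -t altogether, so the message never appears. A minimal sketch, using the same placeholder names as the question:
l=$(ssh username@server ls "${dirServer}")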
I've been struggling with this for a day now and can't figure out what I'm doing wrong. I haven't been able to find anything with Google, so hopefully someone here can help.
I'm trying to use rsync to sync a folder between my Ubuntu server and my Windows machine using Cygwin. I issue:
$ rsync -av -e "ssh 10.0.0.28 -pxxxx -i /home/my_user/.ssh/id_rsa -l my_user" my_user#10.0.0.28:/backup/folder/ /backup/folder/
bash: 10.0.0.28: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [Receiver=3.1.1]
If I just extract the command inside the double quotes and run it on its own, it correctly opens the ssh connection. Does anyone know what is going on here?
You probably should not specify the address and login inside the ssh command; rsync will supply them for you. Try:
rsync -av -e "ssh -pxxxx -i /home/my_user/.ssh/id_rsa" my_user#10.0.0.28:/backup/folder/ /backup/folder
I am working on a script which will be used to transfer a file (using rsync) from a remote location and then perform some basic operations on the retrieved content.
When I initially connect to the remote location (it's not running an rsync daemon; I'm just using rsync to retrieve the files), I am placed in a non-standard shell. In order to enter the bash shell I need to enter "run util bash". Is there a way to execute "run util bash" before rsync begins to transfer the files over?
I am open to other suggestions if there is a way to do this using scp/ftp instead of rsync.
One way is to execute rsync from the server instead of from the client. An ssh reverse tunnel allows the remote server to temporarily reach back to the local machine.
Assume the local machine has an ssh server on port 22.
1. SSH into the remote host while specifying a reverse tunnel that maps a port on the remote machine (in this example, 2222) to port 22 on our local machine.
2. Execute your rsync command, replacing any reference to your local machine with the reverse tunnel address: my-local-user@localhost.
3. Add a port option to rsync's ssh so that it uses port 2222.
The command:
ssh -R 2222:localhost:22 remoteuser@remotemachine << EOF
# we are on the remote server.
# we can ssh back into the box running the ssh client via port 2222
run util bash
rsync -e "ssh -p 2222" --args /path/to/remote/source my-local-user@localhost:/path/to/local/machine/dest
EOF
Reference for passing complicated commands to ssh:
What is the cleanest way to ssh and run multiple commands in Bash?
You can achieve it using --rsync-path as well. E.g.:
rsync --rsync-path="run util bash && rsync" -e "ssh -T -c aes128-ctr -o Compression=no -x" ./tmp root@example.com:~
--rsync-path is normally used to specify what program is to be run on the remote machine to start up rsync. It is often used when rsync is not in the default remote shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in and standard-out that rsync is using to communicate.
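For the simple case mentioned there, where rsync just isn't on the remote PATH, that would look like this (paths are illustrative):
rsync --rsync-path=/usr/local/bin/rsync -av ./tmp root@example.com:~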
For more details, see the --rsync-path section of the rsync man page.
I'm running CentOS 6.
I need to upload some files every hour to another server.
I have SSH access with password to the server, but ssh keys etc. are not an option.
Can anyone help me out with a .sh script that uploads the files via scp and deletes the originals after a successful upload?
For this, I'd suggest using rsync rather than scp, as it is far more powerful. Just put the following in an executable script. Here, I assume that all the files (and nothing more) are in the directory pointed to by local_dir/.
#!/usr/bin/env bash
rsync -azrp --progress --password-file=path_to_file_with_password \
    local_dir/ remote_user@remote_host:/absolute_path_to_remote_dir/
if [ $? -ne 0 ] ; then
    echo "Something went wrong: don't delete local files."
else
    rm -r local_dir/
fi
The options are as follows (for more info, see, e.g., http://ss64.com/bash/rsync.html):
-a, --archive            Archive mode
-z, --compress           Compress file data during the transfer
-r, --recursive          Recurse into directories
-p, --perms              Preserve permissions
--progress               Show progress during transfer
--password-file=FILE     Get password from FILE
Edit: removed --delete-after, since that's not the OP's intent.
Be careful when setting the permissions for the file containing the password. Ideally, only you should have access to the file.
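For example, restricting it to your own user:
chmod 600 path_to_file_with_password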
As usual, I'd recommend playing a bit with rsync in order to get familiar with it. It is best to check the return value of rsync (using $?) before deleting the local files, as the script above does.
More information about rsync: http://linux.about.com/library/cmd/blcmdl1_rsync.htm
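Since the upload needs to run every hour, the script can be scheduled with cron; a sketch, assuming it is saved as /path/to/upload.sh (a placeholder) and marked executable:
# run `crontab -e` and add:
0 * * * * /path/to/upload.sh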