How to resume scp with partially copied files?

I use the scp shell command to copy a huge folder of files.
But at some point I had to kill the running command (with Ctrl+C or kill).
To my understanding, scp copies files sequentially, so there should be only one partially copied file.
How can the same scp command be resumed so that it does not overwrite the successfully copied files and properly handles the partially copied one?
P.S. I know I can do this kind of thing with rsync, but scp is faster for me for some reason, so I use it instead.

You should use rsync over ssh
rsync -P -e ssh remoteuser@remotehost:/remote/path /local/path
The key option is -P, which is the same as --partial --progress
By default, rsync will delete any partially transferred file if the transfer is interrupted. In some circumstances it is more desirable to keep partially transferred files. Using the --partial option tells rsync to keep the partial file which should make a subsequent transfer of the rest of the file much faster.
Other options, such as -a (archive mode) and -z (enable compression), can also be used.
The manual: https://download.samba.org/pub/rsync/rsync.html
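As a quick sketch of the resume workflow (the paths are placeholders): run the transfer, interrupt it, then run the exact same command again; with -P the second run reuses the partial file instead of starting over.
rsync -P -e ssh remoteuser@remotehost:/remote/path/ /local/path/
# ... interrupted with Ctrl+C mid-transfer ...
rsync -P -e ssh remoteuser@remotehost:/remote/path/ /local/path/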

An alternative to rsync:
Use sftp with the -r option (recursively copy entire directories) and the -a option of sftp's get command ("resume partial transfers of existing files").
Prerequisite: your sftp implementation already has a get with the -a option.
Example:
Copy directory /foo/bar from the remote server to your local current directory. Directory bar will be created in your local current directory.
echo "get -a /foo/bar" | sftp -r user@remote_server

Since OpenSSH 6.3, you can use the reget command in sftp.
It has the same syntax as get, except that it starts the transfer from the end of an existing local file.
echo "reget /file/path" | sftp -r user@server_name
The -a switch to the get command, or the global command-line -a switch of sftp, has the same effect.
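For example, the global -a switch combined with sftp's direct-fetch syntax does the same in one line (the path is a placeholder):
sftp -a user@server_name:/file/path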

Another possibility is to try to salvage an scp you've already started when it stalls.
Press Ctrl+Z to stop and background it, then ssh over to the receiving server, log in, and exit. Now fg the scp process and watch it resume from 'stalled'!
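As a terminal sketch of that procedure (host and paths are hypothetical):
$ scp /big/file user@host:/dest/
# ... transfer stalls ...
^Z
[1]+  Stopped    scp /big/file user@host:/dest/
$ ssh user@host    # log in to the receiving server...
$ exit             # ...and right back out
$ fg               # scp resumes from 'stalled'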

When rsync stalled as well after a couple of seconds (despite initially running fine), I ended up with the following brute-force solution to start, stop, and restart the download every 60 seconds:
cat run_me.sh
#!/bin/bash
# restart the transfer every 60 seconds; --partial lets each run resume
while true
do
    rsync --partial --progress --rsh=ssh user@host:/path/file.tgz file.tgz &
    TASK_PID=$!
    sleep 60
    kill $TASK_PID
    sleep 2
done
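A variant of the same idea (a sketch, assuming GNU coreutils' timeout is available) avoids the manual kill by rerunning rsync under timeout until it exits cleanly:
#!/bin/bash
# rerun the transfer every 60s until rsync completes successfully;
# --partial lets each run pick up where the previous one stopped
until timeout 60 rsync --partial --progress --rsh=ssh user@host:/path/file.tgz file.tgz
do
    sleep 2
done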

You can make use of the --rsh and -P options of rsync. -P enables partial transfers (with progress), and --rsh makes the transfer go over the ssh protocol.
The complete command would be:
rsync -P --rsh=ssh remoteuser@remotehost:/remote/path /local/path

I got the same issue yesterday, transferring a huge sql dump via scp, and I got lucky with wget --continue the_url. Note that this only works if the file is also reachable over HTTP, e.g. exposed by a web server.
This blog post explains it quite well: http://www.cyberciti.biz/tips/wget-resume-broken-download.html. Basically:
wget --continue url

Related

rsync over ssh results in 0 files, but no error message

I'm trying to rsync a large directory of around 200 GB from a server to my local external hard drive. I can ssh onto the server and see the directory fine. I can also cd into the external hard drive fine. When I try and rsync the file across, I don't get an error, but the last line of the rsync output is 'total size is 0 speedup is 0.00', and there are no files in the destination directory.
Here's how I ssh onto the server successfully:
ssh skgtmdf@live.rd.ucl.ac.uk
Here's my rsync command:
rsync -avrt -progress -e "ssh skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
And here's the rsync output:
sending incremental file list
drwxrwxrwx 65,536 2022/08/10 21:32:06 .
sent 57 bytes received 64 bytes 242.00 bytes/sec
total size is 0 speedup is 0.00
What am I doing wrong?
The way you have it quoted, the source path is part of the remote shell option (-e value) rather than a separate argument as it should be.
rsync -avrt -progress -e "ssh skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is all part of the `-e` option value
This means rsync doesn't see that as a sync source at all, but just part of the command it'll use to connect to the remote system. I'm not sure why this doesn't lead to an error. In any case, the fix is simple: don't include ssh with the source path.
As I noticed later (see comments), the --progress option needs a double dash or it will be wildly misparsed. Fixing both of these things gives:
rsync -avrt --progress -e ssh "skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
In fact, since ssh is the default command for making a remote connection, you can leave off -e ssh entirely:
rsync -avrt --progress "skgtmdf@live.rd.ucl.ac.uk:/mnt/gpfs/live/rd01__/ritd-ag-project-rd012x-smmca23/" "/Volumes/DUAL DRIVE/ONP/2022.08.10_results/"
rsync -azve ssh user@host:/src/ target/
Normally you don't need to wrap the -e flag's value in quotes; the quotes are probably mangling the connection string.

Copy website from server to local in terminal

I've had a look on Google and here on Stack Overflow but can't find a good example of how to do this.
All I basically want to do is SSH into a server, copy all the site files, and paste them into a folder on my computer.
I normally use git, but this is an old site which has not been set up with git, so I just want a quick way to copy from the server, as FTP sucks!
A simple process with commands for the terminal would be great!
Check out rsync. It can operate over ssh, and it also honors ssh aliases, which you might want to look into when copying files over. Like git, it syncs only the differences between the two sides.
The advantage of rsync over scp or sftp is that it can resume a download if interrupted, uses little bandwidth since it sends change sets instead of entire files (unless a file doesn't yet exist on one side), and can do one- or two-way sync depending on your preference.
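For instance, a sketch assuming the site lives in /var/www/mysite on the server (the paths are hypothetical):
# -a preserves permissions and times, -v is verbose, -z compresses in transit
rsync -avz user@server:/var/www/mysite/ ~/local-copy-of-site/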
ssh USER@SERVER "tar zcvf - /DUMP_DIR" | cat > /OUT_DIR/FILE_NAME_OF_ARCH
or
(rsync -avz --delete /DUMP_DIR USER@SERVER:/OUT_DIR &)
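To unpack the archive produced by the tar-over-ssh pipe above (tar strips the leading / from stored paths):
tar zxvf /OUT_DIR/FILE_NAME_OF_ARCH -C /some/local/dir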
Look at SCP.
scp username@remotehost.com:/directoryname/* /some/local/directory
Use scp, for example:
scp -P 2222 json-serde-1.1.8-SNAPSHOT-jar-with-dependencies.jar root@127.0.0.1:
Hope that helps!

How to add confirm in shell script in sftp mode [duplicate]

I have a shell script which performs some renaming and archiving steps, and I have added sftp commands to copy multiple files. When I try to log in to the remote machine through PuTTY, it asks for confirmation: Are you sure you want to continue connecting (yes/no)? I need to enter yes, but since this is being done through the script, I am not sure how to do that. Below is the script I am using:
cd <File Source Location>
sftp user1@remoteserver
sftp> cd <target location to be copied>
sftp> mput *.gz
quit
How do I pass yes in the above code after sftp user1@remoteserver is executed?
Any help is much appreciated.
I think that you are trying to solve the wrong problem.
sftp asks you for confirmation because it does not know the host's key yet. Therefore you need to add it to your known_hosts file like this:
ssh-keyscan -H remoteserver >> ~/.ssh/known_hosts
I recommend using the scp command instead of sftp, as it will do all you want in one step:
scp somewhere/*.gz user1@remoteserver:somewhere/else
If for some reason you don't want to do that, you may consider a very insecure option:
sftp -o StrictHostKeyChecking=no user1@remoteserver
By using the command above you are vulnerable to a man-in-the-middle attack; you've been warned.
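Incidentally, the sftp commands in the question's script won't actually be fed to sftp as written; batch mode is the usual non-interactive way to drive it. A sketch using -b - to read commands from stdin, assuming the host key has been added as above and authentication is non-interactive (e.g. a key or agent):
sftp -b - user1@remoteserver <<'EOF'
cd /target/location
mput *.gz
quit
EOF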

Run ssh and immediately execute command [duplicate]

I'm trying to find a UNIX or bash command that runs a command after connecting to an ssh server. For example:
ssh name@ip "tmux list-sessions"
The above works; it lists the sessions, but it then immediately disconnects. Putting the command in the sshrc on the server side works, but I need to be able to type it client side. I want it to log in, open up the window, and then run the command I've set. I've tried:
[command] | ssh name@ip
ssh name@ip [command]
ssh name@ip "[command]"
ssh -t name@ip [command]
ssh -t 'command; bash -l'
will execute the command and then start up a login shell when it completes. For example:
ssh -t user@domain.example 'cd /some/path; bash -l'
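Applied to the question's example, that becomes:
ssh -t name@ip 'tmux list-sessions; bash -l'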
This isn't quite what you're looking for, but I've found it useful in similar circumstances.
I recently added the following to my $HOME/.bashrc (something similar should be possible with shells other than bash):
# if this marker file exists, pre-load 'screen -dr' into the shell history
if [ -f "$HOME/.add-screen-to-history" ] ; then
    history -s 'screen -dr'
fi
I keep a screen session running on one particular machine, and I've had problems with ssh connections to that machine being dropped, requiring me to re-run screen -dr every time I reconnect.
With that addition, and after creating that (empty) file in my home directory, I automatically have the screen -dr command in my history when my shell starts. After reconnecting, I can just type Control-P Enter and I'm back in my screen session -- or I can ignore it. It's flexible, but not quite automatic, and in your case it's easier than typing tmux list-sessions.
You might want to make the history -s command unconditional.
This does require updating your $HOME/.bashrc on each of the target systems, which might or might not make it unsuitable for your purposes.
You can use the LocalCommand command-line option if the PermitLocalCommand option is enabled:
ssh username@hostname -o LocalCommand="tmux list-sessions"
Note, however, that LocalCommand is executed on the local machine after successfully connecting, not on the remote host. For more details about the available options, see the ssh_config man page.
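With newer OpenSSH (7.6 and later), the RemoteCommand option does run the command on the remote side, which is closer to what the question asks; a sketch:
ssh -t -o RemoteCommand='tmux list-sessions; bash -l' name@ip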

Cronjob on CentOS, upload files via scp and delete on success

I'm running CentOS 6.
I need to upload some files every hour to another server.
I have SSH access with password to the server. But ssh-keys etc. is not an option.
Can anyone help me out with a .sh script that uploads the files via scp and deletes the originals after a successful upload?
For this, I'd suggest using rsync rather than scp, as it is far more powerful. Just put the following in an executable script. Here, I assume that all the files (and nothing more) are in the directory pointed to by local_dir/.
#!/usr/bin/env bash
rsync -azrp --progress --password-file=path_to_file_with_password \
      local_dir/ remote_user@remote_host:/absolute_path_to_remote_dir/
if [ $? -ne 0 ] ; then
    echo "Something went wrong: don't delete local files."
else
    rm -r local_dir/
fi
The options are as follows (for more info, see, e.g., http://ss64.com/bash/rsync.html):
-a, --archive Archive mode
-z, --compress Compress file data during the transfer
-r, --recursive recurse into directories
-p, --perms Preserve permissions
--progress Show progress during transfer
--password-file=FILE Get password from FILE
--delete-after Receiver deletes after transfer, not during
Edit: --delete-after is listed for reference only; it was removed from the command above, since deleting on the receiving side is not the OP's intent.
Be careful when setting the permissions on the file containing the password: ideally only you should have access to the file. Also note that --password-file only applies when connecting to an rsync daemon; over a plain ssh transport the password cannot be supplied this way.
As usual, I'd recommend playing a bit with rsync to get familiar with it. It is best to check the return value of rsync (using $?) before deleting the local files, as the script above does.
More information about rsync: http://linux.about.com/library/cmd/blcmdl1_rsync.htm
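Tying this back to the question's hourly-upload requirement: since --password-file does not work over plain ssh, a sketch using sshpass (assuming it is installed; paths are placeholders) plus a crontab entry could look like this:
#!/usr/bin/env bash
# upload.sh - push the files with a stored password; delete only on success
# the password file should be readable only by you (chmod 600)
if sshpass -f "$HOME/.upload_pass" \
     rsync -az local_dir/ remote_user@remote_host:/absolute_path_to_remote_dir/
then
    rm -r local_dir/
fi
Run it at the top of every hour via crontab -e:
0 * * * * /path/to/upload.sh >> /var/log/upload.log 2>&1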
