I'm trying to use rsync on Windows 7. I installed cwRsync and tried to connect to Ubuntu 9.04.
$ rsync -azC --force --more-options ./ user@server:/my/path/
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [receiver=3.0.5]
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(610) [sender=3.0.8]
The trick for me was that I had an ssh conflict.
I have Git on my Windows path, which includes ssh; cwRsync also installs ssh.
The fix is to make a batch file that sets the correct paths:
rsync.bat
@echo off
SETLOCAL
REM Put cwRsync's bin directory (and its ssh.exe) ahead of Git's on PATH
SET CWRSYNCHOME=c:\commands\cwrsync
SET HOME=c:\Users\Petah\
SET CWOLDPATH=%PATH%
SET PATH=%CWRSYNCHOME%\bin;%PATH%
%~dp0\cwrsync\bin\rsync.exe %*
On Windows you can type where ssh to check if this is an issue. You will get something like this:
where ssh
C:\Program Files (x86)\Git\bin\ssh.exe
C:\Program Files\cwRsync\ssh.exe
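If you don't want a wrapper script, you can instead point rsync at the right ssh binary explicitly with -e. A minimal sketch, assuming cwRsync lives under c:\commands\cwrsync as in the batch file above (cwRsync is Cygwin-based, so the path is given in Cygwin style; adjust it to your install):
rsync -azC --force -e "/cygdrive/c/commands/cwrsync/bin/ssh" ./ user@server:/my/path/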
I saw this when changing rsync versions. With the older version, it was enough to write:
rsync -e 'ssh ...
when rsync.exe and ssh.exe were in the same directory.
With the newer version, I had to specify the path:
rsync -e './ssh ...
and it worked.
I had this problem, but only when I tried to rsync from a Linux (Red Hat) server to a Solaris server. My fix was to make sure rsync had the same path on both boxes, and that the ownership of rsync was the same.
On the Linux box the rsync path was /usr/bin; on the Solaris box it was /usr/local/bin. So, on the Solaris box I did ln -s /usr/local/bin/rsync /usr/bin/rsync.
I still had the same problem, and noticed ownership differences: on Linux it was root:root, on Solaris it was bin:bin. Changing the Solaris side to root:root fixed it.
I had this error come up between two Linux boxes. It was easily solved by installing rsync on the remote box as well as the local one.
This error message probably means that you either mistyped the server name or forgot to start an ssh server on it. Make absolutely certain that an ssh server is running on the server at port 22, and that it's not firewalled. You can test that with ssh user@server.
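For example, two quick checks (server is a placeholder):
ssh -v user@server    # verbose output shows where the connection attempt fails
nc -vz server 22      # tests whether anything is listening on port 22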
I found the solution. I was using Cygwin, and the problem is that the rsync command for Windows works only in the Windows shell and in Windows PowerShell.
I have seen the same error a few times between two Linux boxes; it appears to be caused by incompatible versions of rsync.
I'm trying to do an rsync network copy. I'm using Homebrew's latest version of rsync. Both the source and destination machines show:
$ which rsync
/usr/local/bin/rsync
$ rsync --version
rsync version 3.1.3 protocol version 31
I can successfully scp a file from the src to dest with:
scp /Users/me/file.txt me@host.local:/Users/me/
However if I try the same with rsync:
rsync -avihX --progress --stats /Users/me/file.txt me@host.local:/Users/me/
I get the following error:
rsync: on remote machine: -vlogDtpXre.iLsfxC: unknown option
rsync error: syntax or usage error (code 1) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-52.200.1/rsync/main.c(1337) [server=2.6.9]
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.3]
I've seen other posts here, and most say it's either a bad file path or non-matching rsync versions on the destination, both of which I've ruled out.
That "[server=2.6.9]" part of the message implies you are getting a version mismatch. I'm not sure exactly how it sends the rsync command to the remote end, but it doesn't always use the same PATH (and hence version) that you get interactively. Try adding --rsync-path=/usr/local/bin/rsync (or whatever the appropriate path for rsync v3.1.3 is on the remote computer) to force it to use the right version.
I run the following command:
rsync -a toCopy/Read_Files/ toCopy/Test
and it works. However, when I try it through remote access:
rsync -a toCopy/Read_Files/ root@192.168.155.148:/NightTest/
I get the following message:
sh: rsync: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [sender=3.0.9]
This is even though I followed the instructions from this site (the section "Copy a Directory from Local Server to a Remote Server"):
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
See my answer below for what to do if you can't use rsync.
I solved the problem by using the scp command instead; it turned out one of the servers didn't have rsync.
See this solution for the scp command:
https://serverfault.com/questions/264595/can-scp-copy-directories
For rsync, this may be useful if you have errors similar to mine but rsync is installed on BOTH machines:
rsync'ing files between two remote servers, get errors stating rsync command not found on remote server
The error did indeed point to the fact that the remote machine did not have rsync:
sh: rsync: command not found
The error only occurred when running remotely, indicating that the remote machine could not find the command.
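The fix is simply to install rsync on the remote machine; the exact command depends on its package manager, for example:
sudo apt-get install rsync    # Debian/Ubuntu
sudo yum install rsync        # RHEL/CentOS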
I use the scp shell command to copy a huge folder of files.
But at some point I had to kill the running command (with Ctrl+C or kill).
To my understanding, scp copies files sequentially, so there should be only one partially copied file.
How can the same scp command be resumed so that it does not overwrite the successfully copied files and properly handles the partially copied ones?
P.S. I know I can do this kind of stuff in rsync, but scp is faster for me for some reason and I use it instead.
You should use rsync over ssh:
rsync -P -e ssh remoteuser@remotehost:/remote/path /local/path
The key option is -P, which is the same as --partial --progress
By default, rsync will delete any partially transferred file if the transfer is interrupted. In some circumstances it is more desirable to keep partially transferred files. Using the --partial option tells rsync to keep the partial file which should make a subsequent transfer of the rest of the file much faster.
Other options, such as -a (archive mode) and -z (enable compression), can also be used.
The manual: https://download.samba.org/pub/rsync/rsync.html
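Combining the options mentioned above, a typical resumable transfer looks like this (using the same placeholder paths):
rsync -azP -e ssh remoteuser@remotehost:/remote/path /local/path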
An alternative to rsync:
Use sftp with the -r option (recursively copy entire directories) and the -a option of sftp's get command ("resume partial transfers of existing files").
Prerequisite: your sftp implementation already has a get command with the -a option.
Example:
Copy the directory /foo/bar from the remote server to your local current directory. The directory bar will be created in your local current directory.
echo "get -a /foo/bar" | sftp -r user#remote_server
Since OpenSSH 6.3, you can use the reget command in sftp.
It has the same syntax as get, except that it starts the transfer from the end of an existing local file.
echo "reget /file/path" | sftp -r user#server_name
The -a switch to the get command, or the global command-line -a switch of sftp, has the same effect.
Another possibility is to try to salvage the scp you've already started when it stalls.
Press Ctrl+Z to background and stop it, then ssh over to the receiving server, log in, and exit. Now fg the scp process and watch it resume from 'stalled'!
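A sketch of the sequence, with user@receiving-host as a placeholder:
# In the terminal with the stalled scp: press Ctrl+Z to suspend it
ssh user@receiving-host    # log in, then type exit
fg                         # bring scp back to the foreground; it should resume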
When rsync also stalled after a couple of seconds despite initially running fine, I ended up with the following brute-force solution that starts, kills, and restarts the download every 60 seconds:
cat run_me.sh
#!/bin/bash
# Brute force: restart the download every 60 seconds; --partial lets
# each attempt resume where the previous one left off.
# Note: the loop runs forever; stop it with Ctrl+C once the transfer completes.
while true
do
rsync --partial --progress --rsh=ssh user@host:/path/file.tgz file.tgz &
TASK_PID=$!
sleep 60
kill $TASK_PID
sleep 2
done
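A variant that stops by itself once the transfer completes: a sketch that caps each attempt with timeout from GNU coreutils (assumed to be installed) instead of backgrounding and killing rsync.
#!/bin/bash
# Retry until rsync exits cleanly; each attempt is capped at 60 seconds.
until timeout 60 rsync --partial --progress --rsh=ssh user@host:/path/file.tgz file.tgz
do
sleep 2
done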
You can make use of the --rsh and -P options of rsync. -P keeps partial transfers (and shows progress), and --rsh tells rsync to run the transfer over the ssh protocol.
The complete command would be:
rsync -P --rsh=ssh remoteuser@remotehost:/remote/path /local/path
I got the same issue yesterday, transferring a huge SQL dump via scp, and I got lucky with wget --continue the_url.
This blog post explains it quite well:
http://www.cyberciti.biz/tips/wget-resume-broken-download.html
Basically:
wget --continue url
This is a rather odd and embarrassing situation for all involved.
Suppose someone (cough cough not me cough cough) accidentally ran chmod 000 on my home directory on a remote server.
I had been using ssh keys to login, since I figured I would forget the actual password on the remote host (which I have). However, now that my home directory has 000 perms, the ssh key in ~/.ssh/authorized_keys is unreadable, and ssh forces me to put in a password that I have long since forgotten.
Also, I don't have sudo superpowers on the remote server.
HOWEVER, I happen to have an ssh session open to the remote server that was started before someone (cough) ran chmod 000 on my home directory.
All of this happened while I was trying to upload some files from my local host to a publicly accessible directory in my home directory.
CAN I STILL UPLOAD FILES FROM MY LOCAL MACHINE TO THE REMOTE MACHINE WITHOUT NEEDING A NEW SSH SESSION?!
I figure I could at least put them in /tmp or something for now.
Yes you can!
Press Enter, then ~, then Shift+C to open the ssh command line.
Enter -L 12345:localhost:12345 to forward a new port over your existing SSH connection.
Run nc -l -p 12345 | tar xzv in your remote ssh session.
Run tar czv FileOrDir1 FileOrDir2 Etc | nc localhost 12345 on your local system.
The files will now transfer over your existing ssh connection, and will appear in the current dir of your remote session.
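Putting the steps together (port 12345 and the file names are the answer's own placeholders):
# On the local machine, in the existing ssh session:
#   press Enter, then ~C, and at the ssh> prompt type: -L 12345:localhost:12345
# In the already-open remote session, listen and unpack:
nc -l -p 12345 | tar xzv
# In a second terminal on the local machine, pack and send:
tar czv FileOrDir1 FileOrDir2 | nc localhost 12345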
Why you would want to do this instead of just chmod 711 ~ is beyond me though.
I am running an rsync script that is very simple:
#!/bin/sh
rsync -avz --delete <path> user@hostname:<dest path>
I use it every day and it works fine; today it seems I can't run it and I am not sure why. The biggest change I made to my system was updating Java.
The behavior is that it just looks like it hangs, and if I let it run long enough I get:
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [sender=2.6.9]
I am able to rsync from my host to my machine but not vice versa.
The problem was in my .bash_profile: I removed it and the script worked again.
The odd thing is that when I put it back, it still works.
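If this happens to you, a common cause is a startup file that prints output: anything written during a non-interactive ssh session gets mixed into rsync's protocol stream and corrupts it. A quick check (hostname is a placeholder); if this prints anything at all, your startup files are likely the culprit:
ssh user@hostname /bin/true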