Just for running a test I want to put an image file into one of my instance folders from my Desktop.
I've tried solutions provided at this same topic question:
Rsync to Amazon Ec2 Instance
So I've tried:
sudo rsync -azv --progress -e "ssh -i ~/.ssh/MyKeyPair.pem" \
~/Desktop/luffy.jpg \
ec2-user@xx.xx.xx.xxx:/home/ec2-user/myproject/mysite/mysite/media
~/.ssh/ is where MyKeyPair.pem is located. In fact, to log in via ssh I first cd ~/.ssh and then run the ssh -i ... command.
But I'm getting this error:
Warning: Identity file ~/.ssh/MyKeyPair.pem not accessible: No such file or directory.
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
On another Q&A page I read that someone who got this same error solved it by installing rsync via yum. In my case rsync is already installed (version 3.0.6).
I would be grateful if anyone can help!
For copying local files to EC2, the rsync command should be run on your local system, not on the EC2 instance.
The tilde (~) will not be shell expanded to your home directory if it is inside quotes. Try using $HOME instead.
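A quick way to see the difference (the path here is just illustrative):

echo ~/.ssh        # unquoted: the shell expands this to /home/you/.ssh
echo "~/.ssh"      # quoted: the literal string ~/.ssh is passed through
echo "$HOME/.ssh"  # quoted: $HOME is still expanded to /home/you/.ssh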
If you are using sudo on the local side, then you probably want to use sudo on the remote (e.g., to copy over file ownerships). This can be done with the appropriate --rsync-path option.
I recommend including the options -SHAX to more closely preserve the files on the target system.
If "media" is supposed to be a subdirectory, then a trailing slash will help avoid some oddities if it does not currently exist.
End result:
sudo rsync -azv -SHAX --progress -e "ssh -i $HOME/.ssh/MyKeyPair.pem" \
--rsync-path "sudo rsync" \
~/Desktop/luffy.jpg \
ec2-user@xx.xx.xx.xxx:/home/ec2-user/myproject/mysite/mysite/media/
Here's an old article where I write about using rsync with EC2 instances. You can replace "ubuntu" with "ec2-user" for Amazon Linux.
http://alestic.com/2009/04/ubuntu-ec2-sudo-ssh-rsync
If this does not solve your problem, please provide more details about the exact command you are running, where you are running it, and the exact error messages you are getting.
Great! This worked with a slight modification: I removed sudo from the remote --rsync-path:
sudo rsync -azv --progress -e "ssh -i $HOME/<path_to>" \
--rsync-path "rsync" \
<source> \
<target>
I'm trying to use a Vagrantfile I received to set up a VM in Ubuntu with VirtualBox.
After using the vagrant up command I get the following error:
File provisioner:
* File upload source file /home/c-server/tools/appDeploy.sh must exist
appDeploy.sh does exist in the correct location and looks like this:
#!/bin/bash
#
# Update the app server
#
# Fetch the latest deploy bundle from S3
/usr/local/bin/aws s3 cp s3://dev-build-ci-server/deploy.zip /tmp/.
cd /tmp
# Extract only the deploy script, move it to /tmp, and clean up
unzip -o deploy.zip vagrant/tools/deploy.sh
cp -f vagrant/tools/deploy.sh /tmp/.
rm -rf vagrant
# Make it executable, normalize line endings, then run and remove it
chmod +x /tmp/deploy.sh
dos2unix /tmp/deploy.sh
./deploy.sh
rm -rf ./deploy.sh ./deploy.zip
#
sudo /etc/init.d/supervisor stop
sudo /etc/init.d/supervisor start
#
Since the script exists in the correct location, I'm assuming it's looking for something else (maybe something that should exist on my local computer). What that is, I am not sure.
I did some research into what the file provisioner is and what it does but I cannot find an answer to get me past this error.
It may well be relevant that this Vagrantfile works correctly on Windows 10, but I need to get it working on Ubuntu.
In your Vagrantfile, check that the filenames are capitalized correctly. Windows isn't case-sensitive but Ubuntu is.
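For example, a quick case-insensitive search (using the path from the error message) will expose any casing mismatch between the name on disk and the name in the Vagrantfile:

ls /home/c-server/tools/ | grep -i appdeploy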
We have an Ubuntu server which runs Nginx for hosting webapps. We deploy to that server with a shell script that contains an rsync command. We only want to transfer files whose content has changed (not just metadata). But when I deploy after another user has deployed before me, all of my files are reported as changed. Because of this we can't see whether only the latest changes are being deployed (or whether we are missing some files from a submodule). When I run rsync multiple times from my own environment, changes are reported as expected.
Example:
rsync -rltz --progress --stats --delete \
--perms \
--chmod=u=rwX,g=rwX,o=rX \
--exclude='node_modules' \
--rsh "ssh" \
--rsync-path "sudo rsync" sourceDir user#domain:targetDir
Does anyone have any idea how files can be transferred from multiple users to a server only when there are content changes?
If I understand correctly, you are looking for the -c (--checksum) option.
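For example, adding it to your existing command (everything else unchanged):

rsync -rltzc --progress --stats --delete \
--perms \
--chmod=u=rwX,g=rwX,o=rX \
--exclude='node_modules' \
--rsh "ssh" \
--rsync-path "sudo rsync" sourceDir user@domain:targetDir

With --checksum, rsync decides whether to transfer a file by comparing checksums of its content rather than its modification time and size, so byte-identical files are skipped even when different users deployed them at different times.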
Using Vagrant 1.8.1, when trying to do a "vagrant up --provider virtualbox" for a box that has already been init'd, I get an "Error: Could not create directory '/home/username/.ssh'."
Per the directions at https://atlas.hashicorp.com/centos/boxes/7, I ran the following command:
vagrant init centos/7; vagrant up --provider virtualbox
Output:
There was an error when attempting to rsync a synced folder.
Please inspect the error message below for more info.
Host path: /cygdrive/c/VMs/vagrant/centos7-util/
Guest path: /home/vagrant/sync
Command: rsync --verbose --archive --delete -z --copy-links --chmod=ugo=rwX --no-perms --no-owner --no-group --rsync-path sudo rsync -e ssh -p 2222 -o ControlMaster=auto -o ControlPath=C:/DEV/cygwin64/tmp/ssh.540 -o ControlPersist=10m -o StrictHostKeyChecking=no -o IdentitiesOnly=true -o UserKnownHostsFile=/dev/null -i 'C:/VMs/vagrant/centos7-util/.vagrant/machines/default/virtualbox/private_key' --exclude .vagrant/ /cygdrive/c/VMs/vagrant/centos7-util/ vagrant@127.0.0.1:/home/vagrant/sync
Error: Could not create directory '/home/username/.ssh'.
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
mm_receive_fd: no message header
process_mux_new_session: failed to receive fd 0 from slave
mux_client_request_session: read from master failed: Connection reset by peer
Failed to connect to new control master
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.2]
I originally didn't have a /home/username/.ssh directory, and so I tried with a manually created directory, and then also with a symlink to my existing c:/users/username/.ssh directory, but always get this same error.
Update: I tried reverting to Vagrant 1.7.4 and get the same error. This also happens when running vagrant up from Git Bash, Cygwin, or a Windows cmd prompt.
In my case, it appears that this error only occurs with this specific box. After much additional troubleshooting, I finally found that almost any other box works fine, e.g. https://github.com/CommanderK5/packer-centos-template/releases/download/0.7.1/vagrant-centos-7.1.box.
I hope that this saves someone else some time.
Under Windows, rsync will try to update the %HOME%/.ssh/known_hosts file. If HOME is not defined as an environment variable, rsync may try to create or update that file somewhere it has no permissions, and fail. Solution: set the user environment variable HOME to %USERPROFILE%.
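For example, from a cmd prompt (setx persists the variable for your user, but only consoles opened afterwards will see it):

setx HOME "%USERPROFILE%"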
source: https://github.com/mitchellh/vagrant-aws/wiki/Common-Pitfalls
I am currently trying to set up Kubernetes on a multi-container Docker-on-CoreOS stack for AWS. To do this I need to set up etcd for flannel. I am currently following this guide, but I am having problems at the first stage, where it suggests running
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
The problem is the first command,
docker -d -H unix:///var/run/docker-bootstrap.sock
when run from within boot2docker. There is no docker-bootstrap.sock file in that directory, and this error is thrown:
FATA[0000] An error occurred trying to connect: Post https:///var/run/docker-bootstrap.sock/v1.18/containers/create: dial unix /var/run/docker-bootstrap.sock: no such file or directory
Clearly the connection failed because the socket file does not exist.
I will note this is a very similar problem to this ticket and other tickets regarding the FATA[0000] though none seem to have asked the question in the way I currently am.
I am not an expert in Unix sockets, but I assume there should be a file where there is not one. How can I get this file, or what are the recommended steps to resolve this?
Specs: running OS X Yosemite, but calling all commands from boot2docker.
Docker should create this file for you. Are you running this command on your OS X machine, or inside the boot2docker VM?
I think you need to:
boot2docker ssh
Then:
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
You need to make sure that command runs on the Linux VM that boot2docker creates, not on your OS X machine.
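Once the daemon is running inside the VM, you can confirm the socket now exists:

ls -l /var/run/docker-bootstrap.sock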
Hope that helps!
I'm very new to lftp, so forgive my ignorance.
I just ran a dry run of my lftp script, which consists basically of a line like this:
mirror -Rv -x regexp --only-existing --only-newer --dry-run /local/root/dir /remote/dir
When it prints what it's going to do, it wants to chmod a bunch of files - files which I grabbed from svn, never modified, and which should be identical to the ones on the server.
My local machine is Ubuntu, and the remote is a Windows server. I have a few questions:
Why is it trying to do that? Does it try to match file permissions from the local with the remote?
What will happen when it tries to chmod the files? As I understand it, Windows doesn't support chmod - will it just fail gracefully and leave the files alone?
Many thanks!
Use the -p option and it shouldn't try to change permissions. I've never mirrored to a Windows host, but you are correct that it shouldn't do anything to the permissions on the Windows box.
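For example, your dry run from above with -p (--no-perms) added:

mirror -Rv -p -x regexp --only-existing --only-newer --dry-run /local/root/dir /remote/dir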
I think that you should try
lftp -e "mirror -R $localPath $remotePath; chmod -R 777 $remotePath; bye" -u $username,$password $host