sudo rsync locally, lack of permission - macOS

What is the command to locally rsync a bunch of folders that rsync, by default, doesn't have the necessary rights for? (In the terminal I have to sudo rsync for that.) But in a shell script it works a little differently.
I have been reading about
rsync --rsync-path="sudo rsync" -aq...
and another said:
rsync --rsh="ssh me#Mac sudo" -aq...
All the others are talking about remote rsyncing, but none of the local approaches seem to work.
Can someone shed some light on this?
Cheers!

If you work locally, just run rsync with escalated rights and leave out all the remote-related options, like so:
sudo rsync -avP source target
You can of course replace -avP to fit your needs regarding what information gets transferred and what output is generated.
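For example, a sketch with purely hypothetical paths (adjust them to your own setup):
sudo rsync -avP "/Library/Application Support/SomeApp/" "/Volumes/Backup/SomeApp/"
The trailing slash on the source copies the directory's contents rather than the directory itself; leave it off if you want the folder itself created inside the target.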

Related

Rsync error: The source and destination cannot both be remote

I am able to rsync files from my remote server to my local machine.
rsync -av [remote-server]:web/wp-content/uploads
Users/[username]/Documents/workprojects/iis-tech/web/wp-content
That's fine, but when I try to rsync from my local machine to my remote server I get the following error:
The source and destination cannot both be remote.
rsync error: syntax or usage error
The command I am running is as follows:
rsync -av /Users/[username]/Documents/workprojects/[project-folder]/web/wp-content/themes/reactify/js/build/
[remote-server]:web/wp-content/themes/reactify/js/build
I separated the command for readability.
I am using Platform.sh as the host if that makes a difference, but I don't think that is the issue.
I am confused because my coworkers are able to run the same command successfully.
Any help is appreciated!
Use rsync -av /cygdrive/c.... /cygdrive/d/....
That is, replace C: with /cygdrive/c. Under Cygwin, rsync treats anything before a colon as a remote host name, so a Windows path like C:\... looks like a second remote to it.
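For example, a hypothetical copy of a project folder from the C: drive to the D: drive under Cygwin could look like this:
rsync -av /cygdrive/c/Users/me/project/ /cygdrive/d/backup/project/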

How to make a bash function to run a bidirectional rsync

I have a local folder and a remote one on a server I connect to over SSH. I don't have admin privileges, so installing new packages (to use unison, for example) is not possible. I have to sync these two folders quite often and they are also big. From here I know that to sync in both directions I have to run rsync twice. Once from server to local:
rsync -Przzuve ssh user@server:/path/to/remote/folder/* /path/to/local/folder
and then the other way around, from local to server:
rsync -Przzuve ssh /path/to/local/folder/* user@server:/path/to/remote/folder
What I want to have is a single command like:
rsyncb /path/to/local/folder user@server:/path/to/remote/folder
to just sync the contents of the two folders in both directions in one command, without having to worry about the -* options and the /* at the end of the first path...
I found this about making a bash function with arguments, but I do not understand how to implement what I want. I would appreciate it if you could help me with this.
Just define a function:
rsyncb() {
    rsync -Przzuve ssh "$1"/* "$2"
    rsync -Przzuve ssh "$2"/* "$1"
}
Then use it like this:
rsyncb user@server:/path/to/remote/dir /path/to/local/dir
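If it helps, a small variant of the function (my own sketch, not part of the answer above) relies on rsync's trailing-slash semantics instead of the shell glob, which also picks up hidden files:
rsyncb() {
    # a trailing slash means "the contents of this directory"
    rsync -Przzuve ssh "$1"/ "$2"/
    rsync -Przzuve ssh "$2"/ "$1"/
}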

Copy website from server to local in terminal

I've had a look on Google and here on Stack but can't find a good example of how to do this.
All I basically want to do is SSH into a server, copy all the site files, and paste them into a folder on my computer.
I normally use git, but this is an old site which has not been set up with git, so I just wanted to know a quick way to copy from the server, as FTP sucks!
A simple process with commands for terminal would be great!
Check out rsync. It can operate over ssh. You might also want to look into ssh aliases (which it also honors) when copying files over, and it's what git uses to sync only the differences between two repositories.
The advantage of rsync over SCP or SFTP is that it can resume download if interrupted, takes little bandwidth to sync since it sends change sets instead of entire files (unless the file doesn't yet exist on one side), and can do one- or two-way sync depending on your preference.
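For example, a one-way pull of a site over ssh might look like this (host and paths are hypothetical placeholders):
rsync -avz --progress user@example.com:/var/www/mysite/ ~/Sites/mysite/
Re-running the same command later only transfers files that have changed since the last sync.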
ssh USER@SERVER "tar zcvf - /DUMP_DIR" | cat > /OUT_DIR/FILE_NAME_OF_ARCH
or
(rsync -avz --delete /DUMP_DIR USER@SERVER:/OUT_DIR &)
Look at SCP.
scp username@remotehost.com:/directoryname/* /some/local/directory
Use scp, for example:
scp -P 2222 json-serde-1.1.8-SNAPSHOT-jar-with-dependencies.jar root@127.0.0.1:
Hope that helps!

Cronjob on CentOS, upload files via scp and delete on success

I'm running CentOS 6.
I need to upload some files every hour to another server.
I have SSH access with a password to the server, but SSH keys etc. are not an option.
Can anyone help me out with a .sh script that uploads the files via scp and deletes the originals after a successful upload?
For this I'd suggest using rsync rather than scp, as it is far more powerful. Just put the following in an executable script. Here, I assume that all the files (and nothing more) are in the directory pointed to by local_dir/.
#!/usr/bin/env bash
rsync -azrp --progress --password-file=path_to_file_with_password \
    local_dir/ remote_user@remote_host:/absolute_path_to_remote_dir/
if [ $? -ne 0 ]; then
    echo "Something went wrong: don't delete local files."
else
    rm -r local_dir/
fi
The options are as follows (for more info, see, e.g., http://ss64.com/bash/rsync.html):
-a, --archive Archive mode
-z, --compress Compress file data during the transfer
-r, --recursive recurse into directories
-p, --perms Preserve permissions
--progress Show progress during transfer
--password-file=FILE Get password from FILE
--delete-after Receiver deletes after transfer, not during
Edit: removed --delete-after, since that's not the OP's intent
Be careful when setting the permissions on the file containing the password. Ideally, only you should have access to the file.
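For example, using the placeholder path from the script above:
chmod 600 path_to_file_with_password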
As usual, I'd recommend playing a bit with rsync to get familiar with it. It is best to check the return value of rsync (using $?) before deleting the local files.
More information about rsync: http://linux.about.com/library/cmd/blcmdl1_rsync.htm
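To run the script every hour, a crontab entry along these lines should work (the script and log paths are hypothetical, adjust to your setup); add it with crontab -e as the user that owns the files:
0 * * * * /home/user/upload.sh >> /home/user/upload.log 2>&1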

cp command fails when run in a script called by Hudson

This one is a puzzler. If I run a command from the command line to copy a file remotely, it works perfectly. If I run that same command inside a script on the server (that hosts Hudson), it runs perfectly as well, and the same goes for running the job as hudson from the command line. However, if I run that exact command as a function inside a bash script from a Hudson job, it fails with:
cp: cannot stat '/opt/flash_board.tar.gz': No such file or directory
The variable is defined as:
original_tarball=flash_board.tar.gz
and is in scope (variable expansion works correctly in the script).
The original command is:
ssh -n -o stricthostkeychecking=no root@$IP_ADDRESS ssh -n -o stricthostkeychecking=no 169.254.0.2 cp /opt/$original_tarball /opt/$original_tarball.bak
I've also tried it as:
ssh -n -p 1601 -o stricthostkeychecking=no root@$IP_ADDRESS cp /opt/$original_tarball /opt/$original_tarball.bak
which points to the correct port, but fails in exactly the same way.
For reference, all the variables have been checked and are valid. I originally thought this was a substitution error, but that doesn't seem to be the case, so I then tried running it with Hudson credentials as:
sudo -u hudson ssh -n -o stricthostkeychecking=no root@$IP_ADDRESS ssh -n -o stricthostkeychecking=no 169.254.0.2 cp /opt/$original_tarball /opt/$original_tarball.bak
I get the exact same results (it works). So it's only when this command is run from a Hudson job that it fails.
Here's the sequence of events:
Hudson job sets parameters & calls a shell script.
A function inside the script tries to copy the files remotely from an embedded Montevista (Linux) board across an SPI bus to a second embedded Arago (Linux) board.
Both boards are physically on the same motherboard, but there's no way to directly access the Arago board except through a serial console session (which isn't feasible; this is an automation job that runs across the network).
I've tried this using ssh with -p 1601 (the correct port to the Arago side).
Can I use scp to copy a remote file to the same location as the remote file with a different file extension?
Something like:
scp -o stricthostkeychecking=no root@$IP_ADDRESS /opt/$original_tarball /opt/$original_tarball.bak
I had a couple of the devs take a look at this and they were stumped as well. Has anyone got any ideas on (A) why this fails and (B) how to work around it? I'm pretty sure I can write a script to run locally on the remote machine, but that doesn't seem like it should be necessary.
Oh, and if I run the exact same command on the Montevista board (which means I don't have to go across the SPI bus to 169.254.0.2), it works perfectly from the Hudson job.
So, this turned out to be something completely unrelated to the question. I broke the problem down into little pieces with a test Hudson script, adding more and more complexity from the original script until it failed as before.
It turned out to be pilot error. I'd written an if statement to differentiate between the two boards (Arago & Montevista), and then abstracted out the variables passed to the if statement to the point where it was ambiguous which board was being passed in. The if logic always grabbed the first match (as it should), and the flash script I was trying to copy on the Arago board didn't exist on the Montevista board (well, it has a different name), so the error returned was absolutely correct.
Sorry for the spin up and thanks for all the effort to help.
cp: cannot stat '/opt/flash_board.tar.gz': No such file or directory
This is saying that Hudson cannot see the file. I would do an ls -la /opt in that shell script of yours. This will show you the permissions on the /opt directory, and whether your script can list that file.
While you're at it, run df on the Hudson machine too and see if that /opt directory is a remote mount or something else that could be problematic.
You've already said that you logged in as the user that runs the Hudson task and executed it from the workspace directory.
Right now, I suspect that the directory permission is an issue.
The obvious way this goes wrong is that somehow the command is being run on the wrong machine, possibly due either to a line-length limit or to weird quoting issues.
I'd try changing the command to … uname -a or … hostname -f to see if you get the right machine. Or, alternatively, … cp /proc/cpuinfo /tmp/this-machine and then see which machine gets the file.
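For example, keeping the (hypothetical) variables from the original command, a debugging variant of the nested ssh could be:
ssh -n -o stricthostkeychecking=no root@$IP_ADDRESS ssh -n -o stricthostkeychecking=no 169.254.0.2 hostname -f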
Edit: I see now that the OP has answered his own question. I'll leave this here in case it helps any future visitors with similar issues. I should add "or not running the command you think you're running" to the reasons why this could happen.
