rsync run from bash script not preserving ownership - bash

I'm trying to create a bash script which will sync a directory specified as a command line parameter to a remote server (also specified by a parameter). At the moment, I'm using eval, which solves a parameter expansion problem, but for some reason causes rsync not to preserve ownership on the remote files (apart from being Evil, I know). Running the rsync command with all the same flags and parameters from the command prompt works fine.
I tried using $() as an alternative, but I got into a real mess with variable expansion and protecting the bits that need protecting for the remote rsync path (which needs both quotes and backslashes for paths with spaces).
So - I guess 2 questions - is there a reason that eval is preventing rsync from preserving ownership (the bash script is being run as root on the source machine, and sshing to the remote machine as root too - just for now)? And is there a way of getting $() to work in this scenario? The (trimmed) code is below:
#!/bin/bash
RSYNC_CMD="/usr/bin/rsync"
RSYNC_FLAGS="-az --rsh=\"/usr/bin/ssh -i \${DST_KEY}\"" # Protect ${DST_KEY} until it is assigned later
SRC=${1} # Normally this is sense checked and processed to be a canonical path
# Logic for setting DST based on command line parameter snipped for clarity - just directly assign for testing
DST='root@some.server.com:'
DST_KEY='/path/to/sshKey.rsa'
TARG=${DST}${SRC//' '/'\ '} # Escape whitespace for target system
eval ${RSYNC_CMD} ${RSYNC_FLAGS} \"${SRC}\" \"${TARG}\" # Put quotes round the paths - even though ${TARG} is already escaped
# All synced OK - but ownership not preserved despite -a flag
I've tried changing RSYNC_CMD to sudo /usr/bin/rsync, and also adding --rsync-path="sudo /usr/bin/rsync" to RSYNC_FLAGS, but neither made any difference. I just can't see what I'm missing...

The correct way to do this is to use an array; -a already implies -o, so ownership should be preserved once the quoting is handled correctly.
RSYNC_CMD="/usr/bin/rsync"
DST='root@some.server.com:'
DST_KEY='/path/to/sshKey.rsa'
RSYNC_FLAGS=(-az --rsh="/usr/bin/ssh -i ${DST_KEY}")
SRC=${1}
TARG="${DST}$SRC"
${RSYNC_CMD} "${RSYNC_FLAGS[#]}" "${SRC}" "${TARG}"
Using RSYNC_RSH instead of --rsh, you can export the variable before you set its value. This at least lets you put the export in the same area where you set the rest of the flags. Then you can defer completing its value until after you have the correct identity file.
RSYNC_CMD="/usr/bin/rsync"
export RSYNC_RSH="/usr/bin/ssh -i %s" # Use a placeholder for now; set it later
RSYNC_FLAGS=( -a -z )
# Later...
DST='root@some.server.com:'
DST_KEY='/path/to/sshKey.rsa'
RSYNC_RSH=$( printf "$RSYNC_RSH" "$DST_KEY" )
SRC=${1}
TARG="${DST}$SRC"
${RSYNC_CMD} "${RSYNC_FLAGS[#]}" "${SRC}" "${TARG}"

Related

zsh command not found error when setting an alias

I'm currently trying to move all of my aliases from .bash_profile to .zshrc. However, I've found a problem with one of the longer aliases I use, which substitutes root with ubuntu in the command I pass to access AWS instances.
AWS (){
cd /Users/user/aws_keys
cmd=$(echo $@ | sed "s/root/ubuntu/g")
$cmd[@]
}
The error I get is AWS:5: command not found ssh -i keypair.pem ubuntu@ec1.compute.amazonaws.com
I would really appreciate any suggestions!
The basic problem is that the cmd=$(echo ... line is mashing all the arguments together into a space-delimited string, and you're depending on word-splitting to split it up into a command and its arguments. But word-splitting is usually more of a problem than anything else, so zsh doesn't do it by default. This means that rather than trying to run the command named ssh with arguments -i, keypair.pem, etc, it's treating the entire string as the command name.
The simple solution is to avoid mashing the arguments together, so you don't need word-splitting to separate them out again. You can use a modifier on the parameter expansion to replace "root" with "ubuntu". BTW, I also strongly recommend checking for errors when using cd, and not proceeding if it fails.
So something like this:
AWS (){
cd /Users/user/aws_keys || return $?
"${#//root/ubuntu}"
}
This syntax will work in bash as well as zsh (the double-quotes prevent unexpected word-splitting in bash, and aren't really needed in zsh).
BTW, I'm also a bit nervous about just blindly replacing "root" with "ubuntu" in the arguments; what if it occurs somewhere other than the username, like as part of a filename or hostname?
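If that's a concern, one option is to anchor the substitution so it only rewrites a leading root@ in each argument (a sketch, assuming the username always appears at the start of an argument as root@; the /# form anchors the pattern at the start of the string in both bash and zsh):
AWS (){
cd /Users/user/aws_keys || return $?
# Only replace "root@" when it starts an argument, leaving "root" inside
# filenames or hostnames untouched.
"${@/#root@/ubuntu@}"
}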

Archive files at remote server where file name has "SPACE" and special character

I need to archive files on a remote server, appending the system time to the name, but the file name contains spaces and a special character, so the commands below are not working.
FileName="BBB ABC#textfile.xml"
ts=`date +"%m%d%Y%H%M%S"`
ssh remoteid@remoteserver "'mv /upload/hotfolders/in/"$FileName" /upload/hotfolders/Archive/${FileName}_${ts}'"
But the above command fails with the error below.
bash: mv /upload/hotfolders/in/BBB ABC#textfile.xml /upload/hotfolders/Archive/BBB ABC#textfile.xml_01282019050200: No such file or directory
In the originally provided code:
ssh remoteid@remoteserver 'cd /upload/hotfolders/; mv "$FileName" /upload/hotfolders/Archive/"${FileName}_${ts}"'
the outermost ' are used by the local shell to keep all the commands as a single argument to ssh. However, this means that $FileName, etc. are not expanded locally! Instead, the unexpanded strings are passed verbatim to the remoteserver, where a shell is started to run the command. $FileName, etc. are then expanded there. Because they are not defined there (probably), the expansion fails to produce anything useful.
In the amended version:
ssh remoteid#remoteserver "'mv /upload/hotfolders/in/"$FileName"
/upload/hotfolders/Archive/${FileName}_${ts}'"
there is a different problem. Here, the two sets of outermost " allow the local system to expand the variables (although it may not be obvious that the first $FileName is not actually inside "). However, as the command that is passed is now wrapped in ', the remote server will treat the entire string as a single word.
If we assume that FileName and ts will not contain shell-special characters (such as ') then the fix is to wrap the command sequence in " (so that it expands locally), and only wrap the variables in ' (so that the remote server treats the now-expanded strings as single words):
ssh remoteid#remoteserver "cd /upload/hotfolders/; mv '$FileName'
/upload/hotfolders/Archive/'${Filename}_${ts}'"
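If the file name might itself contain quotes or other shell-special characters, a more defensive variant is to let the local shell produce remote-safe quoting with printf %q (a bash-specific sketch):
# %q re-quotes each value so the remote shell parses it back as one word,
# even if it contains spaces, quotes or other special characters.
src_q=$(printf '%q' "/upload/hotfolders/in/${FileName}")
dst_q=$(printf '%q' "/upload/hotfolders/Archive/${FileName}_${ts}")
ssh remoteid@remoteserver "mv $src_q $dst_q"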

Rsync and quotes [duplicate]

I wrote a bash script with the following:
SRC="dist_serv:$HOME/www/"
DEST="$HOME/www/"
OPTIONS="--exclude 'file.php'"
rsync -Cavz --delete $OPTIONS $SRC $DEST
rsync fails and I can't figure out why, although it seems to be related to the $OPTIONS variable (it works when I remove it). I tried escaping the space with a backslash (among many other things) but that didn't work.
The error message is :
rsync: mkdir "/home/xxx/~/public_html/" failed: No such file or directory (2)
I tried quoting the variable, which throws another error ("unknown option" on my variable $OPTIONS):
rsync: --exclude 'xxx': unknown option
rsync error: syntax or usage error (code 1) at main.c(1422) [client=3.0.6]
You shouldn't put $ in front of the variable names when assigning values to them. SRC is a variable, $SRC is the value that it expands to.
Additionally, ~ is not expanded to the path of your home directory when you put it in quotes. It is generally better to use $HOME in scripts, as it behaves like any other variable and still expands inside double quotes, which ~ does not.
Always quote variable expansions:
rsync -Cavz --delete "$OPTIONS" "$SRC" "$DEST"
unless there is some reason not to (there very seldom is). The shell will perform word splitting on them otherwise.
User @Fred points out in the comments that you can't use double quotes around $OPTIONS as written, but it should be OK if you use the = form, OPTIONS='--exclude=file.php' (note the =; embedded quotes would otherwise end up as literal characters in the exclude pattern).
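For example, the = form keeps the option and its value together as a single word even when the expansion is quoted (this only holds while OPTIONS contains a single option):
# One option, one word: safe to quote as "$OPTIONS".
OPTIONS='--exclude=file.php'
rsync -Cavz --delete "$OPTIONS" "$SRC" "$DEST"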
One technique which I find invaluable is using positional parameters to make it easy to work with a list of options.
When you put options inside a variable (such as your OPTIONS variable), you need to find a way to include quotes inside the value, and to omit quotes when referencing the variable. It works, but you are always one typo away from a difficult-to-debug failure.
Instead, try the following.
set -- -Cavz --delete
set -- "$#" --exclude "file.php"
set -- "$#" "dist_serv:~/www/"
set -- "$#" "~/www/"
rsync "$#"
Of course, in this case, everything could be on the same line, but in many cases there will be conditional expressions so that, for instance, you can omit a given option, or select different files to work with. The nice thing is, you always use the same quoting you would use on a single command line, all thanks to the magic of "$@", which avoids having to reference (or quote) any specific variable.
If actual positional parameters get in the way, you can put them in variables, or create a function to isolate a context that avoids touching them where they matter.
I use this trick all the time, and I have stopped pulling my hair out due to quoting causing problems inside values I pass as parameter to commands.
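For instance, adding an option only when some condition holds keeps exactly the same quoting discipline (a sketch; EXCLUDE_FILE is just an illustrative variable name):
set -- -Cavz --delete
# Hypothetical conditional: only pass --exclude when EXCLUDE_FILE is set.
if [ -n "$EXCLUDE_FILE" ]; then
    set -- "$@" --exclude "$EXCLUDE_FILE"
fi
set -- "$@" "dist_serv:$HOME/www/" "$HOME/www/"
rsync "$@"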
A similar result can be achieved by using an array.
declare -a ARGUMENTS=()
ARGUMENTS=(-Cavz --delete )
ARGUMENTS+=(--exclude "file.php")
ARGUMENTS+=("dist_serv:~/www/")
ARGUMENTS+=("~/www/")
rsync "${ARGUMENTS[#]}"

Shell script: shortening or aliasing an address after a command

I want to abbreviate or set an alias to a destination address every time I use while copying files. For example,
scp <myfile> my_destination
where my_destination could be hbaromega@192.168.1.100:Documents. So I want to modify my .bash_profile by inserting something like
alias my_destination = 'hbaromega@192.168.1.100:Documents' .
But that doesn't work since my_destination is not a command.
Is there a way out?
Note: I don't want to abbreviate the whole command, but only the address, so that I can use it with other possible commands.
You can't do what you want for the reason you state (an alias is only expanded when it appears as the command name). But you could use a shell function to come close:
my_scp() {
scp "$#" hbaromega#192.168.1.100:Documents/.
}
which you could then call as
my_scp *.c
(Using $@ in double quotes is shell black magic that avoids trouble if any of the file names matched by the *.c glob contain spaces.)
Of course, if you don't want to define a function, you could always use a shell variable to at least save the retyping:
dest='hbaromega@192.168.1.100:Documents/.'
scp *.c $dest
You have a couple options. You can set hostname aliases in your ~/.ssh/config like this:
Host my_destination
Hostname 192.168.1.100
User hbaromega
You could use it like this:
$ scp myfile my_destination:Documents/
Note that you'd still have to specify the remote directory, since the ssh config only covers the user and host.
Another option would be to just put an environment variable in your ~/.bashrc:
export my_destination='hbaromega@192.168.1.100:Documents/'
Then you could use it like this:
$ scp myfile $my_destination
BertD's approach of defining a function would also work.
I think this works without export as well, since I am simply assigning a variable for the path or destination. So I can just put the following in my .bashrc or .bash_profile:
my_destination='hbaromega@192.168.1.100:Documents/'
Then
scp <myfile> $my_destination
Similarly I can execute any action (e.g. moving a file) for any local destination or directory:
local_dest='/Users/hbaromega/Documents/'
and then
mv <myfile> $local_dest
In summary, a destination address can be stored in a variable, but not in an alias, since an alias only works as the command itself.
The reason it does not work is that there are spaces surrounding the = sign. As pointed out, an alias must also appear as the first word of the command, so it wouldn't help here anyway. You are more likely to get the results you need by exporting my_destination and then referencing it with a $. In ~/.bashrc:
export my_destination='hbaromega@192.168.1.100:Documents'
Then:
scp <myfile> $my_destination
Note: you will likely need to provide a full path to Documents in the export.

GLOBIGNORE does not work with sudo command

I'm using a mini shell script in order to 'tail' (in real time) a bunch of log files.
#!/bin/sh
oldGLOBIGNORE=$GLOBIGNORE
export GLOBIGNORE='foo-bar.log'
sudo -E tail -f -n0 /var/log/*.log
GLOBIGNORE=$oldGLOBIGNORE
As you can see, I want to tail all the log files except the one named foo-bar.log.
The -E option of sudo should allow me to keep the GLOBIGNORE variable, but it looks like it does not work.
I'm testing on Ubuntu 10.04, bash 4.1.5.
Any clue?
Firstly — GLOBIGNORE relates to the full filepath resulting from filename-expansion, not just the last part. So you actually want to write GLOBIGNORE='/var/log/foo-bar.log'.
Secondly — you don't actually need to export GLOBIGNORE into the environment and add -E, because the /var/log/*.log gets expanded by Bash before it even invokes sudo.
Thirdly — your approach to saving the old value of GLOBIGNORE and restoring it afterward is less than ideal, because the behavior when GLOBIGNORE is unset is different from its behavior when it's set-but-empty, and your script can never restore it to being unset. Fortunately, the script doesn't need to restore it (since it's not as though a script's variables could continue to have effect after the script returns), so you can just remove that stuff.
All told, you can write:
#!/bin/bash
GLOBIGNORE=/var/log/foo-bar.log
sudo tail -f -n0 /var/log/*.log
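A quick way to confirm the first point is to compare the two settings interactively in bash; only the full-path pattern removes the file from the expansion (a sketch, assuming /var/log/foo-bar.log exists):
GLOBIGNORE='foo-bar.log'         ; echo /var/log/*.log   # foo-bar.log still listed
GLOBIGNORE='/var/log/foo-bar.log'; echo /var/log/*.log   # foo-bar.log filtered out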
