I have a script that transfers files. Every time I run it, it needs to connect to a different host, so I'm passing the host as a parameter.
The script is executed as: ./transfer.sh <hostname>
#!/bin/bash -evx
SSH="ssh \
-o UseRoaming=no \
-o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
-i ~/.ssh/privateKey.pem \
-l ec2-user \
${1}"
files=(
file1
file2
)
files="${files[#]}"
# this works
$SSH
# this does not work
rsync -avzh --stats --progress $files -e $SSH:/home/ec2-user/
# also this does not work
rsync -avzh --stats --progress $files -e $SSH ec2-user@$1:/home/ec2-user/
I can connect properly with the ssh connection stored in $SSH, but the rsync connection attempt fails because of the wrong key:
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.2]
What would be the correct syntax for the rsync connection?
Put set -x before the rsync line and watch how the arguments are expanded. I believe it will be wrong.
You need to enclose the ssh command and its arguments (without the hostname) in quotes; otherwise the arguments are passed to the rsync command itself rather than to ssh.
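A minimal sketch of what that looks like, reusing the key and user from the question (the hostname stays a separate argument):
SSH="ssh -i ~/.ssh/privateKey.pem -l ec2-user"   # options only, no hostname
rsync -avzh -e "$SSH" file1 file2 "$1":/home/ec2-user/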
My solution after Jakuje pointed me in the right direction:
#!/bin/bash -evx
host=$1
SSH="ssh \
-o UseRoaming=no \
-o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
-i ~/.ssh/privateKey.pem \
-l ec2-user"
files=(
file1
file2
)
files="${files[#]}"
# transfer all in one rsync connection
rsync -avzh --stats --progress $files -e "$SSH" $host:/home/ec2-user/
# launch setup script
$SSH $host ./setup.sh
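As an aside, keeping files as an array (dropping the files="${files[@]}" flattening) and expanding it as "${files[@]}" would also survive filenames containing spaces:
rsync -avzh --stats --progress "${files[@]}" -e "$SSH" "$host":/home/ec2-user/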
Team, I have two steps to perform:
SCP a shell script file to a remote Ubuntu Linux machine.
Execute the uploaded file on that machine over an SSH session, using ProxyCommand because I have a bastion server in front.
Code:
scp -i /home/dtlu/.ssh/key.key -o "ProxyCommand ssh -i /home/dtlu/.ssh/key.key lab@api.dev.test.com -W %h:%p" /home/dtlu/backup/test.sh lab@$k8s_node_ip:/tmp/
ssh -o StrictHostKeyChecking=no -i /home/dtlu/.ssh/key.key -o 'ProxyCommand ssh -i /home/dtlu/.ssh/key.key -W %h:%p lab@api.dev.test.com' lab@$k8s_node_ip "uname -a; date;echo "Dummy123!" | sudo -S bash -c 'echo 127.0.1.1 \`hostname\` >> /etc/hosts'; cd /tmp; pwd; systemctl status cachefilesd | grep Active; ls -ltr /tmp/test.sh; echo "Dummy123!" | sudo -Sv && bash -s < test.sh"
Both calls above work fine: I am able to upload test.sh and it runs. What bothers me is the weird output being thrown out during the process.
output:
/tmp. <<< expected
[sudo] password for lab: Showing one
Sent message type=method_call sender=n/a destination=org.freedesktop.DBus object=/org/freedesktop/DBus interface=org.freedesktop.DBus member=Hello cookie=1 reply_cookie=0 error=n/a
Root directory /run/log/journal added.
Considering /run/log/journal/df22e14b1f83428292fe17f518feaebb.
Directory /run/log/journal/df22e14b1f83428292fe17f518feaebb added.
File /run/log/journal/df22e14b1f83428292fe17f518feaebb/system.journal added.
So I don't want /run/log/journal and the other lines that don't correspond to the commands in my script.
Consider adding -q to the scp and ssh commands to reduce the output they might produce. You can also redirect stderr and stdout to /dev/null as appropriate.
For example:
{
scp -q -i /home/dtlu/.ssh/key.key -o "ProxyCommand ssh -i /home/dtlu/.ssh/key.key lab@api.dev.test.com -W %h:%p" /home/dtlu/backup/test.sh lab@$k8s_node_ip:/tmp/
ssh -q -o StrictHostKeyChecking=no -i /home/dtlu/.ssh/key.key -o 'ProxyCommand ssh -i /home/dtlu/.ssh/key.key -W %h:%p lab@api.dev.test.com' lab@$k8s_node_ip "uname -a; date;echo "Dummy123!" | sudo -S bash -c 'echo 127.0.1.1 \`hostname\` >> /etc/hosts'; cd /tmp; pwd; systemctl status cachefilesd | grep Active; ls -ltr /tmp/test.sh; echo "Dummy123!" | sudo -Sv && bash -s < test.sh"
} >&/dev/null
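If you would rather keep error messages visible while silencing normal output, redirect only stdout:
} >/dev/null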
I have written this script:
#!/bin/bash
SSH_USER=${SSH_USER:=$USER}
for department in A B C E L M V
do
mkdir -p ./resources/${department}
rsync -Pruzh --copy-links \
${SSH_USER}@server:${department}/foo/files \
${SSH_USER}@server:${department}/foo/photos \
./resources/${department}/foo
rsync -Pruzh \
${SSH_USER}@server:${department}/bar/documents \
./resources/${department}/bar
done
It works perfectly, except that I have to type my password 14 times, which is not really practical.
I have heard of ssh-agent, but for some reason it does not work on my WSL.
Is there an alternative that lets me type my password only once?
If you are using OpenSSH, then you can set up a master connection and reuse it with something like:
DEST="${SSH_USER}@server"
TMPL=/tmp/sshctl/"%L-%r@%h:%p"
mkdir -p /tmp/sshctl
if ! ssh -nNf -o ControlMaster=yes -o ControlPath="${TMPL}" "${DEST}"; then
echo "# Failed to setup SSH ControlMaster. Aborting."
exit 1
fi
# ...
rsync -e "ssh -o 'ControlPath=${TMPL}'" ... "${DEST}":... ...
rsync -e "ssh -o 'ControlPath=${TMPL}'" ... "${DEST}":... ...
# ...
ssh -O exit -o ControlPath="${TMPL}" "${DEST}"
Be sure to secure the socket.
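For instance, assuming the /tmp/sshctl directory from the snippet above, you could restrict it to your own user before the master connection is created:
chmod 700 /tmp/sshctl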
Best practice would be to set up SSH key pairs for automated authentication; i.e. create an SSH key pair and copy the public key to the server where these files are located, then use the private key in the rsync command: rsync -Pruzh --copy-links -e "ssh -i /path/to/private.key" .... This is fairly simple, secure, and gets rid of the prompt.
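A possible setup sketch, assuming ssh-keygen and ssh-copy-id are available (the key path here is illustrative):
# generate a key pair; an empty passphrase suits unattended use
ssh-keygen -t ed25519 -f ~/.ssh/backup_key -N ""
# install the public key on the server (asks for your password one last time)
ssh-copy-id -i ~/.ssh/backup_key.pub "${SSH_USER}@server"
# afterwards rsync authenticates with the key
rsync -Pruzh --copy-links -e "ssh -i ~/.ssh/backup_key" "${SSH_USER}@server:A/foo/files" ./resources/A/foo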
You can also use a utility like sshpass to enter the password in the prompt, but that kind of approach is less secure.
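For completeness, a hypothetical sshpass invocation (the password file must be readable only by you):
sshpass -f ~/.rsync_pass rsync -Pruzh "${SSH_USER}@server:A/bar/documents" ./resources/A/bar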
I am trying to get data from a file on a remote host and write it to a log file locally using SSH. The log file tmp_results.log is not being created. Any ideas where I'm going wrong, please?
( ssh -nq -o StrictHostKeyChecking=no \
-i $PEM_PATH/$PEM_FILE $USER@${host} -p $REMOTE_PORT \
tail -n 6 $REMOTE_HOME/data/result.jtl | >> $SCRIPT_DIR/$project/tmp_results.log)
You seem a little bit confused by pipes and file-descriptor redirection.
This writes the output to your logfile, overwriting it each time:
ssh -nq -o StrictHostKeyChecking=no \
-i $PEM_PATH/$PEM_FILE $USER@${host} -p $REMOTE_PORT \
tail -n 6 $REMOTE_HOME/data/result.jtl > $SCRIPT_DIR/$project/tmp_results.log
If you want to append the output to an existing file instead, use:
ssh -nq -o StrictHostKeyChecking=no \
-i $PEM_PATH/$PEM_FILE $USER@${host} -p $REMOTE_PORT \
tail -n 6 $REMOTE_HOME/data/result.jtl >> $SCRIPT_DIR/$project/tmp_results.log
I have a bash script that uses rsync to back up some files from my local desktop to a remote machine on my LAN.
The main script's customisable variables live in a separate .sh file, to make for easy maintenance, deployment and git management.
So I have this dir structure
sync-backup-to-cp.sh
config/settings.sh
And the following code includes config/settings.sh in the main sync-backup-to-cp.sh:
#! /bin/bash
#load variables file
source /Users/enwhat/Dropbox/Flex/Scripts/mac/rysnc-backup-to-cp/config/settings.sh
However, the imported variables aren't behaving as expected. If there is a space in any of the variables, the script throws an error about the arguments being invalid; it seems that bash is interpreting the value oddly.
I.e. rsync_opts="--verbose --archive" will cause the script to break with an error such as "invalid numeric arguments or unknown arguments supplied", whereas rsync_opts="--verbose" runs perfectly.
To help illustrate, here are some snippets of the code showing the flow, with the help received so far.
from: config/settings.sh
RSYNC_OPTS=( --bwlimit=1000 --verbose )
From my main script, there's a function call where these variables are passed in:
backup "$RSYNC_BIN" "$BACKUP_FILE_LIST" "$EXCLUDE_FILE_LIST" "$SSH_PORT" "$SSH_KEY" "$SOURCE" "$DESTINATION" "$RSYNC_OPTS[*]"
Then the full function:
function backup(){ #uses rsync to backup to server
#takes 8 args
#define local vars
local l_rsync_bin=$1
local l_rsync_backup_file_list=$2
local l_rsync_exclude_file_list=$3
local l_rsync_ssh_port=$4
local l_rsync_ssh_key=$5
local l_rsync_source=$6
local l_rsync_dest=$7
local l_rsync_opts=$8
#local l_time
#l_time=$(date)
#caffeinate stops the system from sleeping
echo ""$l_rsync_bin" "$l_rsync_opts" --verbose --archive --recursive --numeric-ids --human-readable --partial --progress --relative --itemize-changes --stats --rsync-path="sudo rsync" --delete-during --files-from="${l_rsync_backup_file_list}" --exclude-from="${l_rsync_exclude_file_list}" -e "ssh -q -p ${l_rsync_ssh_port} -i ${l_rsync_ssh_key}" "${l_rsync_source}" "${l_rsync_dest}""
caffeinate -s "$l_rsync_bin" "$l_rsync_opts" --verbose --archive --recursive --numeric-ids --human-readable --partial --progress --relative --itemize-changes --stats --rsync-path="sudo rsync" --delete-during --files-from="${l_rsync_backup_file_list}" --exclude-from="${l_rsync_exclude_file_list}" -e "ssh -q -p ${l_rsync_ssh_port} -i ${l_rsync_ssh_key}" "${l_rsync_source}" "${l_rsync_dest}"
}
Since you are quoting $rsync_opts, the entire value is passed as a single, whitespace-containing argument to rsync. In order for each option to be passed as a separate argument, you need to leave the parameter expansion unquoted:
rsync $rsync_opts
However, you can't include arguments that actually contain whitespace like this; all whitespace is treated by the shell as separating arguments. The right way to store arguments is to use an array:
rsync_opts=( --verbose --archive )
rsync "${rsync_opts[#]}"
It may not be necessary for your current use case, but it's a good idea to get into the habit of doing things the right way to avoid nasty surprises later.
For example,
local -a l_rsync_opts
l_rsync_opts=(--bwlimit=1000 --verbose --rsync-path="sudo rsync")
UPDATE: Based on your edit, you need to do the following:
backup ... "${RSYNC_OPTS[@]}" # @, not *
# Note the changes involving l_rsync_opts
function backup(){ #uses rsync to backup to server
#takes 8 args
#define local vars
local l_rsync_bin=$1
local l_rsync_backup_file_list=$2
local l_rsync_exclude_file_list=$3
local l_rsync_ssh_port=$4
local l_rsync_ssh_key=$5
local l_rsync_source=$6
local l_rsync_dest=$7
local l_rsync_opts=( "${@:8}" )
#local l_time
#l_time=$(date)
#caffeinate stops the system from sleeping
echo ""$l_rsync_bin" "${l_rsync_opts[@]}" --verbose --archive --recursive --numeric-ids --human-readable --partial --progress --relative --itemize-changes --stats --rsync-path="sudo rsync" --delete-during --files-from="${l_rsync_backup_file_list}" --exclude-from="${l_rsync_exclude_file_list}" -e "ssh -q -p ${l_rsync_ssh_port} -i ${l_rsync_ssh_key}" "${l_rsync_source}" "${l_rsync_dest}""
caffeinate -s "$l_rsync_bin" "${l_rsync_opts[@]}" --verbose --archive --recursive --numeric-ids --human-readable --partial --progress --relative --itemize-changes --stats --rsync-path="sudo rsync" --delete-during --files-from="${l_rsync_backup_file_list}" --exclude-from="${l_rsync_exclude_file_list}" -e "ssh -q -p ${l_rsync_ssh_port} -i ${l_rsync_ssh_key}" "${l_rsync_source}" "${l_rsync_dest}"
}
I have a script that tries to mirror a specific directory from a local server to a remote one. It looks like this:
inotifywait -mr --format '%w%f' -e close_write -e moved_to -e delete /mydir | \
while read FILECHANGE
do
if [ -f $FILECHANGE ]
then
rsync --bwlimit=4096 --progress --relative -vrae 'ssh -p 22' $FILECHANGE $REMOTEHOST:/
else
ssh -p 22 $REMOTEHOST "rm $FILECHANGE"
fi
done
If several files are created at once, for example with a touch command:
touch 1 2 3
The three files are transferred correctly.
But if I delete several files at once:
rm -f 1 2 3
Only the first one is deleted.
If I replace the ssh command with just echo $FILECHANGE, all three files are displayed in the console. So the problem seems to come from the ssh command, but I can't explain why or how to solve it.
Does anyone have an idea?
Well, I found the issue: the ssh command was eating the output of the inotifywait command when it ran. To prevent that, I added the 0<&- redirection after the ssh command, to close its stdin.
inotifywait -mr --format '%w%f' -e close_write -e moved_to -e delete /mydir | \
while read FILECHANGE
do
if [ -f $FILECHANGE ]
then
rsync --bwlimit=4096 --progress --relative -vrae 'ssh -p 22' $FILECHANGE $REMOTEHOST:/
else
ssh -p 22 $REMOTEHOST "rm $FILECHANGE" 0<&-
fi
done
Now it works.
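For what it's worth, ssh's own -n flag should achieve the same thing, since it redirects stdin from /dev/null:
ssh -n -p 22 $REMOTEHOST "rm $FILECHANGE"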