I have to reproduce these steps with Ansible:
Backup: sshpass -p $PASSWORD ssh $USER@$IP save_configuration -p $PASSWORD > $FILE
Restore: cat $FILE | sshpass -p $PASSWORD ssh $USER@$IP restore_configuration -p $PASSWORD
For the 1st one I can simply store the dump. But for the 2nd one, is there any way to get rid of that "cat" in a fancy way?
Thank you!
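In plain shell the cat is unnecessary: redirecting the file into ssh's stdin does the same job, and an Ansible shell task could run the same line. A minimal sketch, reusing the variables from the question:
# feed the saved dump to the remote restore command via stdin redirection
sshpass -p "$PASSWORD" ssh "$USER@$IP" restore_configuration -p "$PASSWORD" < "$FILE"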
I have tried the following steps to set up passwordless SSH login (SSH key pair authentication).
Set ip and port in bash.
ip="xxxx"
port="xxxx"
Set the ssh config file on the client side
cat > $HOME/.ssh/config <<EOF
Host $ip
IdentityFile $HOME/.ssh/id_rsa
User root
EOF
Create an ssh key pair on the client side
ssh-keygen -t rsa -f $HOME/.ssh/id_rsa -q -b 2048 -N ""
Push id_rsa.pub to the ssh server from the client side.
Prepare the ssh server
ssh -p $port root#$ip "mkdir -p /root/.ssh"
Push the authorized_keys file to the ssh server
scp -P $port id_rsa.pub root@$ip:/root/.ssh/authorized_keys
Set permissions on the authorized_keys file
ssh -p $port root#$ip "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
Succeeded!
Now I want to write all the steps into a one-click bash script to do the job.
Here is my attempt.
#! /bin/bash
ip="xxxx"
port="xxxx"
pass="yyyy"
cat > $HOME/.ssh/config <<EOF
Host $ip
IdentityFile $HOME/.ssh/id_rsa.bwg_root
User root
EOF
ssh-keygen -t rsa -f $HOME/.ssh/id_rsa.bwg_root -q -b 2048 -N ""
cd $HOME/.ssh
/usr/bin/expect <<EOF
spawn ssh -p $port root#$ip "mkdir -p /root/.ssh"
expect "password:"
send "$pass\r"
spawn scp -P $port id_rsa.pub root@$ip:/root/.ssh/authorized_keys
expect "password:"
send "$pass\r"
spawn ssh -p $port root#$ip "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
expect "password:"
send "$pass\r"
EOF
It got the following output info:
spawn ssh -p xxxx root@yyyy mkdir -p /root/.ssh
root@yyyy's password: spawn scp -P xxxx id_rsa.bwg.pub root@yyyy:/root/.ssh/authorized_keys
root@yyyy's password: spawn ssh -p xxxx root@yyyy chmod 700 .ssh; chmod 640 .ssh/authorized_keys
Why does this happen, and how do I fix it?
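For reference, the likely cause: expect's spawn returns immediately, so after each send the script spawns the next command without waiting for the previous one to finish, which is why the password prompts and spawn lines interleave. Adding expect eof after each send, so the script blocks until the remote command completes, should fix it. An untested sketch of the corrected block:
/usr/bin/expect <<EOF
spawn ssh -p $port root@$ip "mkdir -p /root/.ssh"
expect "password:"
send "$pass\r"
expect eof
spawn scp -P $port id_rsa.pub root@$ip:/root/.ssh/authorized_keys
expect "password:"
send "$pass\r"
expect eof
spawn ssh -p $port root@$ip "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
expect "password:"
send "$pass\r"
expect eof
EOF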
I'd simplify it with sshpass.
#!/bin/bash
ip="x.x.x.x"
port="xx"
export SSHPASS="yyy"
cat >$HOME/.ssh/config <<EOF
Host $ip
IdentityFile $HOME/.ssh/id_rsa.bwg_root
User root
EOF
ssh-keygen -t rsa -f "$HOME/.ssh/id_rsa.bwg_root" -q -b 2048 -N ""
cd "$HOME/.ssh" || exit 1
sshpass -e ssh -oStrictHostKeyChecking=no -p "$port" "root@$ip" "mkdir -p -m 700 /root/.ssh"
sshpass -e scp -oStrictHostKeyChecking=no -P "$port" id_rsa.bwg_root.pub "root@$ip:/root/.ssh/authorized_keys"
sshpass -e ssh -oStrictHostKeyChecking=no -p "$port" "root@$ip" "chmod 640 .ssh/authorized_keys"
Btw: I replaced the last id_rsa.pub with id_rsa.bwg_root.pub, added -m 700 to mkdir, and removed chmod 700 .ssh.
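To verify that key-based login now works, a quick check (sketch) is:
# BatchMode=yes makes ssh fail instead of falling back to a password prompt
ssh -i "$HOME/.ssh/id_rsa.bwg_root" -p "$port" -oBatchMode=yes "root@$ip" true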
Use ssh-copy-id to push the new key to the remote host. You'll need to enter the password for that login, of course, but it's the last time you'll have to use it.
#!/bin/bash
ip="x.x.x.x"
port="xx"
id_file=$HOME/.ssh/id_rsa_$ip
cat > $HOME/.ssh/config <<EOF
Host $ip
IdentityFile $id_file
User root
EOF
ssh-keygen -t rsa -f "$HOME/.ssh/id_rsa_$ip" -q -b 2048 -N ""
ssh-copy-id -i "$id_file" -p "$port" "root@$ip"
As a general rule, always look for a non- (or less) interactive solution using existing tools before trying expect.
I have written this script:
#!/bin/bash
SSH_USER=${SSH_USER:=$USER}
for department in A B C E L M V
do
mkdir -p ./resources/${department}
rsync -Pruzh --copy-links \
${SSH_USER}@server:${department}/foo/files \
${SSH_USER}@server:${department}/foo/photos \
./resources/${department}/foo
rsync -Pruzh \
${SSH_USER}@server:${department}/bar/documents \
./resources/${department}/bar
done
It works perfectly, except that I have to type my password 14 times, which is not really practical.
I have heard of ssh-agent, but for some reason it does not work on my WSL.
Is there any alternative that I can use to type my password only once?
If you are using openssh, then you can set up a master connection and reuse it with something like:
DEST="${SSH_USER}#server"
TMPL=/tmp/sshctl/"%L-%r@%h:%p"
mkdir -p /tmp/sshctl
if ! ssh -nNf -o ControlMaster=yes -o ControlPath="${TMPL}" "${DEST}"; then
echo "# Failed to setup SSH ControlMaster. Aborting."
exit 1
fi
# ...
rsync -e "ssh -o 'ControlPath=${TMPL}'" ... "${DEST}":... ...
rsync -e "ssh -o 'ControlPath=${TMPL}'" ... "${DEST}":... ...
# ...
ssh -O exit -o ControlPath="${TMPL}" "${DEST}"
Be sure to secure the socket.
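If you would rather not manage the master connection by hand, the same effect can be configured once in ~/.ssh/config via ControlPersist, so every ssh and rsync invocation shares one authenticated connection automatically. A sketch (the Host value is assumed):
Host server
    ControlMaster auto
    ControlPath /tmp/sshctl/%L-%r@%h:%p
    ControlPersist 600
The /tmp/sshctl directory must exist and should be private, e.g. mkdir -m 700 /tmp/sshctl.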
Best practice would be to set up SSH key pairs for automated authentication; i.e. create an SSH key pair and copy the public key to the server where these files are located, then use the private key in the rsync command: rsync -Pruzh --copy-links -e "ssh -i /path/to/private.key" .... This is fairly simple, secure, and gets rid of the prompt.
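For instance, the one-time setup might look like this (a sketch; the key path is arbitrary):
# generate a dedicated key pair and install the public half on the server
ssh-keygen -t ed25519 -f ~/.ssh/id_rsync -N ""
ssh-copy-id -i ~/.ssh/id_rsync.pub "${SSH_USER}@server"
# after that, each rsync runs without a prompt
rsync -Pruzh --copy-links -e "ssh -i ~/.ssh/id_rsync" "${SSH_USER}@server:A/foo/files" ./resources/A/foo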
You can also use a utility like sshpass to enter the password in the prompt, but that kind of approach is less secure.
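For completeness, sshpass can wrap the rsync invocation directly; a sketch, with the password taken from the environment so it stays out of the process list:
export SSHPASS='yourpassword'
sshpass -e rsync -Pruzh "${SSH_USER}@server:A/bar/documents" ./resources/A/bar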
I'm writing a script whose purpose is to connect to a number of servers and create an account. The core is:
ssh user@ip
sudo su -
useradd -m -p 123 $1
if [ $? -eq 0 ]; then
echo "$1 successfully created on ip."
fi
chage -d 0 $1
chown -R $1 /home/$1
exit #exit root
exit #exit the server
I have established a private/public key relationship between the servers so that the ssh works without a password prompt. However, when I run the script it performs the ssh but then does not run the next commands on the target machine. Instead, after manually exiting from the target server, I see that those commands were executed (or rather, attempted) on the local machine.
So there should be no password prompt when running both the ssh and the sudo commands:
ssh user@ip bash -c "'
sudo su -
useradd -m -p 123 $1
if [ \$? -eq 0 ]; then
echo \"$1 successfully created on ip.\"
fi
chage -d 0 $1
chown -R $1 /home/$1
exit #exit root
exit #exit the server
'"
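A quoted heredoc sidesteps the nested-quoting problem entirely: the script body travels over stdin unexpanded, and the username is passed as a positional argument. A sketch, assuming passwordless sudo on the remote host and no spaces in $1:
ssh user@ip 'bash -s' -- "$1" <<'REMOTE'
newuser=$1
sudo useradd -m -p 123 "$newuser" || exit 1
echo "$newuser successfully created."
sudo chage -d 0 "$newuser"
sudo chown -R "$newuser" "/home/$newuser"
REMOTE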
If you are planning to sudo, why don't you just ssh as root (root@ip)? Just do:
ssh root@ip 'command1; command2; command3'
In your case, if you want to be sure they are all successful before proceeding:
ssh root@ip 'USER=someUser; useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER'
EDIT:
If root access is not allowed, I would do the following:
Create the script with the commands you want to execute on the remote machine, for instance script.sh:
#!/bin/bash
USER=someUser
useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER
Copy the script to the remote machine:
scp script.sh user@ip:/destination/dir
Invoke it remotely:
ssh user@ip 'sudo /destination/dir/script.sh'
EDIT2:
Other option without creating any files:
ssh user#ip "sudo bash -c 'USER=someUser && useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER'"
It won't work this way. You should do it like:
ssh user@ip 'yourcommands ; listed ; etc.' or
copy the script you want to execute on the servers via scp /your/scriptname user@ip:/tmp/, then execute it with ssh user@ip 'sh /tmp/yourscriptname'
But you are starting another shell when you start sudo.
Now you have (at least) two options:
ssh user#ip 'sudo -s -- "yourcommands ; listed ; etc."' or
copy the part after the sudo to a different script, then:
ssh user#ip 'sudo -s -- "sh differentscript"'`
I am trying to run an sshpass command inside a bash script, but it isn't working.
If I run the same command from the terminal it works fine, but running it from a bash script it doesn't:
#! /bin/bash
sshpass -p 'password' ssh user@host command
I am aware of the security issues, but that's not important right now.
Can someone help? Am I missing something?
Thanks
Try the "-o StrictHostKeyChecking=no" option to ssh ("-o" being the flag that tells ssh you are going to pass an option). This accepts the remote host's key automatically, even if the key is not yet in the "known hosts" list.
sshpass -p 'password' ssh -o StrictHostKeyChecking=no user@host 'command'
Do which sshpass in your command line to get the absolute path to sshpass, and replace it in the bash script.
You should probably also do the same with the command you are trying to run.
The problem might be that the script is not finding it.
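For example, a sketch with hypothetical absolute paths (use whatever which printed on your machine):
#!/bin/bash
/usr/bin/sshpass -p 'password' ssh user@host command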
1 - You can script sshpass's ssh command like this:
#!/bin/bash
export SSHPASS=password
sshpass -e ssh -oBatchMode=no user@host
2 - You can script sshpass's sftp command like this:
#!/bin/bash
export SSHPASS=password
sshpass -e sftp -oBatchMode=no -b - user@host << !
put someFile
get anotherFile
bye
!
I didn't understand how the accepted answer answers the actual question: how to run commands on the server from within the bash script once sshpass has authenticated. For that reason, I'm providing an answer.
After the connection part of your script, execute additional commands like below:
sshpass -p 'password' ssh user#host "ls; whois google.com;" #or whichever commands you would like to use, for multiple commands provide a semicolon ; after the command
In your script:
#! /bin/bash
sshpass -p 'password' ssh user#host "ls; whois google.com;"
This worked for me:
#!/bin/bash
#Variables
FILELOCAL=/var/www/folder/$(date +'%Y%m%d_%H-%M-%S').csv
SFTPHOSTNAME="myHost.com"
SFTPUSERNAME="myUser"
SFTPPASSWORD="myPass"
FOLDER="myFolderIfNeeded"
FILEREMOTE="fileNameRemote"
#SFTP CONNECTION
sshpass -p $SFTPPASSWORD sftp $SFTPUSERNAME@$SFTPHOSTNAME << !
cd $FOLDER
get $FILEREMOTE $FILELOCAL
ls
bye
!
Probably you have to install sshpass:
sudo apt-get install sshpass
I have two lines that need to run repeatedly in a for loop:
ssh tam@192.168.174.43 mkdir -p $location
scp -r $i tam@192.168.174.43:$location
but each time they prompt for a password. How can I change the code so I only need to enter it once, or do this in a faster way?
You can use the public/private key method, generating the keys with ssh-keygen (https://help.ubuntu.com/community/SSH/OpenSSH/Keys),
and then use the below script.
for VARIABLE in dir1 dir2 dir3
do
ssh tam@192.168.174.43 mkdir -p $location
scp -r $i tam@192.168.174.43:$location
done
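The one-time key setup referenced above could look like this (sketch):
# generate a default key pair and push the public key to the server
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa
ssh-copy-id tam@192.168.174.43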
Alternative solution:
You can use sshpass
for VARIABLE in dir1 dir2 dir3
do
sshpass -p '<password>' ssh tam@192.168.174.43 mkdir -p $location
sshpass -p '<password>' scp -r $i tam@192.168.174.43:$location
done
While public/private keys are the easiest option, and require no change to the existing script, there is another option: sshfs. sshfs may not be installed by default.
With this approach, you basically mount the remote file system to a local directory over the ssh protocol. Then you can simply use commands like mkdir/cp etc.
NOTE: These commands run on YOUR system, not on the REMOTE system.
Mounting over ssh is a one-time job which will require your manual intervention. Do this before running the script, e.g. for your case:
mkdir /tmp/tam_192.168.174.43
sshfs tam@192.168.174.43:/ /tmp/tam_192.168.174.43
tam@192.168.174.43's password: <ENTER PASSWORD HERE>
& then, in your script, use simple commands like:
mkdir -p /tmp/tam_192.168.174.43/$location
cp -r $i /tmp/tam_192.168.174.43/$location
& to unmount:
fusermount -u /tmp/tam_192.168.174.43
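If the script should fail fast when the mount is missing, a small guard at the top helps (sketch):
mountpoint -q /tmp/tam_192.168.174.43 || { echo "sshfs mount missing" >&2; exit 1; }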