I am trying to write a deployment script which, after copying the new release to the server, should perform a few sudo commands on the remote machine.
#!/bin/bash
app=$1
echo "Deploying application $app"
echo "Copy file to server"
scp -pr $app-0.1-SNAPSHOT-jar-with-dependencies.jar nuc:/tmp/
echo "Execute deployment script"
ssh -tt stefan@nuc ARG1=$app 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo Hello world
echo $ARG1
sudo ifconfig
exit
ENDSSH
The file gets copied correctly and the passed argument is printed as well. But the password prompt shows for two seconds, then it says "Sorry, try again", and the second prompt shows what I type in plain text (not masked) and still fails even when I enter the password correctly.
stefan@X220:~$ ./deploy.sh photos
Deploying application photos
Copy file to server
photos-0.1-SNAPSHOT-jar-with-dependencies.jar 100% 14MB 75.0MB/s 00:00
Execute deployment script
# commands to run on remote host
echo Hello world
echo $ARG1
sudo ifconfig
exit
stefan@nuc:~$ # commands to run on remote host
stefan@nuc:~$ echo Hello world
Hello world
stefan@nuc:~$ echo $ARG1
photos
stefan@nuc:~$ sudo ifconfig
[sudo] password for stefan:
Sorry, try again.
[sudo] password for stefan: ksdlgfdkgdfg
I tried leaving out the -t flags for ssh as well as using -S for sudo which did not help. Any help is highly appreciated.
What I would do:
ssh stefan@nuc bash -s foobar <<'EOF'
echo "arg1 is $1"
echo "$HOSTNAME"
ifconfig
exit
EOF
Tested, works well.
Notes:
for the trick to work, use an ssh key pair instead of a password; it's also more secure
mind where the bash -s argument goes; check how I pass it
no need for -tt at all
no need for sudo to run ifconfig, and ip a is the better choice anyway
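Put back into the deploy script from the question, the same pattern would look something like this. This is only a sketch: it assumes key-based authentication to nuc and that the remote steps no longer need sudo.
#!/bin/bash
app=$1
echo "Deploying application $app"
echo "Copy file to server"
scp -p "$app-0.1-SNAPSHOT-jar-with-dependencies.jar" nuc:/tmp/
echo "Execute deployment script"
ssh stefan@nuc bash -s "$app" <<'ENDSSH'
# commands to run on remote host; $1 is the application name passed to bash -s
echo "Deploying $1 on $HOSTNAME"
ip a
ENDSSH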
I came up with another solution: Create another file with the script to execute on the remote server. Then copy it using scp and in the calling script do a
ssh -t remoteserver sudo /tmp/deploy_remote.sh parameter1
This works as expected. Of course the separate file is not the most elegant solution, but -t and -tt did not work when inlining the script to execute on the remote machine.
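A fleshed-out sketch of that calling script (deploy_remote.sh is a placeholder name for the separate file holding the remote sudo commands):
#!/bin/bash
app=$1
echo "Deploying application $app"
scp -p "$app-0.1-SNAPSHOT-jar-with-dependencies.jar" deploy_remote.sh nuc:/tmp/
# -t gives ssh a terminal, so sudo on the remote side can prompt for the password
ssh -t nuc sudo /tmp/deploy_remote.sh "$app"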
Related
I'm getting incredibly frustrated here. I simply want to run a sudo command over a remote SSH connection and operate on the results locally in my script. I've looked around for close to an hour now and not seen anything related to this issue.
When I do:
#!/usr/bin/env bash
OUT=$(ssh username@host "command" 2>&1 )
echo $OUT
Then, I get the expected output in OUT.
Now, when I try to do a sudo command:
#!/usr/bin/env bash
OUT=$(ssh username@host "sudo command" 2>&1 )
echo $OUT
I get "sudo: no tty present and no askpass program specified". Fair enough, I'll use ssh -t.
#!/usr/bin/env bash
OUT=$(ssh -t username@host "sudo command" 2>&1 )
echo $OUT
Then, nothing happens. It hangs, never asking for the sudo password in my terminal. Note that this happens whether I send a sudo command or not: the ssh -t hangs, period.
Alright, let's forget the variable for now and just issue the ssh -t command.
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1
Then, well, it works no problem.
So the issue is that ssh -t inside a variable just doesn't do anything, but I can't figure out why or how to make it work for the life of me. Anyone with a suggestion?
If your script is rather concise, you could consider this:
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1 \
| ( \
read output
# do something with $output, e.g.
echo "$output"
)
For more information, consider this: https://stackoverflow.com/a/15170225/10470287
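Note that read only consumes the first line of the remote output. If the command prints several lines, the same structure works with a while loop (a sketch; username@host and command are placeholders as above), and in both variants any variables set inside the subshell are gone once the pipeline ends:
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1 \
| while IFS= read -r line; do
    # do something with each line, e.g.
    echo "$line"
done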
I have a script which starts an ssh connection.
The variable $SSH starts the ssh connection,
so $SSH hostname gives the hostname of the host I ssh to.
Now I try to echo something and write the output of the echo to a file.
SSH="ssh -tt -i key.pem user#ec2-instance"
When I perform a manual ssh to the host and perform:
sudo sh -c "echo 'DEVS=/dev/xvdbb' >> /etc/sysconfig/docker-storage-setup"
it works.
But when I perform
${SSH} sudo sh -c "echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup"
it does not seem to work.
EDIT:
Using tee also works fine after a manual ssh, but does not work from the ssh in script.sh.
The echo after the ssh in the script happens on my real host (the one I run the script from), not on the host I ssh to. So the file on my real host is being changed instead of the file on the remote host.
The command passed to ssh will be executed by the remote shell, so you need to add one level of quoting:
${SSH} "sudo sh -c \"echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup\""
The only thing you really need on the server is the writing though, so if you don't have password prompts and such you can get rid of some of this nesting:
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee /etc/sysconfig/docker-storage-setup'
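A quick way to see the extra quoting level in action, using the same $SSH variable (illustrative only):
# $HOSTNAME is expanded by the local shell before ssh runs, so this prints the local hostname
${SSH} echo "$HOSTNAME"
# the single quotes survive the local shell, so the remote shell expands the variable instead
${SSH} 'echo $HOSTNAME'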
I have been trying to automatically enter an ssh connection using a script. This previous SO post has helped me so far. Using one connection works (the first ssh statement). However, I want to create another ssh connection once connected, which I thought could look like this:
#! /bin/bash
# My ssh script
sshpass -p "MY_PASSWORD1" ssh -o StrictHostKeyChecking=no *my_hostname_1*
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
When running the script, I only get connected to my_hostname_1, and the second ssh command is not run until I exit the first ssh connection.
I've tried using an if statement like this:
if [ "$HOSTNAME" = my_host_name_1 ]; then
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
fi
but I can't get any commands to be read until I exit the first connection.
Here is a ProxyCommand example as suggested by @lihao:
#!/bin/bash
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no \
-o ProxyCommand='sshpass -p "MY_PASSWORD1" ssh m_hostname_1 netcat -w 1 %h %p' \
my_hostname_2
You are proxying through the first host to get to the second. This assumes you have netcat installed on my_hostname_1, since that is where the ProxyCommand runs. If not, you'll need to install it.
You can also set this up in your ~/.ssh/config file so you don't need the proxy stuff on the command line:
Host my_hostname_1
    HostName my_hostname_1
Host my_hostname_2
    HostName my_hostname_2
    ProxyCommand ssh my_hostname_1 netcat -w 1 %h %p
However, this is a little trickier with the password handling. While you could put sshpass here too, it's not a great idea to keep passwords in plain text. Using key-based authentication might be better.
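If you go the key-based route, a minimal sketch (assuming OpenSSH; user is a placeholder) would be:
# generate a key pair once (skip if you already have one)
ssh-keygen -t ed25519
# install the public key on each host; after this, sshpass and the plain-text passwords are no longer needed
ssh-copy-id user@my_hostname_1
ssh-copy-id user@my_hostname_2
With the ProxyCommand entry above in ~/.ssh/config, the second ssh-copy-id should be tunneled through my_hostname_1 and will ask for the passwords one last time.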
A Bash script is a sequence of commands.
echo moo
echo bar
will run echo moo and wait for it to complete, then run the next command.
You can run a remote command like this:
ssh remote echo moo
which will connect to remote, run the command, and exit. If there are additional commands in the script file after this, the shell which is executing these commands will continue with the next one, obviously on the host where you started the script.
To connect to one host from another, you could in principle do
ssh host1 ssh host2
but the proxy command suggested by @zerodiff improves on several aspects of the experience.
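If you do try the nested form interactively, the first hop needs -t so the inner ssh has a terminal to prompt on (a sketch, with hostnames as in the question):
# -t allocates a pseudo-terminal on my_hostname_1, so the inner ssh can ask for my_hostname_2's password
ssh -t my_hostname_1 ssh my_hostname_2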
I'm trying to write a bash script to migrate a database from a remote server to a local one. One of our servers is unfortunately a Windows server. I installed freeSSHd so I can use ssh.
When I run this from my ubuntu shell:
sshpass -p 'my_password' ssh user@host
'C:/wamp/bin/mysql/mysql5.1.36/bin/mysqldump
-u root -pmypassword mybase --result-file=C:/wamp/outfiles/mybase.sql'
It runs fine and dumps the database.
Unfortunately, when I do the same thing from the bash script, I get a permission denied error. Why? Is there any difference between a command run from a script and a regular shell command?
This is my bash script now:
#!/bin/bash
remoteHost=$1
remoteUser=$2
echo -n "Provide remote db password: "
read -s remoteDbPass
echo ""
echo -n "Provide remote server password: "
read -s remotePass
echo ""
dbName=$3
localDbName=$4
dumpPath=/var/lib/mysql/dumps/
winMysqlPath=C:/wamp/bin/mysql/mysql5.1.36/bin/
winDumpPath=C:/wamp/outfiles/
sshpass -p '$remotePass' ssh $remoteUser@$remoteHost '${winMysqlPath}mysqldump -u root -p$remoteDbPass $dbName --result-file=$winDumpPath$dbName.sql'
pscp -pw $remotePass $remoteUser@$remoteHost:$winDumpPath$dbName.sql $dumpPath$dbName.sql
mysql $localDbName < $dumpPath$dbName.sql
Variables quoted with single quotes (') are not expanded; use double quotes (") instead.
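Applied to the script above, the sshpass line would become something like this (only the quoting changes, everything else stays the same):
sshpass -p "$remotePass" ssh "$remoteUser@$remoteHost" \
    "${winMysqlPath}mysqldump -u root -p$remoteDbPass $dbName --result-file=$winDumpPath$dbName.sql"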
I am trying to write a script that appends a line to /etc/hosts, which means I need sudo privileges. However, if I run the script from the desktop it does not prompt for a password; I simply get permission denied.
Example script:
#!/bin/bash
sudo echo '131.253.13.32 www.google.com' >> /etc/hosts
dscacheutil -flushcache
A terminal pops up and says permission denied, but never actually prompts for the sudo password. Is there a way to fix this?
sudo doesn't apply to the redirection operator. You can use either echo | sudo tee -a or sudo bash -c 'echo >>':
echo 131.253.13.32 www.google.com | sudo tee -a /etc/hosts
sudo bash -c 'echo 131.253.13.32 www.google.com >> /etc/hosts'
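If you don't want tee to echo the line back to your terminal, a common variant discards its stdout:
echo 131.253.13.32 www.google.com | sudo tee -a /etc/hosts > /dev/null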
What you are doing here is effectively:
Switch to root, and run echo
Switch back to yourself and try to append the output of sudo onto /etc/hosts
That doesn't work because you need to be root when you're appending to /etc/hosts, not when you're running echo.
The simplest way to do this is
sudo bash -c "sudo echo '131.253.13.32 www.google.com' >> /etc/hosts"
which will run bash itself as root. However, that's not particularly safe, since you're now invoking a shell as root, which could potentially do lots of nasty stuff (in particular, it will execute the contents of the file whose name is in the environment variable BASH_ENV, if there is one). So you might prefer to do this a bit more cautiously:
sudo env -i bash -c "echo '131.253.13.32 www.google.com' >> /etc/hosts"