Running multiple remote commands via a single ssh command - shell

sshpass -p Password ssh -o StrictHostKeyChecking=no a.user@IP "Command1 ; Command2"
I am running the command above, but when I try to run multiple commands after ssh, it only runs the first command and then exits without executing the others.
How can I improve it so that all the commands run sequentially?
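A minimal sketch of the usual fix (Command1 and Command2 are placeholders): single-quote the whole command list so the remote shell, not your local one, sees the separator; for longer sequences, feed a script to a remote shell over stdin:
sshpass -p 'Password' ssh -o StrictHostKeyChecking=no a.user@IP 'Command1; Command2'
# For longer sequences, run a remote shell that reads the script from stdin:
sshpass -p 'Password' ssh -o StrictHostKeyChecking=no a.user@IP 'bash -s' <<'EOF'
Command1
Command2
EOF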

Related

How to execute commands right after login over an SSH connection? A few seconds after login, it disconnects me from the remote server

#!/bin/bash
expect <<END
spawn ssh user@remote_ip
expect "password"
send "my_pass\r"
expect eof
END
cd /var/www/html/node_project/
##npm init -y
npm install
##node index.js
I want to run some commands right after logging into the remote server. I can log in successfully, but 2 or 3 seconds later it automatically logs me out. How can I make it wait until all of my commands have run successfully?
Use this:
sshpass -p "PASSWORD" ssh -o StrictHostKeyChecking=accept-new USERNAME#IP_ADDRESS "sleep 10 ; ls -l"
or this (for connection timeout):
sshpass -p "PASSWORD" ssh -o StrictHostKeyChecking=accept-new USERNAME#IP_ADDRESS 'nohup bash -c "sleep 3; ls -l"'
Note: replace ls -l with your command.
Here is my solution: run the commands in one line, quoted as a single remote command so they execute one after another:
ssh user@remote_ip "cd /var/www/html/node_project/ && npm install && node index.js"
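If you want to stay with expect, here is a hedged sketch that sends the commands inside the session instead of after it (the prompt pattern "$ " is an assumption; adjust it to your remote prompt):
expect <<'END'
spawn ssh user@remote_ip
expect "password"
send "my_pass\r"
# Wait for a shell prompt before sending the commands.
expect "$ "
send "cd /var/www/html/node_project/ && npm install && node index.js\r"
expect eof
END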

Cannot execute script via ssh

I have a remote machine and am trying to execute a bash script to redeploy an application after a Travis CI build completes. I use sshpass to connect, but I cannot execute the script.
echo "Starting deployment"
export SSHPASS=$PASSWORD
sshpass -e ssh -o StrictHostKeyChecking=no deploy-user@deploy-server.com "bash /opt/redeploy.sh"
After the Travis CI deploy phase I get: No such file or directory.
But when I try to execute this command:
sshpass -e ssh -o StrictHostKeyChecking=no deploy-user@deploy-server.com "touch /opt/myfile"
the file is created successfully. redeploy.sh is located in the /opt directory and can be executed from a terminal, but it cannot be executed via this script.
Can anybody help me?
redeploy.sh has this content:
#!/bin/bash
docker-compose -f /opt/docker-compose.yml stop
docker-compose -f /opt/docker-compose.yml pull
docker-compose -f /opt/docker-compose.yml up -d
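A hedged debugging sketch: run the same connection with tracing to see what the remote shell actually finds (host, user, and paths are taken from the question; nothing else is assumed to exist):
export SSHPASS=$PASSWORD
sshpass -e ssh -o StrictHostKeyChecking=no deploy-user@deploy-server.com \
    'ls -l /opt/redeploy.sh; command -v docker-compose; bash -x /opt/redeploy.sh'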

How to write the output of echo to a file during an ssh connection

I have a script which starts an ssh connection:
SSH="ssh -tt -i key.pem user@ec2-instance"
so the variable $SSH starts the ssh connection, and $SSH hostname gives the hostname of the host I ssh to.
Now I am trying to echo something and copy the output of the echo to a file.
When I ssh to the host manually and run:
sudo sh -c "echo 'DEVS=/dev/xvdb' >> /etc/sysconfig/docker-storage-setup"
it works.
But when I perform
${SSH} sudo sh -c "echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup"
it does not seem to work.
EDIT:
Using tee also works fine after sshing manually, but it does not work after the ssh in script.sh either.
The echo command in the script runs on my local host (the one I run the script from), not on the host I ssh to, so the file is changed locally rather than on the remote host.
The command passed to ssh will be executed by the remote shell, so you need to add one level of quoting:
${SSH} "sudo sh -c \"echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup\""
The only thing you really need on the server is the writing though, so if you don't have password prompts and such you can get rid of some of this nesting:
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee /etc/sysconfig/docker-storage-setup'
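A quick sanity check, reusing the $SSH variable from the question, to confirm the write happened on the remote host:
${SSH} 'sudo cat /etc/sysconfig/docker-storage-setup'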

Opening multiple ssh connections through a script

I have been trying to automatically enter an ssh connection using a script. This previous SO post has helped me so far; using one connection works (the first ssh statement). However, I want to create another ssh connection once connected, which I thought could look like this:
#! /bin/bash
# My ssh script
sshpass -p "MY_PASSWORD1" ssh -o StrictHostKeyChecking=no *my_hostname_1*
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
When running the script, I only get connected to my_hostname_1, and the second ssh command does not run until I exit the first ssh connection.
I've tried using an if statement like this:
if [ "$HOSTNAME" = my_host_name_1 ]; then
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
fi
but I can't get any commands to be read until I exit the first connection.
Here is a ProxyCommand example as suggested by @lihao:
#!/bin/bash
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no \
-o ProxyCommand='sshpass -p "MY_PASSWORD1" ssh my_hostname_1 netcat -w 1 %h %p' \
my_hostname_2
You are proxying through the first host to get to the second. This assumes you have netcat installed on my_hostname_1, where the ProxyCommand runs. If not, you'll need to install it there.
You can also set this up in your ~/.ssh/config file so you don't need the proxy stuff on the command line:
Host my_hostname_1
    HostName my_hostname_1
Host my_hostname_2
    HostName my_hostname_2
    ProxyCommand ssh my_hostname_1 netcat -w 1 %h %p
However, this is a little trickier with the password handling. While you could put the sshpass here, it's not a great idea to have passwords in plain text. Using key based authentication might be better.
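A minimal sketch of switching to key-based auth (the key file name is the ssh-keygen default; adjust as needed):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
ssh-copy-id my_hostname_1
# Copy the key to the second host by tunnelling through the first:
ssh-copy-id -o ProxyCommand='ssh my_hostname_1 netcat -w 1 %h %p' my_hostname_2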
A Bash script is a sequence of commands.
echo moo
echo bar
will run echo moo and wait for it to complete, then run the next command.
You can run a remote command like this:
ssh remote echo moo
which will connect to remote, run the command, and exit. If there are additional commands in the script file after this, the shell which is executing these commands will continue with the next one, obviously on the host where you started the script.
To connect to one host from another, you could in principle do
ssh host1 ssh host2
but the proxy command suggested by @zerodiff improves on several aspects of the experience.
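As a side note, OpenSSH 7.3 and later has a built-in jump-host option that replaces the netcat ProxyCommand:
ssh -J my_hostname_1 my_hostname_2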

How to create a new terminal session and execute multiple commands

I'm looking for a way to automate the start-up of my development environment. I have three virtual machines that have to be started; then I have to ssh to each of them and start the VPN on them.
So far I've gotten them to start and managed to ssh to them:
#!/bin/sh
virsh start virtual_1
virsh start virtual_2
virsh start virtual_3
sleep 2m
gnome-terminal --title "virtual_3: server" -x ssh root#192.168.1.132 &
gnome-terminal --title "virtual_2: 11.100" -x ssh root#192.168.11.100 &
gnome-terminal --title "virtual_1: 12.100" -x ssh root#192.168.12.100 &
How do I execute an additional command in each of the terminals which starts openvpn?
For simplicity I'm trying to echo 1 in each terminal instead of starting VPN.
I've found that multiple commands on terminal start can be run like:
gnome-terminal -x bash -c "cmd1; cmd2"
So for one terminal to keep it simple I changed:
gnome-terminal --title "virtual_3: server" -x ssh root#192.168.1.132 &
to:
gnome-terminal --title "virtual_3: server" -x bash -c "ssh root#192.168.1.132 ; echo 1" &
But 1 wasn't printed in the terminal of virtual_3.
Then I thought, maybe the command is being executed too quickly, before the terminal is ready, so I tried adding &&:
gnome-terminal --title "virtual_3: server" -x bash -c "ssh root#192.168.1.132 &&; echo 1" &
But that gave no result either.
First of all, if you run
gnome-terminal -x bash -c "cmd1; cmd2"
you get bash to execute cmd1 and then cmd2; it doesn't execute cmd1 and feed cmd2 to its result. ssh is a program run in the terminal, and cmd2 won't be executed until ssh has finished.
So you need to run ssh and tell that to execute your command.
You can do so by:
ssh user@address "command_to_execute"
However, ssh exits after the command is finished. As you can see in "With ssh, how can you run a command on the remote machine without exiting?", you can execute ssh with the -t option so it doesn't quit:
ssh -t user@address "command_to_execute"
So your command in the end becomes:
gnome-terminal --title "virtual_3: server" -x bash -c "ssh -t root#192.168.1.132 'echo 1'"
You are right that giving -t alone is not enough (although it is necessary). -t allocates a pseudo-tty but doesn't execute bash for you. From the ssh manual:
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
If command is specified, it is executed on the remote host instead of a login shell.
So what you need is to execute bash yourself. Therefore:
ssh -t user@address "command_to_execute; bash"
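Putting it together for the terminals in the question (a sketch; the openvpn invocation and config path are placeholders for whatever starts your VPN):
gnome-terminal --title "virtual_3: server" -x bash -c \
    "ssh -t root@192.168.1.132 'openvpn --config /etc/openvpn/client.conf; bash'" &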
