My Docker container runs two services: a web service and an SSH server.
The SSH server is openssh-server, and I need to run docker exec -it my-container sudo service ssh restart from outside the container to start it.
However, the command doesn't always succeed. Every time, I have to check manually whether the SSH server is up in the container using the command ssh root@localhost:
1) If the SSH server fails to start, the result is ssh_exchange_identification: Connection closed by remote host
2) Otherwise, it asks for the password (which indicates that the SSH server is up).
Since I have to deploy multiple containers at the same time, it is impractical to check every container manually. Therefore, I want to retry the docker exec -it my-container sudo service ssh restart command automatically whenever the SSH server fails to start, but I am not sure how to write the bash script to achieve this. It should basically work like this:
while (ssh_server_fails_to_start):
docker exec -it my-container sudo service ssh restart
Any comments or ideas are appreciated. Thanks in advance!
If sshd is up and running, it will accept connections on its configured port. Otherwise, the connection attempt will fail.
If you run the following command:
ssh -o PasswordAuthentication=no root@localhost true
This will fail either way, but the output will differ. If the server is running and accepting connections, the explicit switch-off of password authentication will make it fail with this message:
Permission denied (publickey,password).
Otherwise it will print out a message like this:
ssh: connect to host localhost port 22: Connection refused
So I propose to scan the error message for a hint like this:
if ssh -o PasswordAuthentication=no root@localhost true \
|& grep -q "Connection refused"
then
echo "No server reachable!"
else
echo "Server reachable."
fi
So you could write your script like this:
while ssh -o PasswordAuthentication=no root@localhost true \
|& grep -q "Connection refused"
do
docker exec -it my-container sudo service ssh restart
done
You might want to add some sleep delays to avoid hurried restarts. Maybe the ssh server just needs some time to accept connections after being restarted.
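Putting those suggestions together, here is a sketch of a retry loop with a delay between attempts and a retry cap so the script cannot spin forever. Since the real probe needs a live container, the flaky try_start function below is a simulated stand-in for the ssh check plus the docker exec restart; swap its body for the real commands:

```shell
#!/usr/bin/env bash
# Sketch: retry until a command succeeds, with a delay and a retry cap.
# try_start is a SIMULATED stand-in for:
#   docker exec -it my-container sudo service ssh restart  +  the ssh probe.
# Here it fails twice, then succeeds, so the loop logic can be demonstrated.

attempts=0
try_start() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]   # succeeds on the third attempt
}

max_retries=10
delay=1                      # seconds between attempts; tune for your setup
i=0
until try_start; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
        echo "giving up after $max_retries attempts" >&2
        exit 1
    fi
    echo "attempt $i failed, retrying in ${delay}s..."
    sleep "$delay"
done
echo "server is up after $attempts attempts"
```

The cap matters in a deployment script: without it, a container whose sshd is genuinely broken would block the whole rollout.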
To test the SSH connection, we can use the sshpass package to supply the password on the command line.
while : ; do
    docker exec -it my-container sudo service ssh restart
    sleep 5
    if sshpass -p 'root' ssh -q -p 2222 root@localhost exit; then
        echo "SSH server is running."
        break
    fi
    echo "SSH server is not running. Restarting..."
done
Related
I am writing a script to upload DLLs to a remote machine. I first ssh in and stop the running service. I then try to upload the new DLLs via scp, but it fails almost immediately with lost connection.
My entire shell script looks like this:
ssh -p 29170 DLL_Uploader@XXX.XXX.XX.XXX "powershell.exe; .\stop_file_watcher.ps1; exit";
scp -P 29170 $1 "DLL_Uploader@XXX.XXX.XX.XXX:/Bin/File Watcher Service"
ssh -p 29170 DLL_Uploader@XXX.XXX.XX.XXX "powershell.exe; .\start_file_watcher.ps1; exit";
You can see a pastebin of this scp debug output with -vvv here.
Need some help with a tricky SSH tunnel through a bastion host.
I want to port forward Postgres on the remote server, through the bastion. Our company setup only allows communication over SSH, so we have to port forward everything.
Currently, I use the CLI command to set up the SSH tunnel, then use psql shell command on my laptop to query the remote Postgres. I want to write this same connection in Go, so I can create reports, graphs, etc.
The following command line works, but I can't figure out how to do this with Go SSH.
ssh -o ProxyCommand="ssh -l username1 -i ~/.ssh/privKey1.pem bastionIP -W %h:%p" -i ~/.ssh/privKey2.pem -L 8080:localhost:5432 -N username2@PsqlHostIP
psql -h localhost -p 8080 -U user -W
I am trying to run the needrestart tool via Ansible to check for processes with outdated libraries.
When I run needrestart with the command or shell modules from Ansible, it says that I need to restart my SSH daemon. When I run needrestart manually, it says that there are no processes with outdated libraries.
When I restart the ssh daemon it does not make a difference. But after rebooting the remote server the ssh daemon is not listed as a service I should restart anymore.
So I really do not understand the difference between the ssh connection from ansible and my manual ssh connection that causes the different behavior of needrestart.
Any help would be appreciated!
Thank you in advance and best regards
Max
My local machine
$ python -V
Python 2.7.13
$ ansible --version
ansible 2.2.0.0
$ cat ansible.cfg
[defaults]
inventory = hosts
ask_vault_pass = True
retry_files_enabled = False
I am using an SSH proxy to connect to the server:
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@jumphost.example.com"'
The remote server
$ cat /etc/debian_version
8.6
$ python -V
Python 2.7.9
Using ansible
$ ansible example.com -m command -a 'needrestart -b -l -r l'
Vault password:
example.com | SUCCESS | rc=0 >>
NEEDRESTART-VER: 1.2
NEEDRESTART-SVC: ssh.service
$ ansible example.com -m shell -a 'needrestart -b -l -r l'
Vault password:
example.com | SUCCESS | rc=0 >>
NEEDRESTART-VER: 1.2
NEEDRESTART-SVC: ssh.service
Using SSH
$ ssh example.com 'needrestart -b -l -r l'
NEEDRESTART-VER: 1.2
Killed by signal 1.
It looks like you have an active connection served by an older version of the sshd process. When sshd restarts, it does not terminate the per-connection processes that serve existing sessions. If it did, then sudo service ssh restart would kill every active connection, and you could easily lock yourself out of the server.
So when you do systemctl restart sshd, you restart only the listener part, which accepts new connections. All existing connections are still served by the old sshd.
Why does Ansible keep the old ssh connection between runs? Because of the ControlMaster feature. It keeps an ssh connection alive between runs to speed up subsequent runs.
What to do? Close the active ssh connections on your machine. Run ps aux | grep ssh and you will see the process acting as ControlMaster. Kill it, and the outdated connection will be closed.
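If hunting down the master process by hand feels fragile, another option is to tell Ansible not to keep master connections alive between runs at all. A sketch of an ansible.cfg fragment, assuming your Ansible picks up this file (ControlPersist=no makes the master exit as soon as the run's connection closes):

```
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=no
```

The trade-off is that every run pays the full connection setup cost again, so playbooks get slower; for an occasional needrestart check that may be acceptable.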
I tried this:
#!/bin/bash
ssh user@host.com 'sudo /etc/init.d/script restart'
But I get this error:
sudo: no tty present and no askpass program specified
How can I run that script? Now when I need to run that script I do these steps:
ssh user@host.com
sudo /etc/init.d/script restart
But I don't want to manually log in to remote server all the time and then enter restart command.
Can I write local script that I could run so I would only need to enter password and it would run remote script and restart the process?
You can use the -t option of ssh to attach a pseudo-tty to your command:
ssh -t -t user@host.com 'sudo /etc/init.d/script restart'
As per man ssh:
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
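If forcing a tty is undesirable (for instance when the script runs from cron or CI), an alternative sketch is a sudoers rule that lets this one command run without a password, so no tty or askpass is needed at all. The username and script path are the ones from the question; always install such a file with visudo so a syntax error cannot lock you out:

```
# /etc/sudoers.d/restart-script (sketch) — edit with: visudo -f /etc/sudoers.d/restart-script
# Lets "user" run exactly this one command as root without a password.
user ALL=(root) NOPASSWD: /etc/init.d/script restart
```

Because the rule names the full command line, "user" still cannot run arbitrary commands as root, only this restart.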
I have written a bash script which should run on the remote server (Ubuntu) with a GUI (zenity) interface, and I issue the command below on the local machine.
sshpass -p $PASS ssh root@$SERVER 'bash' < /tmp/dep.sh | tee >(zenity --progress --title "Tomcat Deployer" --text "Connecting to Tomcat Server..." --width=400 --height=150) >>/tmp/temp.log;
I want to transfer a file from my local machine to the server, and I want to achieve this by placing an entry in the bash file (/tmp/dep.sh) used in the above command itself, without opening a new session on the server.
I would prefer to use the command below to transfer the file; it should be placed in the bash script (/tmp/dep.sh) and run on the server to copy the file from my local machine. I don't want to specify my local IP as a variable and use it as the source in the command below, since the script is used on other machines too and the IP changes. And I should not transfer the file from local to server by writing a separate rsync & ssh, which would create one more SSH session.
rsync --rsh="sshpass -p '$PASS' ssh" '$local:$APPATH/$app.war' /tmp
Can anybody work some magic to transfer the file from local to server over the already-established SSH session, using the rsync above or by other means, without opening a separate connection?
Thank you!
Edit 1:
Could this be achieved with a single SSH session (single command)?
rsync --rsh="sshpass -p serverpass ssh -o StrictHostKeyChecking=no" /home/user1/Desktop/app.war root@192.168.1.5:/tmp;
sshpass -p serverpass ssh -o StrictHostKeyChecking=no root@192.168.1.5 '/etc/init.d/tomcat start'
You'll want to use SSH multiplexing. This is done using the ControlMaster and ControlPath options. Here's an article on it.
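As a sketch of what multiplexing looks like here (the Host alias tomcat is hypothetical; the user and IP come from your edit): with an entry like this in ~/.ssh/config, the first connection becomes the master, and later commands reuse it instead of opening new sessions:

```
# ~/.ssh/config (sketch) — alias name is made up
Host tomcat
    HostName 192.168.1.5
    User root
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With that in place, the rsync and the follow-up ssh '/etc/init.d/tomcat start' both ride the one persisted master connection, so the password is only needed for the first command that establishes it.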