Background process doesn't keep running after closing the SSH session - bash

I'm using an SSH script for remote deployment. After a push to the master branch, the repository connects to the server via SSH and calls deploy.sh, which should run in the background so it doesn't tie up the DevOps node. Inside the script, Docker restarts and pulls from the repo. I have tried nohup, &, and disown, but the result is always the same: the script only works correctly if it finishes while the SSH console is still open. If I close the SSH connection, nothing happens, as if the script had never been called. Why would this happen?
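For reference, the kind of invocation involved might look like the sketch below; the host name and paths are placeholders, and one common pattern is to combine nohup with redirecting all three standard streams, since ssh normally waits for the remote command's output streams to close before returning:

# Hypothetical CI step: start deploy.sh fully detached from the SSH session.
# nohup ignores HUP, and redirecting stdin/stdout/stderr leaves ssh nothing
# to wait for once the script has been launched in the background.
ssh user@deploy-host 'nohup ~/deploy.sh > ~/deploy.log 2>&1 < /dev/null &'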

Related

Bash script to send commands to remote ssh session

Is it possible to write a bash script that opens a remote node (i.e. through ssh and/or slurm) and starts an interactive session there after running some commands? I'm trying to automate the process of starting a jupyter session on a remote computing cluster, which currently looks like this:
ssh into a login node of the remote cluster, using a specific port
use slurm to request an interactive session on one of the compute nodes, including x11 forwarding through that port
change directory to the working directory
activate conda environment for my project
open jupyter from the command line, specifying the port I used previously
It's a lengthy process, and if I get something wrong at any step I usually have to go back and start from the beginning because the port I'm using is still tied up. So I'm looking for a way I can run a single script (possibly with arguments) from my local machine that jumps through all the hoops to get me a working jupyter session with a link I can paste to my browser.
As @Diego Torres Milano said, you would need to write a script locally that handles the interactive part, then invoke it via a remote script.
But since your process is interactive, this gets tricky. Luckily, Linux has a tool called expect, easily installed via a package manager, which lets you write logic for multi-step interactive sessions.
So you would write an expect script that "expects" certain prompts, reads them, and uses conditional logic to respond to them appropriately.
Once you have this written and it works locally, it's just a matter of executing it via ssh from a remote server as:
ssh user@12.34.56.78 /path/to/script.ex
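For illustration, a rough sketch of what such a script.ex might look like; the host, prompt pattern, and the srun/conda/jupyter commands are placeholders rather than your actual cluster setup:

#!/usr/bin/expect -f
# Hypothetical script.ex: every host, prompt and command below is a placeholder.
set timeout -1
spawn ssh -L 8888:localhost:8888 user@login.cluster.example
expect "$ "
send "srun --pty --x11 bash\r"
expect "$ "
send "cd ~/project && conda activate myproject\r"
expect "$ "
send "jupyter notebook --no-browser --port 8888\r"
# Hand the session back to the user once jupyter prints its URL
interact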

Is there a way to get my laptop to beep from within a bash script running on a remote server via SSH?

I have a bash script that I have to regularly run on a remote server. Part of the script includes running a backup which takes a while, and after it has run, I have to hit "Y" to confirm that the backup worked before the script will continue.
I would like to know if there is a way to get my laptop to make a beep (or some sort of sound) when that happens. I know that echo -e '\a' makes a beep, but if I run it from within a script on the remote server, the beep happens on the remote server.
I have control of the script that is being run, so I could easily change it to do something special.
You could send the command through ssh back to your computer like:
ssh user@host "echo -e '\a'"
Just make sure you have SSH key authentication set up from your server to your computer so the command can run without prompting for a password.
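Inside the script it could look something like this; the backup command and the laptop's address are placeholders:

# Hypothetical snippet in the remote script, after the backup step:
run_backup_command
ssh me@my-laptop.example "echo -e '\a'"   # ring the bell on the laptop
read -p "Did the backup succeed? [Y/n] " answer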
In my case the solutions offered with echo didn't work. I'm using a MacBook and connect to an Ubuntu system. I keep the terminal open and I'd like to be informed when a long-running bash script has finished.
What I did notice is that if I shut down the remote system, the MacBook beeps and shows an alarm icon on the relevant tab. So I have implemented a somewhat dirty workaround:
sudo shutdown 1440 && shutdown -c
This initiates a system shutdown and immediately cancels the request, and I do get the alarm beep + icon. You will need to set up sudo to allow the user to run shutdown. As it was my own remote server this was no problem, but it could limit the usability for others.
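The sudo part can be a single sudoers rule, for example (hypothetical username, typically placed in a file under /etc/sudoers.d/ via visudo):

# Allow this user to schedule and cancel shutdowns without a password (sketch)
myuser ALL=(root) NOPASSWD: /sbin/shutdown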

Can't terminate node(js) process without terminating ssh server in docker container

I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the SSH server goes down with it (forcing me to restart the container to reconnect).
Any simple way to avoid this behavior?
Thank You.
The container exits as soon as the main process of the container exits. In your case, the main process inside the container is the start.sh shell script, which starts the ssh service and then runs the node process as a child. Once the node process dies, the shell script exits as well, and so does the container. What you can do is put the node process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# Need the following infinite loop as the shell script should not exit
while true; do
    sleep 2
done
I DO NOT recommend this approach though. You should have only a single process per container. Read the answers to the following question to understand why:
Running multiple applications in one docker container
If you still want to run multiple processes inside a container, there are better ways to do it, such as using supervisord: https://docs.docker.com/config/containers/multi-service_container/
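A minimal sketch of the supervisord route, with program names and paths that are illustrative rather than a drop-in config:

; hypothetical supervisord.conf keeping sshd and the node app as separate programs
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:node]
command=/usr/bin/node /myApp/app.js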

Keep ssh tunnel open after running script

I have a device with intermittent connectivity that "calls home" to open a reverse tunnel to allow me to SSH into it. This works very reliably when started by systemd and automatically restarted on any exit:
ssh -R 1234:localhost:22 -N tunnel-user#reliable.host
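For context, the systemd side of this can be a small unit along the following lines (a hypothetical sketch, not the actual unit in use):

# hypothetical reverse-tunnel.service
[Unit]
Description=Reverse SSH tunnel to reliable.host
After=network-online.target

[Service]
ExecStart=/usr/bin/ssh -N -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 -R 1234:localhost:22 tunnel-user@reliable.host
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target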
Now, however, I want to run a script on the reliable host on connect. This is easy enough with a simple change to the ssh CLI: swap -N for a script name on the remote reliable host:
ssh -R 1234:localhost:22 tunnel-user#reliable.host ./on-connect.sh
The problem is that once the script exits, it closes the tunnels if they're not in use.
One workaround I've found is to put a long sleep at the end of my script. This, however, leaves sleep processes lying around after the connection drops, since sleep doesn't respond to SIGHUP. I could put a shorter sleep in an infinite loop (I think), but that feels hacky.
~/on-connect.sh
#!/bin/bash
# Do stuff...
sleep infinity
How can I get ssh to behave as if -N had been used, so that it stays connected with no activity but also runs a script on the initial connection? Ideally without needing a special sleep (or equivalent) in the remote script, but if that's not possible, with proper cleanup on the reliable host when the connection drops.
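For what it's worth, a hedged sketch of the "shorter sleep in a loop" idea mentioned above, assuming the remote shell does receive HUP (or TERM) when the connection drops:

#!/bin/bash
# Hypothetical variant of on-connect.sh: background a short sleep and wait on
# it, so a trapped HUP/TERM can fire promptly and clean up the stray sleep.
cleanup() {
    [ -n "$sleep_pid" ] && kill "$sleep_pid" 2>/dev/null
    exit 0
}
trap cleanup HUP TERM

# Do stuff...

while true; do
    sleep 30 &
    sleep_pid=$!
    wait "$sleep_pid"
done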

Script invoked from remote server unable to run service correctly

I have a unix script that invokes another script on a remote unix server.
Amongst other commands, I am stopping a service. The stop command essentially translates to
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem stop"'
The service is getting stopped, but when I start the service back up it just creates the .pid file and does not actually start. When I run the start command, i.e.
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem start"'
it does not show any error. On going to the server and checking the status with
service aemauthor status
the following message is displayed:
aem dead but pid file exists
Also, when starting the service by logging in to the server directly, it works as expected, along with the messages
Removing stale pidfile (pid: 8701)
Starting aem
We don't know the details of the service script of aem.
I guess the problem is related to the SIGHUP signal. When we log off from a shell or disconnect from ssh, the OS sends the HUP signal to all processes that were started in the terminated shell. If a process doesn't handle the HUP signal, it exits by default.
When we run a command via ssh remotely, the process started by that command receives the HUP signal once the ssh session terminates.
We can use the nohup command to ignore the HUP signal.
You can try
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "nohup service aem start"'
If it works, you can use the nohup command to start aem inside the service script.
As mentioned in the stale pidfile syndrome, there are different reasons for a pidfile going stale, for instance issues with the way your service script handles its removal when the process exits... but considering you're only experiencing this when running remotely, I would guess it is related to what is or isn't being loaded by your profile. Check the most voted answer in the post below for some insights:
Why Does SSH Remote Command Get Fewer Environment Variables
As described in the comments of the mentioned post, you can try sourcing /etc/profile or ~/.bash_profile before executing your script to test it, or even try running env locally and remotely to compare which variables are being sourced or not.
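For instance (purely illustrative, reusing the variables from the question):

# Capture and compare the environment with and without the login-shell profiles:
ssh -q ${AEM_USER}@${SERVERIP} 'env | sort' > env_default.txt
ssh -q ${AEM_USER}@${SERVERIP} 'bash -l -c "env | sort"' > env_login.txt
diff env_default.txt env_login.txt

# Or source the profile explicitly before the start command:
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "source /etc/profile; service aem start"'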
