Create new shell instance within shell script - bash

I have two scripts that I would like to execute.
Script 1: A script that executes docker run (needs to stay active because if it is closed the docker container stops running)
Script 2: A script that runs docker exec and gets into the docker shell
Problem:
I need to run script 1 and script 2 in separate shells because script 1 needs to stay active and cannot be closed.
My current approach is to separate the scripts into two files and then run both from one file:
sh script1.sh & sh script2.sh

Just run it as one big script.
When executing/running the docker container use the --detach or -d flag.
This ensures that the container does not stay attached to the terminal but moves into the background (it keeps running!).
The docker command would look something like
docker run -d ...
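A minimal sketch of what that one combined script could look like (the image and container names here are placeholders, not from the question):

#!/bin/sh
docker run -d --name my-container my-image      # -d detaches: the container keeps running in the background
docker exec -it my-container /bin/sh            # script 2's part: opens an interactive shell in the running container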

Related

call a script automatically in container before docker stops the container

I want a custom bash script in the container that is called automatically before the container stops (docker stop or ctrl + c).
According to this docker doc and multiple StackOverflow threads, I need to catch the SIGTERM signal in the container and then run my custom script when that signal arrives. As far as I know, the SIGTERM sent by docker stop is only delivered to the container's main process, the one with PID 1.
Relevant part of my Dockerfile:
...
COPY container-scripts/entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
I use the [] (exec) form to define the entrypoint; as far as I know this runs my script directly, without a /bin/sh -c wrapper, so the script itself becomes PID 1, and when the script eventually execs another process, that process becomes the main process and receives the docker stop signal.
entrypoint.sh:
...
# run the external bash script if it exists
BOOT_SCRIPT="/boot.sh"
if [ -f "$BOOT_SCRIPT" ]; then
    printf ">> executing the '%s' script\n" "$BOOT_SCRIPT"
    source "$BOOT_SCRIPT"
fi
# start something here
...
The boot.sh is used by child containers to execute something else that the child container wants. Everything is fine, my containers work like a charm.
ps axu in a child container:
PID   USER   TIME   COMMAND
  1   root   0:00   {entrypoint.sh} /bin/bash /entrypoint.sh
134   root   0:25   /usr/lib/jvm/java-17-openjdk/bin/java -server -D...
...
421   root   0:00   ps axu
Before stopping the container I need to run some commands automatically so I created a shutdown.sh bash script. This script works fine and does what I need. But I execute the shutdown script manually this way:
$ docker exec -it my-container /bin/bash
# /shutdown.sh
# exit
$ docker container stop my-container
I would like to automate the execution of the shutdown.sh script.
I tried to add the following to the entrypoint.sh but it does not work:
trap "echo 'hello SIGTERM'; source /shutdown.sh; exit" SIGTERM
What is wrong with my code?
Your help and comments guided me in the right direction.
I went through the official documentation again (here, here, and here) and finally found what the problem was.
The issue was the following:
My entrypoint.sh script, which kept the container alive, executed the following command at the end:
# start the ssh server
ssh-keygen -A
/usr/sbin/sshd -D -e "$@"
The -D option runs sshd in the foreground, so it does not detach and become a daemon. That was actually my intention; it was how I kept the container alive.
But this foreground process prevented the trap command from being executed properly. I changed the way I start sshd, and now it runs as a normal background process.
Then I added the following command to keep my docker container alive (this is an often-recommended practice):
tail -f /dev/null
But of course the same issue appeared: tail runs as a foreground process, so the trap command still does not do its job.
The only way I found to keep the container alive and let entrypoint.sh run as the foreground process in docker is the following:
while true; do
    sleep 1
done
This way the trap command works fine and my bash function that handles SIGINT etc. runs properly when the time comes.
But honestly, I do not like this solution. This endless loop with a sleep looks ugly, and at the moment I have no idea how to manage it in a nicer way :(
But that is really another question that does not belong to this thread (though it would be great if you could suggest a better solution).
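For what it is worth, a common alternative to the sleep loop (a sketch, not taken from this thread) is to background the foreground process and wait on it; the wait builtin is interrupted by a trapped signal, so the handler runs immediately:

#!/bin/bash
trap 'echo "hello SIGTERM"; source /shutdown.sh; exit 0' SIGTERM SIGINT

ssh-keygen -A
/usr/sbin/sshd -D -e &     # keep -D, but run sshd in the background of this script
wait "$!"                  # blocks until sshd exits or a trapped signal arrives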

Could Git Bash run daemon process periodically?

I have a myscript.sh that acts as a performance monitor on Windows Server. I'm using Git Bash to run the script, but the problem is that it executes only once after I enter the command. Is there any command I can use to run it as a daemon, or maybe to let the script run periodically at a given time interval?
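No answer is recorded in this thread, but as a hedged sketch: one low-tech option is to wrap the script in a loop and push it to the background so it survives after the command returns (the 60-second interval and the monitor.log file name are assumptions):

nohup bash -c 'while true; do ./myscript.sh; sleep 60; done' > monitor.log 2>&1 &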

Can't terminate node(js) process without terminating ssh server in docker container

I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the ssh server is closed as well (forcing me to restart the container to reconnect).
Any simple way to avoid this behavior?
Thank You.
The container exits as soon as its main process exits. In your case, the main process inside the container is the start.sh shell script. That script starts the ssh service and then runs the nodejs process as a child process. Once the nodejs process dies, the shell script exits as well, and so the container exits. What you can do is put the nodejs process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# Need the following infinite loop as the shell script should not exit
while true; do
    sleep 2
done
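With node in the background and the loop as the long-running part of start.sh, the node process can be killed and restarted without the container stopping. A possible usage from the host (the container name and the availability of pkill in the image are assumptions):

docker exec my-container pkill -f /myApp/app.js            # kills only node; start.sh stays alive as PID 1
docker exec -d my-container /usr/bin/node /myApp/app.js    # starts node again, detached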
I DO NOT recommend this approach though. You should have only a single process per container. Read the following answers to understand why -
Running multiple applications in one docker container
If you still want to run multiple processes inside container, there are better ways to do it like using supervisord - https://docs.docker.com/config/containers/multi-service_container/

Start jobs in background from sh file in gitbash

I need to start a couple of processes locally in multiple command-prompt windows. To keep it simple, I have written a shell script, say abc.sh, to run in git-bash with the commands below:
cd "<target_dir1>"
<my_command1> &>> output.log &
cd "<target_dir2>"
<my_command2> &>> output.log &
When I run these commands directly in git bash, the jobs run in the background and can be seen using the jobs and kill commands. However, when I run them through abc.sh, the processes still run in the background, but the git-bash instance disowns them and I can no longer see them using jobs.
How can I run them through the abc.sh file and still be able to see them in the jobs list?
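No answer is recorded here, but as a hedged sketch: jobs only lists jobs started by the current shell, so one option is to source the file in the interactive Git Bash session instead of running it in a child shell:

source abc.sh    # or: . abc.sh -- the background jobs now belong to the interactive shell
jobs             # lists <my_command1> and <my_command2>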

Evoking and detaching bash scripts from current shell

I'm having a bit of difficulty developing bash-based deployment scripts for a pipeline I want to run on an OpenStack VM. There are 4 scripts in total:
head_node.sh - launches the VM and attaches the appropriate disk storage to it. Once that's completed, it runs scripts 2 and 3 sequentially by passing a command through ssh to the VM.
install.sh - VM-side, installs all of the appropriate software needed by the pipeline.
run.sh - VM-side, mounts storage on the VM and downloads raw data from object storage. It then runs the final script, but does so by detaching the process from the shell created by ssh using nohup ./pipeline.sh &. The reason I want to detach from the shell is that the next portion is largely just compute and may take days to finish. Therefore, the user shouldn't have to keep the shell open that long and it should just run in the background.
pipeline.sh - VM-side, essentially a for loop that iterates through a list of files and sequentially runs commands on those and intermediate files. The results are analysed and then staged back to object storage. The VM then essentially tells the head node to kill it.
Now I'm running into a problem with nohup. If I launch the pipeline.sh script normally (i.e. without nohup) and keep it attached to that shell, everything runs smoothly. However, if I detach the script, it errors out after the first command in the first iteration of the for loop. Am I thinking about this the wrong way? What's the correct way to do this?
So this is how it looks:
$ ./head_node.sh
head_node.sh
#!/bin/bash
... launched VM etc
ssh $vm_ip './install.sh'
ssh $vm_ip './run.sh'
exit 0
install.sh - omitted - not important for the problem
run.sh
#!/bin/bash
... mounts storage, downloads appropriate files
nohup ./pipeline.sh > log &
exit 0
pipeline.sh
#!/bin/bash
for f in $(find . -name '*ext')
do
    process1 $f
    process2 $f
    ...
done
... stage files to object storage, unmount disks, additional cleanups
ssh $head_node 'nova delete $vm_hash'
exit 0
Since I'm invoking the run.sh script from an ssh session, subprocesses launched from the script (namely pipeline.sh) do not properly detach from the shell and error out when the ssh session that invoked run.sh terminates. The pipeline.sh script can be properly detached by calling it from the head node instead, e.g., nohup ssh $vm_ip './pipeline.sh' &; this keeps the session alive until the end of the pipeline.
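A sketch of how head_node.sh might look with that change (the pipeline.log file name is an assumption; everything else is taken from the question):

#!/bin/bash
# ... launch VM, attach storage, etc.
ssh $vm_ip './install.sh'
ssh $vm_ip './run.sh'                                      # mounts storage and downloads data only
nohup ssh $vm_ip './pipeline.sh' > pipeline.log 2>&1 &     # detach from the head node's shell instead
exit 0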
