How to determine when cmds in docker exec complete - maven

I'm currently using the Docker REST API to run an exec command on a container to start building a Maven project. I was wondering if there is any way provided by Docker to determine when the commands you attach to the exec complete, or if I need to start getting creative.

From the exec API spec, if you do not set detach to true, the web socket should close once the command finishes. If you do detach from the exec, then you can use the exec inspect API and poll for Running to go to false.
If you're using the exec API to run an interactive shell and you feed your commands as input to the shell without closing stdin, then that shell will hang around just as it would if you ran it outside of Docker. You can close stdin to the shell, or you'll need to "get creative" if that's not an option.
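For the detached case, a minimal polling sketch over the local Docker socket could look like this (assuming curl built with Unix-socket support and jq installed; the exec ID is whatever your exec create call returned):
EXEC_ID="<your-exec-id>"   # placeholder: the ID returned when the exec instance was created
# Poll the exec inspect endpoint until Running flips to false
while [ "$(curl -s --unix-socket /var/run/docker.sock http://localhost/exec/$EXEC_ID/json | jq -r '.Running')" = "true" ]; do
    sleep 1
done
# Once Running is false, ExitCode holds the command's exit status
curl -s --unix-socket /var/run/docker.sock http://localhost/exec/$EXEC_ID/json | jq '.ExitCode'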

Related

Create new shell instance within shell script

I have two scripts that I would like to execute.
Script 1: A script that executes docker run (needs to stay active because if it is closed the docker container stops running)
Script 2: A script that runs docker exec and gets into the docker shell
Problem:
I need to run script 1 and script 2 in separate shells because script 1 needs to stay active and cannot be closed.
What I tried: separating the scripts into two separate files and then running both scripts from one file:
sh script1.sh & sh script2.sh
Just run it as one big script.
When executing/running the docker container, use the --detach or -d flag.
This ensures that the container does not stay attached to the terminal but moves into the background (it keeps running!).
The docker command would look something like
docker run -d ...
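So a combined script could look roughly like this, where myimage and mycontainer are placeholder names:
#!/bin/bash
# Start the container detached; it keeps running in the background
docker run -d --name mycontainer myimage
# The shell is free again, so the exec can follow in the same script
docker exec -it mycontainer /bin/bash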

Could Git Bash run daemon process periodically?

I have this myscript.sh acting as a performance monitor on a Windows Server. I'm using Git Bash to run the script, but the problem is that the script only executes once after I enter the command to run it. Is there any command I can use to run it as a daemon, or to let the script run periodically at a given time interval?
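One simple pattern that would fit here, sketched with an assumed 60-second interval, is to wrap the script in a background loop:
# Re-run myscript.sh every 60 seconds; the trailing & keeps the loop in the background
while true; do
    sh myscript.sh
    sleep 60
done &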

Can't terminate node(js) process without terminating ssh server in docker container

I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the ssh server is closed as well (which forces me to restart the container to reconnect).
Any simple way to avoid this behavior?
Thank You.
The container exits as soon as the main process of the container exits. In your case, the main process inside the container is the start.sh shell script. The start.sh script starts the ssh service and then runs the nodejs process as a child process. Once the nodejs process dies, the shell script exits as well, and so the container exits. So what you can do is put the nodejs process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# Need the following infinite loop as the shell script should not exit
while true; do
    sleep 2
done
I DO NOT recommend this approach, though. You should have only a single process per container. Read the following answers to understand why:
Running multiple applications in one docker container
If you still want to run multiple processes inside a container, there are better ways to do it, like using supervisord: https://docs.docker.com/config/containers/multi-service_container/
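That page also shows a wrapper-script pattern, roughly like the sketch below (the two process names are placeholders; wait -n needs bash 4.3+):
#!/bin/bash
# Start both processes in the background
./my_first_process &
./my_second_process &
# Block until either one exits, then exit with its status,
# so the container fails visibly instead of limping along
wait -n
exit $?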

Script invoked from remote server unable to run service correctly

I have a unix script that invokes another script on a remote unix server.
Amongst other commands, I am stopping a service. The stop command essentially translates to
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem stop"'
The service is getting stopped, but when I start the service back up, it just creates the .pid file and does not perform the startup. When I run the command for start, i.e.
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem start"'
it does not show any error. When I go to the server and check the status:
service aemauthor status
Below message is displayed
aem dead but pid file exists
Also when starting the service by logging in to the server, it works as expected along with the message
Removing stale pidfile (pid: 8701)
Starting aem
We don't know the details of the aem service script.
I guess the problem is related to the SIGHUP signal. When we log off from a shell or disconnect from ssh, the OS sends the HUP signal to all processes that were started in that terminated shell. If a process doesn't handle the HUP signal, it exits by default.
When we run a command remotely via ssh, the process started by that command receives the HUP signal once the ssh session is terminated.
We can use the nohup command to ignore the HUP signal.
You can try
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "nohup service aem start"'
If it works, you can use nohup command to start aem in the service script.
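Since we don't know the actual service script, the following is only a hypothetical sketch of what that change could look like; the aem binary path and pidfile location are invented for illustration:
start() {
    # nohup detaches the process from the HUP signal sent when the ssh session ends
    nohup /opt/aem/bin/aem-start > /dev/null 2>&1 &
    echo $! > /var/run/aem.pid    # hypothetical pidfile path
}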
As mentioned in the stale pidfile syndrome, there are different reasons for pidfiles getting stale, for instance issues with the way your service script handles its removal when the process exits... but considering you're only experiencing this when running remotely, I would guess it is related to what is or is not being loaded by your profile. Check the most voted answer at the post below for some insights:
Why Does SSH Remote Command Get Fewer Environment Variables
As described in the comments of the mentioned post, you can try sourcing /etc/profile or ~/.bash_profile before executing your script to test it, or even try executing env locally and remotely to compare which variables are and are not being sourced.
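For example, a quick way to diff the two environments (the output filenames are arbitrary):
# Capture the local and remote environments, then compare them
env | sort > local_env.txt
ssh ${AEM_USER}@${SERVERIP} 'env | sort' > remote_env.txt
diff local_env.txt remote_env.txt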

How to automate startup of a web app with several independent processes?

I run the wesabe web app locally.
Each time, I start it by opening separate shells for the mysql server, the java backend, and the rails frontend.
My question is, how could you automate this with a shell script or rake task?
I tried just listing the commands sequentially in a shell script (see below), but the later commands never run because each app server creates its own process that never 'returns' (until you quit the server).
I've looked into sub-shells and parallel rake tasks, but that's where I got stuck.
echo 'starting mysql'
mysqld_safe
echo 'starting pfc'
cd ~/wesabe/pfc
rails server -p 3001
echo 'starting brcm'
cd ~/wesabe/brcm-accounts-api
script/server
echo 'ok, go!'
open http://localhost:3001
If you don't mind the output being mixed together, put a "&" at the end of each line where you start an application, to make it run in the background.
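Applied to the script above, that would look something like this:
echo 'starting mysql'
mysqld_safe &
echo 'starting pfc'
cd ~/wesabe/pfc
rails server -p 3001 &
echo 'starting brcm'
cd ~/wesabe/brcm-accounts-api
script/server &
echo 'ok, go!'
open http://localhost:3001
Each & returns control to the script immediately, so all three servers end up running in the background and the final open fires right away.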
