Bash exits after successfully running docker-compose

I can ssh onto a machine and run the following script
echo testing
docker-compose exec -T meteor php artisan down
echo done
which returns
testing
Application is now in maintenance mode.
done
However, if I try to run that script over ssh, it exits immediately after the docker-compose call.
ssh me@me.com << EOF
echo testing
docker-compose exec -T meteor php artisan down
echo done
EOF
gives
testing
Application is now in maintenance mode.
i.e. the final done is missing.
I can get it to continue by adding && after the docker-compose command, but I've got a long script and it makes it ugly and error-prone if I have to explicitly state this.
Any idea why this is happening and what I can change to fix it?
Update
I removed the -T from docker-compose and the script ran to completion; however, it gave the message the input device is not a TTY. It appears it can't allocate an interactive console. After a bit more googling I found that I can call
export COMPOSE_INTERACTIVE_NO_CLI=1
And then it will run to completion without giving error messages.
Thanks all for the help :)

The issue was being caused by the -T flag to docker-compose.
This was added because, without it, the error message the input device is not a TTY was printed.
I found you could prevent docker-compose from creating an interactive terminal if you use
export COMPOSE_INTERACTIVE_NO_CLI=1
Then the script runs correctly without the -T option.
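Putting the fix together with the original invocation, a minimal sketch (the same commands as in the question, with -T dropped and the variable exported first):
ssh me@me.com << EOF
export COMPOSE_INTERACTIVE_NO_CLI=1
echo testing
docker-compose exec meteor php artisan down
echo done
EOF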

How can I prevent my terminal from breaking?

I am using a watch command (in a shell script) in my Docker image.
Command:
watch -d -t -g ls -la ${DIR_TO_WATCH} && sleep 5 && ${COMMAND} | tee
This command is watching a directory and if there is any change in the directory structure, we perform certain actions.
I am using this docker image in my helm chart.
Now, when I deploy the chart and check the logs of that pod, my terminal breaks and no longer behaves normally.
Command:
kubectl logs -f pod-name -n name-space
After this, we need to reset the terminal settings to get the terminal to behave normally again.
Is there anything that can be done to prevent this?
Best Regards,
Akshat
Solved this by sending output of watch to /dev/null.
watch -d -t -g ls -la ${DIR_TO_WATCH} > /dev/null && sleep 5 && ${COMMAND} | tee
The reason behind the broken terminal, according to my understanding, was:
Two different commands' logs (from watch and ${COMMAND}) were showing up on the same terminal at the same time, which resulted in a new terminal being created over the default one (I am not sure how), causing the default terminal to break.
While ${COMMAND} logs were crucial for me, I did not need to view or monitor logs from watch. Hence, I sent the log outputs of watch to /dev/null and it solved my problem.
Please correct me if my understanding or approach is wrong.
Thank you.
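If the watch/act cycle needs to repeat rather than run once, one way to structure it is shown below (a sketch only, not the poster's actual script; ${DIR_TO_WATCH} and ${COMMAND} stand in for their real values):
#!/bin/bash
# -g makes watch exit when the output of `ls -la` changes; redirecting its
# output to /dev/null keeps its screen-control sequences out of the pod log.
while true; do
  watch -d -t -g ls -la "${DIR_TO_WATCH}" > /dev/null
  sleep 5
  ${COMMAND} | tee
done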

Exporting ROS Master URI from shell script

I am trying to export ROS_MASTER_URI from a shell script and then launch roscore. In my .sh file I have:
roxterm --tab -e $SHELL -c "cd $CATKIN_WS; $srcdevel; export ROS_MASTER_URI='http://locahost:1234'; roscore -p 1234"
When I do this, however, I get the following error in the roscore tab:
WARNING: ROS_MASTER_URI [http://locahost:1234] host is not set to this machine.
When I echo the ROS_MASTER_URI in this tab, it says that it is localhost:1234, which is correct. When I manually execute these commands, it works correctly and roscore launches without any issues. I am not sure why it does not work when launched from a bash file.
It was just a typo: I missed an l in localhost. All working now.
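For completeness, the corrected line differs only in the spelling of localhost:
roxterm --tab -e $SHELL -c "cd $CATKIN_WS; $srcdevel; export ROS_MASTER_URI='http://localhost:1234'; roscore -p 1234"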

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or seeing some working version of using a bash script as an Entrypoint for a Docker container. I've been trying numerous things for about 5 hours now.
Even following this official Docker blog post, using a bash script as an entrypoint still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
chown -R postgres "$PGDATA"
if [ -z "$(ls -A "$PGDATA")" ]; then
gosu postgres initdb
fi
exec gosu postgres "$#"
fi
exec "$#"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get Dockerfile using an Entrypoint as a bash script that doesn't continuously restart and fail repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure if that depends on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$#"
And since that also failed, I think that rules out something else going wrong inside the script being the cause of the failure.
What's also frustrating is there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to something like tail -f /dev/null or passing a command at docker run. I was hoping there would be some consistent, reliable way that a docker-entrypoint.sh script could be used to start services, and that I could use with docker run as well, but even with Docker's official blog and countless questions and blog posts elsewhere, I can't seem to get a single example to work.
I would really appreciate some insight into what I'm missing here.
"$@" is just the command-line arguments. You are providing none, so it executes an empty string. That exits and will kill the container. Also, exec always ends the running script - it destroys the current shell and replaces it with the new command; it doesn't keep the script running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your desire not to use tail -f /dev/null. Creating a new shell over and over as fast as the script can go is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while(1) loop would probably work.
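A rough sketch of that suggestion (purely illustrative; the condition and sleep interval are placeholders):
#!/bin/bash
# Keep the container's main process alive by polling in a loop.
while true; do
  # ... check the condition and react to it here ...
  sleep 5
done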
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist; if it's a binary, it must be statically linked or all of its shared library dependencies must be in the image already. For your script you might need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the debian base image, so your entrypoint will fail (and your container will exit) trying to run that command. (That won't affect the very simple case though.)
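One way to address that, assuming the gosu package is available from the image's apt repositories (if not, it can be fetched from the gosu project's releases instead), would be to install it in the Dockerfile; note that postgres itself would also have to be present in the image for the full example to start:
FROM debian:stretch
# Assumption: gosu can be installed via apt in this base image.
RUN apt-get update \
 && apt-get install -y --no-install-recommends gosu \
 && rm -rf /var/lib/apt/lists/*
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]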
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

Is it possible to view docker-compose logs in the output window running in Windows?

docker-compose on Windows cannot be run in interactive mode.
ERROR: Interactive mode is not yet supported on Windows.
Please pass the -d flag when using `docker-compose run`.
When running docker-compose in detached mode, little is displayed to the console, and the only logs displayed under docker-compose logs appear to be:
Attaching to
which obviously isn't very useful.
Is there a way of accessing these logs for transient containers?
I've seen that it's possible to change the Docker daemon's logging to use a file (without the ability to select the log location). Following this as a solution, I could log to the predefined location, then run a copy script to move the files to a mounted volume to be persisted before the container is torn down. This doesn't sound ideal.
The solution I've currently gone with (also not ideal) is to wrap the shell script parameter in a dynamically created proxy script which logs all output to the mounted volume.
tempFile=myproxy.sh
echo '#!/bin/bash' > $tempFile
echo 'do.the.thing.sh 2> /data/log.txt' >> $tempFile
echo 'echo finished >> /data/logs/log.txt' >> $tempFile
Which then I'd call
docker-compose run -d doTheThing $tempFile
instead of
docker-compose run -d doTheThing do.the.thing.sh
docker-compose logs doTheThing
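For the log file to survive the container, /data has to be a volume mounted from the host; assuming a docker-compose version that supports -v on run, the mount could also be supplied on the command line (the host path here is illustrative):
docker-compose run -d -v "$(pwd)/logs:/data" doTheThing $tempFile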

sending script over ssh using ruby

I'm attempting to write a Ruby script that runs a bash command to start a Resque worker for one of my apps.
The command that I generate from the params given in the console looks like this...
command = "ssh user##{#ip} 'cd /path/to/app; bundle exec rake resque:work QUEUE=#{#queue}&'"
`#{command}`
The command is interpolated correctly and everything looks great. I'm asked to input the password for the ssh command and then nothing happens. I'm pretty sure my syntax is correct for making an ssh connection and running a line of code within that connection: ssh user@host 'execute command'
I've tried a simpler command that only runs the macOS say terminal command, and that worked fine:
command = "ssh user##{#ip} 'say #{#queue}'"
`command`
I'm running the rake task in the background because I have run that line inside an ssh session before, and it only keeps the worker alive if you run the process in the background.
Any thoughts? Thanks!
I figured it out.
It was an rvm thing. I needed to include . .bash_profile at the beginning of the scripts I wanted to run.
So...
"ssh -f hostname '. .bash_profile && cd /path/to/app && bundle exec rake resque:work QUEUE=queue'" is what I needed to make it work.
Thanks for the help @Casper
ssh won't close the session until all processes that were launched by the command argument have finished. It doesn't matter if you run them in the background with &.
To get around this problem just use the -f switch:
-f   Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.
I.e.
"ssh -f user##{#ip} '... bundle exec rake resque:work QUEUE=#{#queue}'"
EDIT
In fact looking more closely at the problem it seems ssh is just waiting for the remote side to close stdin and stdout. You can test it easily like this:
This hangs:
ssh localhost 'sleep 10 &'
This does not hang:
ssh localhost 'sleep 10 </dev/null >/dev/null &'
So I assume the last version is actually pretty closely equivalent to running with -f.
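Applied to the Ruby command from the question, that redirection would look something like this (an untested sketch; system is used here simply to run the interpolated string):
command = "ssh user@#{@ip} 'cd /path/to/app; " \
          "bundle exec rake resque:work QUEUE=#{@queue} </dev/null >/dev/null &'"
system(command)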
