Run two executables in docker - bash

I would like my container to launch two processes when it is run;
the generic "this process is running to keep the container awake" process (keep_awake.sh), and
node app.js
Is there any way to have both of these launch at the start, based on the Dockerfile?
I'm thinking of some sort of abuse of bash, but don't know specifically which one yet.
Further complicating things, keep_awake.sh is in a different directory than app.js.

You should never need an artificial “keep this container alive” process. This is doubly true in the situation you’re describing, where you have a single long-running application process.
Best practice is for a Docker container to run a single process, and run it as a foreground job. If that process ever exits, the container will exit too — and you want this. (It’d be kind of embarrassing for your Node app to die but for you to not notice, because Docker sees that tail -f /dev/null is still up and running.)
In short, end your Dockerfile with
CMD ["node", "app.js"]
and ignore the second do-nothing process.
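For reference, a complete Dockerfile in that style might look like the following sketch (the base image and paths are illustrative, not taken from the question):
FROM node:18-slim
WORKDIR /usr/src/app
COPY . .
# node is the single foreground process; the container lives
# exactly as long as the app does
CMD ["node", "app.js"]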

Related

kubectl exec commands are not recorded in pod's bash history

Is there a way for the kubectl exec commands to be recorded by history command in the pod?
I want to collect all the commands executed on the container by monitoring the history command.
Example:
kubectl exec podname -n namespace -c container -- bash -c "ls" ==> Can this be recorded by the history command?
A couple of things to clarify in order to get the full context of this behavior:
First, kubectl exec is a neat API-based wrapper for docker exec.
This is essential as it means that it'll use the Kubernetes API to relay your commands to Docker. This implies that whatever behavior, shell-related in this case, is directly linked to how Docker implemented the command execution within containers, and has not much to do with kubectl.
The second thing to keep in mind is the command itself: history, which is a shell feature and not a separate binary.
The implementation depends on the shell used, but generally the shell caches the command history in memory and only writes it to the history file after the session is closed. Writing can be forced using the shell's history features, but the details vary between shell implementations, so this is not a uniform, reliable approach when working with the docker exec command.
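In bash, for instance, the flush can be forced with the history builtin (bash-specific; other shells have different mechanisms):
# append the in-memory history to $HISTFILE immediately
history -a
# read entries that other sessions have written since startup
history -n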
Considering these points, and given that your goal seems to be monitoring actions performed in your containers, a better approach might be to use Linux audit to record the commands executed by users.
Not only does this avoid the issues above, it also writes logs to a file, which lets your Kubernetes logging strategy pick them up and export them to whatever you use as a log sink, making later inspection much easier.
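As a rough sketch, assuming the audit daemon is available on the node (the rule key exec-monitor is arbitrary):
# record every execve() syscall on the 64-bit ABI, tagged with a searchable key
auditctl -a always,exit -F arch=b64 -S execve -k exec-monitor
# later, list the recorded commands in human-readable form
ausearch -k exec-monitor --interpret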

Can't terminate node(js) process without terminating ssh server in docker container

I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the ssh server is closed as well (forcing me to restart the container to reconnect).
Any simple way to avoid this behavior?
Thank you.
The container exits as soon as the main process of the container exits. In your case, the main process inside the container is the start.sh shell script. The start.sh script starts the ssh service and then runs the node process as a child process. Once the node process dies, the shell script exits as well, and so the container exits. So what you can do is put the node process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# The shell script must not exit, or the container will stop;
# keep it alive with an infinite loop.
while true; do
    sleep 2
done
I DO NOT recommend this approach though. You should have only a single process per container. Read the following answer to understand why:
Running multiple applications in one docker container
If you still want to run multiple processes inside a container, there are better ways to do it, such as using supervisord: https://docs.docker.com/config/containers/multi-service_container/
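A minimal supervisord configuration for this case might look like the sketch below (program names and paths are illustrative, borrowed from the question above):
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:node]
command=/usr/bin/node /myApp/app.js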

Escaping Docker attach once started from bash script

I'm running Docker commands from a management script and one of the commands I use is attach.
Attaching works fine but I can't seem to leave the output either by pressing CTRL+C or CTRL+P/CTRL+Q.
My theory is that the key signals are not picked up by Docker since bash is running in front of it.
Is this correct? And how do I solve this?

What purpose does using exec in docker entrypoint scripts serve?

For example in the redis official image:
https://github.com/docker-library/redis/blob/master/2.8/docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'redis-server' ]; then
    chown -R redis .
    exec gosu redis "$@"
fi
exec "$@"
Why not just run the commands as usual without exec preceding them?
As @Peter Lyons says, using exec will replace the parent process, rather than have two processes running.
This is important in Docker for signals to be proxied correctly. For example, if Redis was started without exec, it will not receive a SIGTERM upon docker stop and will not get a chance to shutdown cleanly. In some cases, this can lead to data loss or zombie processes.
If you do start child processes (i.e. don't use exec), the parent process becomes responsible for handling and forwarding signals as appropriate. This is one of the reasons it's best to use supervisord or similar when running multiple processes in a container, as it will forward signals appropriately.
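If you do keep a parent shell, a common pattern is to trap and forward signals yourself. A minimal sketch, reusing the illustrative paths from the earlier question:
#!/bin/bash
/usr/bin/node /myApp/app.js &
child=$!
# forward SIGTERM/SIGINT to the child so it can shut down cleanly
trap 'kill -TERM "$child"' TERM INT
wait "$child"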
Without exec, the parent shell process survives and waits for the child to exit. With exec, the child process replaces the parent process entirely so when there's nothing for the parent to do after forking the child, I would consider exec slightly more precise/correct/efficient. In the grand scheme of things, I think it's probably safe to classify it as a minor optimization.
without exec
parent shell starts
parent shell forks child
child runs
child exits
parent shell exits
with exec
parent shell starts
parent shell execs the child program, replacing itself (no extra fork)
child program runs taking over the shell's process
child exits
Think of it as an optimization like tail recursion.
If running another program is the final act of the shell script, there's not much of a need to have the shell run the program in a new process and wait for it. Using exec, the shell process replaces itself with the program.
In either case, the exit value of the shell script will be identical¹. Whatever program originally called the shell script will see an exit value that is equal to the exit value of the exec'ed program (or 127 if the program cannot be found).
¹ Modulo corner cases such as a program doing something different depending on the name of its parent.
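You can watch the replacement happen from any shell (a hypothetical demo; the PID will vary):
# the shell prints its own PID, then replaces itself with sleep
bash -c 'echo "shell pid: $$"; exec sleep 300' &
# 'ps -p <printed pid>' now reports sleep, not bash:
# the same PID, but a new program image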

How can I create a process in Bash that has zero overhead but which gives me a process ID

For those of you who know what you're talking about I apologise for butchering the way that I'm going to phrase this question. I know nothing about bash whatsoever. With that caveat out of the way, let me get out my cleaver...
I am building a Rails app which has what's called a Procfile, which sets up the processes that need to be run in different environments,
e.g.
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
redis: redis-server
worker: bundle exec sidekiq
proxylocal: bin/proxylocal_local
Each one of these lines specifies a process to be run. It also expects a pid to be returned after the process spins up. The syntax is
process_name: process_invokation_script
However the last process, proxylocal, only actually starts a process in development. In production it doesn't do anything.
Unfortunately, that causes the Procfile to choke, as it needs a process ID returned. So is there some super-simple, zero-overhead process that I can spawn in that case just to keep the Procfile happy?
The sleep command does nothing for a specified period of time, with very low overhead. Give it an argument longer than your code will run.
For example
sleep 2147483647
does nothing for 2^31 - 1 seconds, just over 68 years. I picked that number because any reasonable implementation of sleep should be able to handle it.
In the unlikely event that that doesn't work (say, if you're on an old 16-bit system that can't sleep for more than 2^16 - 1 seconds), you can do a sleep in an infinite loop:
sh -c 'while : ; do sleep 30000 ; done'
This assumes that you need the process to run for a very long time; that depends on what your application needs to do with the process ID. If it's required to be unique as long as the application is running, you need something that will continue to run for a long time; if the process terminates, its PID can be re-used by another process.
If that's not a requirement, you can use sleep 0 or true, which will terminate immediately.
If you need to give the application a little time to get the process ID before the process terminates, something like sleep 10 or even sleep 1 might work, though determining just how long it needs to run can be tricky and error-prone.
If Heroku isn't doing anything with proxylocal, I'm not sure why you'd even want this in your Procfile. I'm also a bit confused about whether you want to change the Procfile or what bin/proxylocal_local does, and how you would even do that.
That being said, if you are able to do anything you like for production, your script can just call cat, which will get a pid and then just sit waiting for input on stdin (which never comes).
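In Procfile terms that could be as simple as (illustrative):
proxylocal: cat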
For truly minimal overhead, you don't want to run any external commands. When the shell starts a command, it first forks itself, then the child shell execs the external command. If the forked child can run a builtin, you can skip the exec.
Start by creating a read-only fifo somewhere.
mkfifo foo
chmod 400 foo
Then, whenever you need a do-nothing process, just fork a shell which tries to read from the fifo. It's read-only, so no one can write to it, so all reads will block.
read < foo &
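The PID your Procfile needs is then available through $! immediately after the fork:
echo "pid: $!"    # PID of the forked subshell; read is a builtin, so no exec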
