Need input on passing commands to Kubernetes containers - shell

What is the difference in the below three declarations for passing command/arguments:
containers:
  - name: busybox
    image: busybox
    args:
      - sleep
      - "1000"
containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "sleep 1000"]
containers:
  - name: busybox
    image: busybox
    args: ["sleep", "1000"]
A. Would these produce same result?
B. What is the preference or usage for each?

The YAML list definitions are only a matter of taste; it's just YAML syntax. These two examples are equivalent:
listOne:
- item1
- item2
listTwo: ['item1', 'item2']
And this syntax works for both args and command. Beyond that, args and command are slightly different, as the documentation says:
If you do not supply command or args for a Container, the defaults
defined in the Docker image are used
If you supply a command but no args for a Container, only the supplied command is used. The default EntryPoint and the default Cmd defined in the Docker image are ignored.
If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied.
If you supply a command and args, the default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args.
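Those four rules can be condensed into a small decision sketch. This is illustrative only: the function name is made up, and real values are argv arrays, not space-separated strings.

```shell
#!/bin/sh
# effective_argv: sketch of how Kubernetes combines command:/args: with the
# image's ENTRYPOINT/CMD. Pass empty strings for values that are not set.
effective_argv() {
  command="$1"; args="$2"; entrypoint="$3"; cmd="$4"
  if [ -n "$command" ] && [ -n "$args" ]; then
    echo "$command $args"       # both given: image defaults fully ignored
  elif [ -n "$command" ]; then
    echo "$command"             # command only: image CMD is ignored too
  elif [ -n "$args" ]; then
    echo "$entrypoint $args"    # args only: image ENTRYPOINT keeps running
  else
    echo "$entrypoint $cmd"     # neither: pure image defaults
  fi
}

# e.g. args only, for an image with ENTRYPOINT docker-entrypoint.sh, CMD mysqld:
effective_argv "" "mysqld --skip-grant-tables" "docker-entrypoint.sh" "mysqld"
# -> docker-entrypoint.sh mysqld --skip-grant-tables
```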
Imagine a container like mysql. If you look at its Dockerfile you'll notice this:
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mysqld"]
The entrypoint calls a script that prepares everything the database needs; when it finishes, this script calls exec "$@", and the shell variable "$@" holds everything defined in CMD.
So, on Kubernetes, if you want to pass arguments to mysqld you do something like:
image: mysql
args:
- mysqld
- --skip-grant-tables
# or args: ["mysqld", "--skip-grant-tables"]
This still executes the entrypoint, but now the value of "$@" is mysqld --skip-grant-tables.
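The exec "$@" hand-off is easy to demonstrate with a toy entrypoint (a hypothetical script standing in for the real docker-entrypoint.sh):

```shell
#!/bin/sh
# toy-entrypoint.sh: minimal sketch of the docker-entrypoint.sh pattern.
# Do the one-time setup work, then replace this script (PID 1 in a
# container) with whatever was passed via CMD / Kubernetes args:.
echo "running init steps..."
exec "$@"
```

Running ./toy-entrypoint.sh mysqld --skip-grant-tables would print the init line and then replace the shell process with mysqld --skip-grant-tables.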

Related

ERROR: unrecognised command `sh`, or `python`, or `bash` while all can be executed inside the image

I want to run a script after a docker image has been initialized. The image in question is a node:16 with python and other stuff
https://github.com/Flagsmith/flagsmith/blob/main/Dockerfile
Anyway, if I run the image without commands or an entrypoint it starts successfully. If I log in using docker exec -it ###### /bin/bash I can then run sh, bash or even python.
However having:
flagsmith:
image: flagsmith/flagsmith:latest
environment:
# skipping for readability
ports:
- "9000:8000"
depends_on:
- flotto-postgres
links:
- flotto-postgres
volumes: ['./init_flagsmith.py:/init_flagsmith.py', './init_flagsmith.sh:/init_flagsmith.sh']
command: /bin/bash '/init_flagsmith.sh' # <-------- THIS GUY IS NOT WORKING
it does not run; the returned error code is 0, with this message (depending on the tool I run in init_flagsmith.sh):
ERROR: unrecognised command '/bin/bash'
If you look at the end of the Dockerfile you link to, it specifies
ENTRYPOINT ["./scripts/run-docker.sh"]
CMD ["migrate-and-serve"]
In the Compose file, the command: overrides the Dockerfile CMD, but it still is passed as arguments to the ENTRYPOINT. Looking at the run-docker.sh script, it does not accept a normal shell command as its arguments, but rather one of a specific set of command keywords (migrate, serve, ...).
You could in principle work around this by replacing command: with entrypoint: in your Compose file. However, you'll still run into the problem that a container only runs one process, and so your setup script runs instead of the normal container process.
What you might do instead is set up your initialization script to run the main entrypoint script when it finishes.
#!/bin/sh
# init_flagsmith.sh
...
# at the very end
exec ./scripts/run-docker.sh "$@"
I might also package this up into an image, rather than injecting the files using volumes:. You can create an image FROM any base image you want to extend.
# Dockerfile
FROM flagsmith/flagsmith:latest
COPY init_flagsmith.sh init_flagsmith.py ./
# ENTRYPOINT must use JSON-array syntax
ENTRYPOINT ["./init_flagsmith.sh"]
# CMD must be repeated from the base image if you change ENTRYPOINT
CMD ["migrate-and-serve"]
Then you can remove these options from the Compose setup (along with the obsolete links:)
flagsmith:
build: .
environment:
# skipping for readability
ports:
- "9000:8000"
depends_on:
- flotto-postgres
# but no volumes:, command:, or entrypoint:

Kubernetes: using bash variable expansion in container entrypoint

According to the documentation, Kubernetes variables are expanded using the previously defined environment variables in the container, using the syntax $(VAR_NAME). The variable can be used in the container's entrypoint.
For example:
env:
- name: MESSAGE
value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
Is it possible, though, to use bash expansion, i.e. ${Var1:-${Var2}}, for the Kubernetes environment variables inside the container's entrypoint? E.g.:
env:
- name: Var1
value: "hello world"
- name: Var2
value: "no hello"
command: ['bash', '-c', "echo ${Var1:-$Var2}"]
Is it possible, though, to use bash expansion, i.e. ${Var1:-${Var2}}, inside the container's entrypoint?
Yes, by using
command:
- /bin/bash
- "-c"
- "echo ${Var1:-${Var2}}"
but not otherwise. Kubernetes is not a wrapper for bash; it uses the Linux exec system call to launch programs inside the container, so the only way to get bash behavior is to launch bash.
That's also why they chose the $() syntax for their environment interpolation: so it would be different from the ${} style that a shell would use. Although this question comes up so much that one might wish they had not gone with $-anything, to avoid further confusing folks.
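What that expansion actually does (purely inside the shell; Kubernetes never sees it) can be checked in any POSIX shell:

```shell
# ${Var1:-$Var2} expansion: use Var1 if it is set and non-empty,
# otherwise fall back to Var2. This is shell behavior, not Kubernetes.
unset Var1
Var2="no hello"
echo "${Var1:-$Var2}"    # Var1 unset -> prints: no hello
Var1="hello world"
echo "${Var1:-$Var2}"    # Var1 set   -> prints: hello world
```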

Send variables into docker container to use in a script

I am running a script in the CI/CD pipeline. The goal is to get a string to work with.
When I get that result, I save it into a variable, and save the result in the YAML file for the Dockerfile.
I want to pass that variable from the CI environment into the docker-compose container. So I am trying to export it the way other things are exported; however, it doesn't work:
ci/pdf/jenkins-changes.sh
LOG="$(cat ".log")"
export LOG
I have added a variables.env file that looks like this:
LOG=LOG
And then modified the docker-compose.yaml to read the var:
pdf:
image: thisimage/this
build:
context: ../
dockerfile: ./docker/Dockerfile.name
args:
git_branch: ${GIT_BRANCH}
env_file:
- variables.env
environment:
- LOG=${LOG}
volumes:
- do-build:/src/do-build
And in the Dockerfile that finally builds the container, I have also declared it:
FROM ubuntu:16.04 as pdf-builder
ARG log
ENV log=${log}
RUN LOG=${log}
RUN export $LOG
And right after, I run the script.sh that requires the variable; however, it returns an "unbound variable" error and breaks.
LOG=${log}
echo ${LOG}
The answer to this question was:
ci/pdf/jenkins-changes.sh
LOG="$(cat ".log")"
export LOG
Then pass it as an argument, instead of a variable:
pdf:
image: thisimage/this
build:
context: ../
dockerfile: ./docker/Dockerfile.name
args:
git_branch: ${GIT_BRANCH}
env_file:
- variables.env
environment:
- LOG=${LOG}
volumes:
- do-build:/src/do-build
And then, in the Dockerfile, declare it:
ARG log
This should leave it available globally for any script to use.

Docker compose won't find $PWD environment variable

Here's my docker-compose:
version: '2'
services:
couchpotato:
build:
context: ./couchpotato
dockerfile: Dockerfile
ports:
- 5050:5050
volumes:
- "${PWD}/couchpotato/data:/home/CouchPotato/data/"
- "${PWD}/couchpotato/config:/home/CouchPotato/config/"
When I run it inside the shell, in the directory of the docker-compose.yml, I get:
WARNING: The PWD variable is not set. Defaulting to a blank string.
and the compose starts with PWD being empty.
I don't see any error in the file, as seen here: https://docs.docker.com/compose/environment-variables/
You don't need ${PWD} for this, you can just make the path relative and compose will expand it (one major difference between compose paths and those processed by docker run).
version: '2'
services:
couchpotato:
build:
context: ./couchpotato
dockerfile: Dockerfile
ports:
- 5050:5050
volumes:
- "./couchpotato/data:/home/CouchPotato/data/"
- "./couchpotato/config:/home/CouchPotato/config/"
As for why compose doesn't see this variable, that depends on your shell. Compose looks for an exported environment variable, contents of the .env file, and command line flags to the docker-compose command. If each of those comes up empty for the variable, you'll get that warning.
My advice: change all $PWD to .
$PWD will not work if you are running using sudo. Try the recommended settings from Docker for Linux https://docs.docker.com/engine/install/linux-postinstall/.
Sudo will run as a different user, with a different env.
$ sudo env | grep -i pwd
$ env | grep -i pwd
PWD=/home/user
OLDPWD=/
If you really need absolute paths, then call this before calling docker-compose up:
set PWD=%CD%
I had the same issue with one of my env vars. On looking at my bashrc file more closely, I found out that I hadn't exported that variable.
Before:
VAR=<value>
After:
export VAR=<value>
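The shell rule behind this fix, that only exported variables reach child processes such as docker-compose, can be checked directly (illustrative snippet):

```shell
# Only exported variables are inherited by child processes.
# A plain assignment stays local to the current shell.
unset VAR
VAR="from-parent"                 # plain assignment: not exported
sh -c 'echo "${VAR:-unset}"'      # child shell -> prints: unset
export VAR                        # now children can see it
sh -c 'echo "${VAR:-unset}"'      # child shell -> prints: from-parent
```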

Interactive shell using Docker Compose

Is there any way to start an interactive shell in a container using Docker Compose only? I've tried something like this, in my docker-compose.yml:
myapp:
image: alpine:latest
entrypoint: /bin/sh
When I start this container using docker-compose up it exits immediately. Are there any flags I can add to the entrypoint command, or as an additional option to myapp, to start an interactive shell?
I know there are native docker command options to achieve this, just curious if it's possible using only Docker Compose, too.
You need to include the following lines in your docker-compose.yml:
version: "3"
services:
app:
image: app:1.2.3
stdin_open: true # docker run -i
tty: true # docker run -t
The first corresponds to -i in docker run and the second to -t.
The canonical way to get an interactive shell with docker-compose is to use:
docker-compose run --rm myapp
(With the service name myapp taken from your example. More general: it must be an existing service name in your docker-compose file, myapp is not just a command of your choice. Example: bash instead of myapp would not work here.)
You can set stdin_open: true, tty: true, however that won't actually give you a proper shell with up, because logs are being streamed from all the containers.
You can also use
docker exec -ti <container name> /bin/bash
to get a shell on a running container.
The official getting started example (https://docs.docker.com/compose/gettingstarted/) uses the following docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- "8000:5000"
redis:
image: "redis:alpine"
After you start this with docker-compose up, you can shell into either your redis container or your web container with:
docker-compose exec redis sh
docker-compose exec web sh
docker-compose run myapp sh should do the trick.
There is some confusion with up/run, but docker-compose run docs have great explanation: https://docs.docker.com/compose/reference/run
If anyone from the future also wanders up here:
docker-compose exec service_name sh
or
docker-compose exec service_name bash
or you can run single lines like
docker-compose exec service_name php -v
That is after you already have your containers up and running.
The service_name is defined in your docker-compose.yml file
Using docker-compose, I found the easiest way to do this is to do a docker ps -a (after starting my containers with docker-compose up) and get the ID of the container I want to have an interactive shell in (let's call it xyz123).
Then it's a simple matter to execute
docker exec -ti xyz123 /bin/bash
and voila, an interactive shell.
This question is very interesting to me because I had a problem where the container exited immediately after execution finished, and I fixed it with -it:
docker run -it -p 3000:3000 -v /app/node_modules -v $(pwd):/app <your_container_id>
And when I must automate it with docker compose:
version: '3'
services:
frontend:
stdin_open: true
tty: true
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
volumes:
- /app/node_modules
- .:/app
This does the trick: stdin_open: true, tty: true
This is a project generated with create-react-app
Dockerfile.dev looks like this:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Hope this example helps others run a frontend (React in this example) in a Docker container.
I prefer
docker-compose exec my_container_name bash
If the yml is called docker-compose.yml it can be launched with a simple $ docker-compose up. A terminal can then be attached simply with (assuming the yml specifies a service called myservice):
$ docker-compose exec myservice sh
However, if you are using a different yml file name, such as docker-compose-mycompose.yml, it should be launched using $ docker-compose -f docker-compose-mycompose.yml up. To attach an interactive terminal you have to specify the yml file too, just like:
$ docker-compose -f docker-compose-mycompose.yml exec myservice sh
An addition to this old question, since I only ran into the case recently: the difference between sh and bash. It can happen that for some images bash doesn't work and only sh does.
So you can use:
docker-compose exec CONTAINER_NAME sh
and in most cases:
docker-compose exec CONTAINER_NAME bash
If you have time, the difference between sh and bash is well explained here:
https://www.baeldung.com/linux/sh-vs-bash
You can do docker-compose exec SERVICE_NAME sh on the command line. The SERVICE_NAME is defined in your docker-compose.yml. For example,
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
The SERVICE_NAME would be "zookeeper".
According to documentation -> https://docs.docker.com/compose/reference/run/
You can use this docker-compose run --rm app bash
[app] is the name of your service in docker-compose.yml
