A clarification on the command syntax in Kubernetes manifest files - shell

I'm working on a question that wants me to deploy a pod with the nginx image. The pod should sleep for 5000 seconds.
The command can be specified like so:
command: ["sleep", "5000"]
or like so:
command:
- sleep
- "5000"
Why can't the command be specified like so:
command:
- sh
- -c
- sleep "5000"
or like so:
command:
- sleep "5000"
In other words, I'm confused about two things:
What does sh -c do? I understand that -c is there to denote arguments, but isn't the sleep command run using sh?
When can the command and args be listed on the same line, and when do they have to be on separate lines? In this example, why doesn't sleep "5000" work? Why do sleep and "5000" have to be on separate lines? Also, why are quotes around the number 5000 required?

Note that command and args in K8s object definitions or manifests are in fact the entrypoint and cmd fields found in container image definitions, and they are expected to behave the same way. For example, if you look at how Docker images are defined, you will find entrypoint and cmd among the most commonly used fields.
Supplying command and/or args in a K8s object definition overrides the entrypoint and cmd fields of the associated container image.
entrypoint in a Docker image definition, for example, may be given either as a single string (sleep 5000) or as a broken-down array (["sleep", "5000"]). Either way, it is eventually turned into an array of arguments (i.e. sleep 5000 becomes ["sleep", "5000"]).
I suppose K8s simplifies this by letting you supply it only as an array.
This article is a good reference for understanding how entrypoint and cmd interact in container image definitions. The behavior feels unnecessarily complicated, and I cannot thank the K8s contributors enough for simplifying it at least to some extent.
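As a concrete sketch of that mapping (the image content is illustrative, not from the question), a Dockerfile might declare:
ENTRYPOINT ["sleep"]
CMD ["5000"]
In a pod spec, command overrides ENTRYPOINT while args overrides CMD, so command: ["sleep"] plus args: ["5000"] reproduces the same pair at the manifest level.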

The two ways of running the command:
command: ["sleep", "5000"]
Is exactly the same as:
command:
- sleep
- "5000"
In both cases it is interpreted as a list; the inline form and the block form are just two YAML ways of writing the same list. The quotes around 5000 are required because an unquoted 5000 would be parsed by YAML as an integer, while every element of command must be a string.
The command cannot be specified as a single list item sleep "5000" because the whole string would become one array element, and the container runtime would look for an executable literally named sleep "5000" rather than running sleep with the argument 5000.
Think of it as running this command in a shell:
`"sleep 5000"`
This would be run as a single word: the shell would look for a command whose name is literally sleep 5000, and 5000 would not be interpreted as an argument. Whereas the expression command: ["sleep", "5000"] would be interpreted as:
`sleep` "5000"
Thus 5000 is correctly interpreted as an argument of sleep.
As for sh -c: this calls the program sh (a shell) as the interpreter, and the -c flag tells the shell to execute the command string that follows it. So command: ["sh", "-c", "sleep 5000"] would execute correctly; the shell simply runs sleep as a child process. It is just redundant here, because sleep needs no shell features such as variable expansion, pipes, or operators.
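Putting it all together, a minimal manifest for the exercise might look like this (the pod name is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sleeper
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["sleep", "5000"]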

Related

Loop through docker output until I find a String in bash

I am quite new to bash (barely any experience at all) and I need some help with a bash script.
I am using docker-compose to create multiple containers - for this example let's say 2 containers. The 2nd container will execute a bash command, but before that, I need to check that the 1st container is operational and fully configured. Instead of using a sleep command I want to create a bash script that will be located in the 2nd container and once executed do the following:
Execute a command and log the console output in a file
Read that file and check if a String is present. The command that I will execute in the previous step will take a few (5 - 10) seconds to complete, and I need to read the file after it has finished executing. I suppose I can add a sleep to make sure the command has finished executing, or is there a better way to do this?
If the string is not present I want to execute the same command again until I find the String I am looking for
Once I find the string I am looking for I want to exit the loop and execute a different command
I found out how to do this in Java, but I need to do this in a bash script.
The docker-containers have alpine as an operating system, but I updated the Dockerfile to install bash.
I tried this solution, but it does not work.
#!/bin/bash
[command to be executed] > allout.txt 2>&1
until
tail -n 0 -F /path/to/file | \
while read LINE
do
if echo "$LINE" | grep -q $string
then
echo -e "$string found in the console output"
fi
done
do
echo "String is not present. Executing command again"
sleep 5
[command to be executed] > allout.txt 2>&1
done
echo -e "String is found"
In your docker-compose file, make use of the depends_on option.
depends_on takes care of the startup and shutdown order of your containers.
But it does not check whether a container is actually ready before starting the next one. To handle that scenario, there are a couple of options.
As described in this link,
You can use tools such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
OR
Alternatively, write your own wrapper script to perform a more application-specific health check.
If you don't want to use the above tools, you can instead combine a HEALTHCHECK in the image with the service_healthy condition on depends_on, as sketched below.
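A sketch of that pattern (service names, images, and the health-check command are all illustrative):
services:
  first:
    image: my/first-container
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 5s
      retries: 10
  second:
    image: my/second-container
    depends_on:
      first:
        condition: service_healthy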
Just:
while :; do
    # 1. Execute a command and log the console output in a file
    command > output.log
    # TODO: handle errors, etc.
    # 2. Read that file and check if a String is present.
    if grep -q "searched_string" output.log; then
        # Once I find the string I am looking for I want to exit the loop
        break
    fi
    # 3. If the string is not present, execute the same command again until the String is found
    # add e.g. sleep 0.1 so the loop delays a little and does not use 100% CPU
done
# ...and execute a different command
different_command
You can also put a time limit on the whole thing with the timeout utility.
Notes:
: (colon) is a utility that returns a zero exit status, much like true. I prefer while : over while true; they mean the same thing.
The code presented should work in any POSIX shell.
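For example, a sketch that bounds the whole polling loop with timeout (the 60-second limit and the names are illustrative):
timeout 60 sh -c 'until grep -q "searched_string" output.log; do
    command > output.log 2>&1
    sleep 1
done' && different_command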

How to escape and store shell command line arguments into one argument?

Inb4 anyone saying this is a bad idea, it's actually a reasonable approach for things like this.
I'm writing a Docker container that can execute user-provided commands through a contained OpenVPN connection, e.g. docker run vpntunnel curl example.com.
So the ENTRYPOINT of the image will fire up OpenVPN and, after the VPN tunnel is up, execute the user-provided CMD line.
Problem is, the standard way to run commands after OpenVPN is up is through the --up option of OpenVPN. Here is the man page description of this option:
--up cmd
Run command cmd after successful TUN/TAP device open (pre --user UID change).
cmd consists of a path to script (or executable program), optionally followed
by arguments. The path and arguments may be single- or double-quoted and/or
escaped using a backslash, and should be separated by one or more spaces.
So the reasonable approach here is for ENTRYPOINT script to correctly escape the user provided CMD line and pass the whole thing as one parameter to --up option of OpenVPN.
In case my Docker image needs to perform some initialization after the tunnel is up and before the user command line is executed, I can prepend a script to the user-provided CMD line like this: --up 'tunnel-up.sh CMD...' and, in the last line of tunnel-up.sh, use "$@" to execute the user-provided arguments.
Now as you may guess, the only problem left is how to correctly escape an entire command line to be able to passed as a single argument.
The naive approach is just --up "tunnel-up.sh $*", but that obviously can't distinguish between the command lines a b c and "a b" c.
In bash 4.4+ you can use the @Q parameter transformation to quote values:
--up "tunnel-up.sh ${*@Q}"
In prior versions you could use printf '%q' to achieve the same effect:
--up "tunnel-up.sh $( (($#)) && printf '%q ' "$@" )"
(The (($#)) check makes sure there are parameters to print before calling printf.)
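A quick illustrative way to see the difference between the expansions (run in bash 4.4+):
set -- "a b" c
echo "tunnel-up.sh $*"                                # tunnel-up.sh a b c   (word boundaries lost)
echo "tunnel-up.sh ${*@Q}"                            # tunnel-up.sh 'a b' 'c'
echo "tunnel-up.sh $( (($#)) && printf '%q ' "$@" )"  # tunnel-up.sh a\ b c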

Docker CMD evaluation with ENTRYPOINT

I have a Dockerfile that falls into the exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd category in the matrix found in Understand how CMD and ENTRYPOINT interact. The behavior is not what I expected: I thought /bin/sh -c exec_cmd p1_cmd would be evaluated first and the result passed to exec_entry p1_entry. What I observe instead is that /bin/sh -c exec_cmd p1_cmd is passed literally to exec_entry p1_entry, which I think is funny.
To provide more context, I am specifically deriving a new Dockerfile from an existing Dockerfile where the parent has:
ENTRYPOINT ["/bin/registrator"]
I want to pass in specific command-line parameters from my Dockerfile:
FROM gliderlabs/registrator:v7
CMD echo "-ip=$EXTERNAL_IP consul://$CONSUL_HOST"
When I run my Docker image in a container:
$ docker run --rm --name=test-registrator --volume=/var/run/docker.sock:/tmp/docker.sock -e "EXTERNAL_IP=<some-ip>" -e "CONSUL_HOST=<some-consul-hostname>:8500" my/registrator
I get the following error:
2016/12/28 19:20:46 Starting registrator v7 ...
Extra unparsed arguments:
-c echo "-ip=$EXTERNAL_IP consul://$CONSUL_HOST"
Options should come before the registry URI argument.
Usage of /bin/registrator:
/bin/registrator [options] <registry URI>
-cleanup=false: Remove dangling services
-deregister="always": Deregister exited services "always" or "on-success"
-internal=false: Use internal ports instead of published ones
-ip="": IP for ports mapped to the host
-resync=0: Frequency with which services are resynchronized
-retry-attempts=0: Max retry attempts to establish a connection with the backend. Use -1 for infinite retries
-retry-interval=2000: Interval (in millisecond) between retry-attempts.
-tags="": Append tags for all registered services
-ttl=0: TTL for services (default is no expiry)
-ttl-refresh=0: Frequency with which service TTLs are refreshed
This means that -c echo "-ip=$EXTERNAL_IP consul://$CONSUL_HOST" is literally getting passed into /bin/registrator as the parameter.
Am I doing something wrong, or is it a limitation of this use case that /bin/sh -c exec_cmd p1_cmd does not get evaluated first before being passed into the ENTRYPOINT? If the latter is true, can you also explain the usefulness of this use case?
Yes. This is exactly how it is supposed to work.
The value of CMD is just passed as a parameter to the ENTRYPOINT.
The main difference between CMD and ENTRYPOINT is that CMD just provides default arguments to the ENTRYPOINT program, and those are usually overridden by the arguments given to docker run. On the other hand, you have to redefine ENTRYPOINT explicitly with the --entrypoint option if you want a different command to be executed.
Also, notice that there is a difference in how things are executed depending on the way ENTRYPOINT and CMD are defined in the Dockerfile. When they are defined as an array in the form ["arg1", "arg2"], the array is passed through as-is, the first element of ENTRYPOINT being the program to execute. When they are defined as a plain string like arg1 arg2, the string is prepended with /bin/sh -c; note that Docker does not execute /bin/sh and hand the result of the evaluation back, that whole /bin/sh -c ... string itself is passed to the ENTRYPOINT program.
So in your case you have to use the array (exec) form of CMD, with valid JSON quoting:
CMD ["-ip", "$EXTERNAL_IP", "consul://$CONSUL_HOST"]
Note, however, that the exec form involves no shell, so $EXTERNAL_IP and $CONSUL_HOST will not be expanded at run time.
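If you do need the variables expanded at run time, here is a sketch of a small wrapper (file name and contents are illustrative, not from the original answer):
FROM gliderlabs/registrator:v7
COPY start.sh /start.sh
ENTRYPOINT ["/bin/sh", "/start.sh"]
where start.sh does the expansion in a shell and then replaces itself with registrator:
#!/bin/sh
exec /bin/registrator -ip="$EXTERNAL_IP" "consul://$CONSUL_HOST"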

How do I get etcd values into my systemd service on CoreOS?

I have two services A and B.
A sets a value in etcd as it's being started, say the public IP address which it gets from an environment file:
ExecStartPost=/usr/bin/etcdctl set /A_ADDR $COREOS_PUBLIC_IPV4
B needs that value as it starts up, as well as its own IP address. So something like this would be nice:
ExecStart=/usr/bin/docker run -e MY_ADDR=$COREOS_PUBLIC_IPV4 -e A_ADDR=$ETCD_A_ADDR mikedewar/B
but that's obviously not possible as etcd variables don't present as systemd environment variables like that. Instead I can do some sort of /usr/bin/bash -c 'run stuff' in my ExecStart but it's awkward especially as I need systemd to expand $COREOS_PUBLIC_IPV4 and my new bash shell to expand $(etcdctl get /A_ADDR). It also reeks of code smell and makes me think I'm missing something important.
Can someone tell me the "right" way of getting values from etcd into my ExecStart declaration?
-- update
So I'm up and running with
ExecStart=/usr/bin/bash -c 'source /etc/environment && /usr/bin/docker run -e A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) -e MY_ADDR=$COREOS_PUBLIC_IPV4 mikedewar/B'
but it's pretty ugly. I still can't believe I'm not missing something...
I was struggling with the same thing until recently. After reading much of the CoreOS and systemd documentation, here is a slightly 'cleaner' version of what you're doing:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c '/usr/bin/docker run -e A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) -e MY_ADDR=$COREOS_PUBLIC_IPV4 mikedewar/B'
Additionally, I have adopted a pattern where my services depend on a systemd 'oneshot' service that computes some value and writes it into /etc/environment. This keeps the more complex shell scripting out of the main service unit and places it in its own oneshot unit.
Here are the docs for EnvironmentFile: http://www.freedesktop.org/software/systemd/man/systemd.exec.html#EnvironmentFile=
Finally, a quick gotcha: you must use a shell invocation if you use any variable in your ExecStart/ExecStop commands. systemd does no shell invocation when executing the command you provide, so variables will not be expanded.
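A sketch of that oneshot pattern (unit names and the key are illustrative, not from the original answer):
[Unit]
Description=Write etcd values into /etc/environment
Before=my-app.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'echo A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) >> /etc/environment'
The main service then declares Requires= and After= on this unit and picks the value up through EnvironmentFile=/etc/environment.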
I am currently using the following workaround:
I've created a script which extracts data from a particular etcd directory:
#!/bin/bash
# NB: the <<< here-string is a bashism, so the shebang must be bash, not sh
for entry in $(etcdctl ls /my_dir --recursive); do
    echo ' -e '$(grep -o '[^/]*$' <<< "${entry}")=$(etcdctl get "${entry}")
done
Its output looks like the following:
-e DATABASE_URL=postgres://m:m@mi.cf.us-.rds.amazonaws.com:5432/m
-e WEB_CONCURRENCY=4
So then, eventually, I can use it in my init file like this:
/bin/sh -c '/usr/bin/docker run -p 9000:9000 $(/home/core/envs.sh) me/myapp -D FOREGROUND'
It's not the most elegant way, and I'd love to know how to improve it, but placing that for loop as a one-liner requires lots of escaping.
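One possible improvement (a sketch, not from the original answer): move the docker invocation into the script itself and build the arguments in a bash array, so no command substitution needs to survive word-splitting:
#!/bin/bash
# build the -e NAME=value arguments in an array, then exec docker directly
args=()
for entry in $(etcdctl ls /my_dir --recursive); do
    args+=(-e "${entry##*/}=$(etcdctl get "$entry")")
done
exec /usr/bin/docker run "${args[@]}" -p 9000:9000 me/myapp -D FOREGROUND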
Can your container read directly from etcd as it starts, over the docker0 bridge IP, instead of having the values passed in? That would also let you do more complex logic on the response, parse JSON if that is what you store as the etcd value, etc.

Running a delayed command with 'sudo'

I want to run a Bash script as root, but delayed. How can I achieve this?
sudo "sleep 3600; command", or
sudo (sleep 3600; command)
does not work.
You can use at:
sudo at next hour
And then you have to enter the command and close the file with Ctrl+D. Alternatively you can specify commands to be run in a file:
sudo at -f commands next hour
If you really must avoid using cron:
sudo sh -c "(sleep 3600; command)&"
The simplest answer is:
sudo bash -c 'sleep 3600; command' &
Because the semicolon is a shell operator, sleep 3600; command is a small shell script rather than a single executable invocation, and hence it needs a shell to run it; sudo itself does not invoke one. bash -c tells sudo to run bash and pass it the script to execute as a string.
Of course this will “hang” until command has actually finished running, or be killed if you exit the surrounding shell. I haven’t found a simple way to use nohup to prevent that here, and at that point, you’re basically reimplementing the at command anyway. I have found the above solution useful in many simple cases though. ;)
For anything more complex… of course you can always make a real shell script file, with a shebang (#! …) at the start, and run that. But I assume the whole point is that you wanted to avoid this for something that simple.
You could theoretically pass a string as a file using Bash's process-substitution syntax <( … ), but sudo expects it to be a real file, and marked as executable too, so that won't work.
Use:
sleep 3600; sudo <command>
Anyway, I would consider using cron in your case…
