Bash selection menu based on command output

How can I make a selection menu based on other command output?
For example running following command:
kubectl config get-contexts
I am getting following output:
CURRENT   NAME    CLUSTER    AUTHINFO   NAMESPACE
*         Name1   Cluster1   Auth1
          Name2   Cluster2   Auth3
          Name3   Cluster3   Auth3
What I'd like to achieve is to print the NAME and CLUSTER columns as menu options, and if one is selected, pass it as a variable to another command:
kubectl config use-context $NameX
But have no idea how to do this with command output.

In order to use the output of one command as input to another, you can use backticks or, more readably, $() (though some very old Bourne shells only support the backtick form).
For example, to get some more information on the ls program:
which ls
file /bin/ls
This can be shortened to:
file $(which ls)
or:
file `which ls`
In your example, it will be something like:
kubectl config use-context $(kubectl config get-contexts ...), or:
kubectl config use-context `kubectl config get-contexts ...`
Obviously you'll need to specify exactly which information you need from the config in order for this to work.
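Command substitution picks the value non-interactively; for an actual menu, bash's built-in select can present the context names as numbered options. A minimal sketch, assuming kubectl config get-contexts -o name (which prints one context name per line, without the header or the "*" marker):

```shell
#!/bin/bash
# Read the context names into an array, one element per line.
mapfile -t contexts < <(kubectl config get-contexts -o name)

# `select` prints a numbered menu (on stderr) and reads the chosen number.
# $ctx stays empty on invalid input, so loop until a valid pick is made.
PS3='Select context: '
select ctx in "${contexts[@]}"; do
    [ -n "$ctx" ] && break
done

kubectl config use-context "$ctx"
```

Typing the number of an entry switches to that context; select re-prompts on anything that is not a valid menu number.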

Related

Setting environment in makefile

I am trying to export an env variable in make to use it on the following lines. I am doing as suggested here: Setting environment variables in a makefile.
eks-apps:
	export KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig)
	kubectl get all
But it's not using that kubeconfig in the kubectl command. What am I missing?
Every line in a recipe is executed in a new shell, so you have to run all your commands in a single shell:
eks-apps:
	( \
	export KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig); \
	kubectl get all \
	)
From the answer you linked to:
Please note: this implies that setting shell variables and invoking shell commands such as cd that set a context local to each process will not affect the following lines in the recipe. If you want to use cd to affect the next statement, put both statements in a single recipe line.
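As a side note, GNU make 3.82 and later also offers the .ONESHELL special target, which runs every recipe in a single shell invocation, so exported variables persist across recipe lines. A sketch under that assumption:

```make
# .ONESHELL makes each recipe run in one shell, so the export on the
# first line is visible to the kubectl call on the second.
.ONESHELL:
eks-apps:
	export KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig)
	kubectl get all
```

Note that .ONESHELL applies to every recipe in the makefile, not just this one, which may change the behavior of other targets.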

Scriptable args in docker-compose file

In my docker-compose file (docker-compose.yaml), I would like to set an argument based on a small shell script like this:
services:
  backend:
    [...]
    build:
      [...]
      args:
        PKG_NAME: $(dpkg -l <my_package>)
In the Dockerfile, I read this argument like this:
ARG PKG_NAME
First of all: I know that this approach is OS-dependent (it requires dpkg), but for starters I would be happy to make it run on Debian. Also, it's fine if the value is an empty string.
However, docker-compose up throws this error:
ERROR: Invalid interpolation format for "build" option in service "backend": "$(dpkg -l <my_package>)"
Is there a way to dynamically specify an argument in the docker-compose file through a shell script (or another way)?
You can only use variable substitution as described in the Compose file documentation.
You are trying to inject a shell construct, and this is not supported.
The documentation has several examples of how to pass vars to a compose file. In your case, you could:
export the var in your environment:
export MY_PACKAGE=$(dpkg -l <my_package>)
then use that var in your compose file with a default:
args:
  PKG_NAME: "${MY_PACKAGE:-some_default_pkg}"
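The ${MY_PACKAGE:-some_default_pkg} form follows the POSIX shell parameter-expansion rule that Compose also implements: the default after :- is substituted when the variable is unset or empty. A quick illustration:

```shell
#!/bin/sh
# Unset variable: the default after :- is used.
unset MY_PACKAGE
echo "${MY_PACKAGE:-some_default_pkg}"

# Set variable: its value wins over the default.
MY_PACKAGE="1.2.3"
echo "${MY_PACKAGE:-some_default_pkg}"
```

Because of the default, the build still works on machines where MY_PACKAGE was never exported.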

Use Kubectl Apply command using k8s.io package

I need to add kubectl apply functionality to my application.
I've looked through kubectl go-client, it has no provisions for the apply command.
Can I create an instance of kubectl in my go-application?
If not 1, can I use the k8s.io/kubernetes package to emulate a kubectl apply command?
Questions and clarifications if needed, will be given.
Can I create an instance of kubectl in my application?
You can wrap the kubectl command in your application and start it in a new child process, like you would do via a shell script. See the exec package in Go for more information: https://golang.org/pkg/os/exec/
This works pretty well for us, and kubectl usually has the -o parameter that lets you control the output format, so you get machine-readable text back.
There are already some open-source projects that use this approach:
https://github.com/box/kube-applier
https://github.com/autoapply/autoapply
If not 1, can I use the k8s.io/kubernetes package to emulate a kubectl apply command?
Have you found https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/apply/apply.go while searching the kubectl source code?
Take a look at the run-function:
func (o *ApplyOptions) Run() error {
	...
	r := o.Builder.
		Unstructured().
		Schema(o.Validator).
		ContinueOnError().
		NamespaceParam(o.Namespace).DefaultNamespace().
		FilenameParam(o.EnforceNamespace, &o.DeleteOptions.FilenameOptions).
		LabelSelectorParam(o.Selector).
		IncludeUninitialized(o.ShouldIncludeUninitialized).
		Flatten().
		Do()
	...
	err = r.Visit(func(info *resource.Info, err error) error {
	...
It is not very readable, I guess, but this is what kubectl apply does.
One possible way forward would be to debug the code and see what it does in more detail.
This can be done by creating and adding a plugin to kubectl.
You can write a plugin in any programming language or script that allows you to write command-line commands.
There is no plugin installation or pre-loading required. Plugin executables receive the inherited environment from the kubectl binary. A plugin determines which command path it wishes to implement based on its name. For example, a plugin wanting to provide a new command kubectl foo, would simply be named kubectl-foo, and live somewhere in the user’s PATH.
Example plugin can look as follows:
#!/bin/bash

# optional argument handling
if [[ "$1" == "version" ]]
then
    echo "1.0.0"
    exit 0
fi

# optional argument handling
if [[ "$1" == "config" ]]
then
    echo $KUBECONFIG
    exit 0
fi

echo "I am a plugin named kubectl-foo"
After that, you just make it executable with chmod +x ./kubectl-foo and move it into your PATH with mv ./kubectl-foo /usr/local/bin.
Now you should be able to call it with kubectl foo:
$ kubectl foo
I am a plugin named kubectl-foo
All args and flags are passed as-is to the executable:
$ kubectl foo version
1.0.0
You can read more about the kubectl plugins inside Kubernetes Extend kubectl with plugins documentation.
kubectl is meant to be used from the command line only. You can wrap it inside your code using some form of exec, such as os.system in Python (the Go equivalent is the os/exec package), but this approach is dirty.
You should instead use a client library in your code to carry out the operations; the list of client libraries is here.

Shortcut for typing kubectl --all-namespaces every time

Is there any alias we can make for --all-namespaces (kubectl doesn't recognise a shortened version of the flag), or any other kind of shortcut to minimize typing the whole option every time?
New in kubectl v1.14, you can use -A instead of --all-namespaces, eg:
kubectl get -A pod
(rejoice)
Is there any alias we can make for all-namespace
Based on this excellent SO answer you can create alias that inserts arguments between prefix and suffix like so:
alias kca='f(){ kubectl "$@" --all-namespaces -o wide; unset -f f; }; f'
and then use it regularly like so:
kca get nodes
kca get pods
kca get svc,sts,deploy,pvc,pv
etc.
Note: -o wide is added as well, to get more detailed info about resources that are not normally namespaced, like nodes and pv.
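The function-behind-an-alias trick is generic: the typed arguments land in "$@" between the fixed prefix and suffix. A small self-contained illustration, with echo standing in for kubectl so the argument order is visible (note that non-interactive bash needs alias expansion switched on):

```shell
#!/bin/bash
# Aliases are off by default in non-interactive bash.
shopt -s expand_aliases

# Same shape as kca: the function receives the typed arguments in "$@"
# and places them between a prefix and a suffix, then removes itself.
alias demo='f(){ echo prefix "$@" suffix; unset -f f; }; f'

demo one two   # prints: prefix one two suffix
```

The unset -f f at the end keeps the helper function from lingering in the shell after each use.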

Dockerfile: how to set env variable from file contents

I want to set an environment variable in my Dockerfile.
I've got a .env file that looks like this:
FOO=bar.
Inside my Dockerfile, I've got a command that parses the contents of that file and assigns it to FOO.
RUN 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'
The problem I'm running into is that the script above doesn't return what I need it to. In fact, it doesn't return anything.
When I run docker-compose up --build, it fails with this error.
The command '/bin/sh -c 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'' returned a non-zero code: 127
I know that the command /bin/sh -c 'echo "$(cut -d'=' -f2 <<< $(grep FOO .env))"' will generate the correct output, but I can't figure out how to assign that output to an environment variable.
Any suggestions on what I'm doing wrong?
Environment Variables
If you want to set a number of environment variables in your docker image (to be used within the containers), you can simply use the env_file configuration option in your docker-compose.yml file. With that option, all the entries in the .env file will be set as environment variables in the image and hence in the containers.
More info about env_file
Build ARGS
If your requirement is to use some variables only within your Dockerfile, then you can specify them as below:
ARG FOO
ARG FOO1
ARG FOO2
etc...
And you have to specify these arguments under the build key in your docker-compose.yml
build:
  context: .
  args:
    FOO: BAR
    FOO1: BAR1
    FOO2: BAR2
More info about args
Accessing .env values within the docker-compose.yml file
If you are looking to pass some values from a .env file into your docker-compose.yml, you can simply put the .env file in the same location as docker-compose.yml and set the configuration values as below:
ports:
  - "${HOST_PORT}:80"
So, as an example, you can set the host port for the service by setting it in your .env file.
Please check this
First, the error you're seeing. I suspect there's a "not found" error message not included in the question. If that's the case, then the first issue is that you tried to run an executable that is the full string since you enclosed it in quotes. Rather than trying to run the shell command "export", it is trying to find a binary that is the full string with spaces in it and all. So to work past that error, you'd need to unquote your RUN string:
RUN export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")
However, that won't solve your underlying problem. The result of a RUN command is that docker saves the changes to the filesystem as a new layer to the image. Only changes to the filesystem are saved. The shell command you are running changes the shell state, but then that shell exits, the run command returns, and the state of that shell, including environment variables, is gone.
To solve this for your application, there are two options I can think of:
Option A: inject build args into your build for all the .env values, and write a script that calls build with the proper --build-arg flag for each variable. Inside the Dockerfile, you'll have two lines for each variable:
ARG FOO1="default value1"
ARG FOO2="default value2"
ENV FOO1=${FOO1} \
    FOO2=${FOO2}
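The wrapper script mentioned in Option A could be sketched roughly like this (the image name myimage, the sample .env contents, and the plain KEY=value layout are assumptions; the final echo shows the command the script would run, swap it for the real call):

```shell
#!/bin/sh
# Demo .env file (hypothetical contents):
printf 'FOO=bar\nPKG_NAME=mypkg\n' > .env

# Turn each KEY=value line of .env into a --build-arg flag,
# skipping blank lines and comments.
args=""
while IFS= read -r line; do
    case "$line" in ''|'#'*) continue ;; esac
    args="$args --build-arg $line"
done < .env

# Print the docker command this wrapper would run.
# Word splitting on $args is intentional (values must not contain spaces).
echo docker build$args -t myimage .
```

This keeps the Dockerfile generic (just ARG/ENV pairs) while the script adapts to whatever the .env file contains.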
Option B: inject your .env file and process it with an entrypoint in your container. This entrypoint could run your export command before kicking off the actual application. You'll also need to do this for each RUN command during the build where you need these variables. One shorthand I use for pulling in the file contents to environment variables is:
set -a && . .env && set +a
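Option B's entrypoint could look roughly like this (the path /app/.env and the script itself are hypothetical):

```shell
#!/bin/sh
# entrypoint.sh (sketch): export everything from .env, then hand off
# to the container's real command so it inherits those variables.
set -a        # every variable assigned from here on is auto-exported
. /app/.env   # source the file; plain KEY=value lines become env vars
set +a
exec "$@"     # replace this shell with the actual application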
