Use the kubectl apply command using the k8s.io package - Go

I need to add kubectl apply functionality to my application.
I've looked through the kubectl Go client; it has no provision for the apply command.
Can I create an instance of kubectl in my Go application?
If 1 is not possible, can I use the k8s.io/kubernetes package to emulate a kubectl apply command?
I'll provide clarifications if needed.

Can I create an instance of kubectl in my application?
You can wrap the kubectl command in your application and start it in a new child process, just as you would from a shell script. See the os/exec package in Go for more information: https://golang.org/pkg/os/exec/
This works well for us, and kubectl generally supports the -o flag to control the output format, so you get machine-readable text back.
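For illustration, here is a minimal Go sketch of that approach. It is a sketch under assumptions: the manifest path deployment.yaml is a placeholder of mine, and kubectl must be on the PATH.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Run `kubectl apply` as a child process, just as a shell script would.
	// -o json asks kubectl for machine-readable output.
	cmd := exec.Command("kubectl", "apply", "-f", "deployment.yaml", "-o", "json")

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		// kubectl writes its error messages to stderr.
		fmt.Printf("kubectl apply failed: %v: %s\n", err, stderr.String())
		return
	}

	// With -o json the output can be unmarshalled into your own structs.
	fmt.Println(stdout.String())
}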
There are already some open-source projects that use this approach:
https://github.com/box/kube-applier
https://github.com/autoapply/autoapply
If 1 is not possible, can I use the k8s.io/kubernetes package to emulate a kubectl apply command?
Have you found https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/apply/apply.go while searching the kubectl source code?
Take a look at the Run function:
func (o *ApplyOptions) Run() error {
	...
	r := o.Builder.
		Unstructured().
		Schema(o.Validator).
		ContinueOnError().
		NamespaceParam(o.Namespace).DefaultNamespace().
		FilenameParam(o.EnforceNamespace, &o.DeleteOptions.FilenameOptions).
		LabelSelectorParam(o.Selector).
		IncludeUninitialized(o.ShouldIncludeUninitialized).
		Flatten().
		Do()
	...
	err = r.Visit(func(info *resource.Info, err error) error {
	...
It is not very readable, I guess, but this is what kubectl apply does.
One possible approach would be to step through this code in a debugger and see what it does further on.
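If you mainly need the end result rather than a faithful re-implementation of kubectl's client-side logic, one alternative is server-side apply (available since Kubernetes 1.16), where the API server performs the merge for you. The sketch below is my illustration, not code from kubectl: it uses client-go's dynamic client with types.ApplyPatchType, and the ConfigMap manifest, resource names, and field manager string are all placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig that kubectl uses.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The manifest to apply, as raw YAML (a placeholder resource).
	manifest := []byte(`
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
  namespace: default
data:
  foo: bar
`)

	gvr := schema.GroupVersionResource{Version: "v1", Resource: "configmaps"}

	// types.ApplyPatchType submits the manifest as an apply patch, the same
	// mechanism used by `kubectl apply --server-side`.
	_, err = client.Resource(gvr).Namespace("default").Patch(
		context.TODO(), "demo-config", types.ApplyPatchType, manifest,
		metav1.PatchOptions{FieldManager: "my-application"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("applied")
}

Note this is not a byte-for-byte emulation of the classic client-side three-way merge that the Run function above performs, but for most "make the cluster match this manifest" use cases it behaves the same way.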

This can be done by creating and adding a plugin to kubectl.
You can write a plugin in any programming language or script that allows you to write command-line commands.
There is no plugin installation or pre-loading required. Plugin executables receive the inherited environment from the kubectl binary. A plugin determines which command path it wishes to implement based on its name. For example, a plugin wanting to provide a new command kubectl foo, would simply be named kubectl-foo, and live somewhere in the user’s PATH.
An example plugin can look as follows:
#!/bin/bash

# optional argument handling
if [[ "$1" == "version" ]]
then
    echo "1.0.0"
    exit 0
fi

# optional argument handling
if [[ "$1" == "config" ]]
then
    echo "$KUBECONFIG"
    exit 0
fi

echo "I am a plugin named kubectl-foo"
After that you just make it executable with chmod +x ./kubectl-foo and move it somewhere in your PATH, e.g. mv ./kubectl-foo /usr/local/bin.
Now you should be able to call it with kubectl foo:
$ kubectl foo
I am a plugin named kubectl-foo
All args and flags are passed as-is to the executable:
$ kubectl foo version
1.0.0
You can read more about kubectl plugins in the Kubernetes "Extend kubectl with plugins" documentation.

kubectl is supposed to be used from the command line only,
but you can wrap it inside your code using some form of exec, such as os.system in Python; the equivalent in Go is the os/exec package. But this approach is dirty.
You need to use a client library in your code to carry out the operations.
The list of client libraries is here.
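For Go, that client library is client-go. Here is a minimal sketch of using it instead of shelling out, assuming a recent client-go release (older versions take no context argument on List) and a default kubeconfig location; the namespace is a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig that kubectl reads.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Roughly the equivalent of `kubectl get pods -n default`.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name)
	}
}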

Related

Running a .sh file, why does it pause with a ":" waiting for user input before continuing to run? [duplicate]

I am attempting to utilize the AWS CLI along with a for loop in bash to iteratively purge multiple SQS message queues. The bash script works almost as intended, the problem I am having is with the return value each time the AWS CLI sends a request. When the request is successful, it returns an empty value and opens up an interactive pager in the command line. I then have to manually type q to exit the interactive screen and allow the for loop to continue to the next iteration. This becomes very tedious and time consuming when attempting to purge a large number of queues.
Is there a way to configure AWS CLI to disable this interactive pager from popping up for every return value? Or a way to pipe the return values into a separate file instead of being displayed?
I have played around with configuring different return value types (text, yaml, JSON) but haven't had any luck. Also the --no-pagination parameter doesn't change the behavior.
Here's an example of the bash script I'm trying to run:
for x in 1 2 3; do
    aws sqs purge-queue --queue-url https://sqs.<aws-region>.amazonaws.com/<id>/<env>-$x-<queueName>.fifo;
done
Just running into this issue myself, I was able to disable the behaviour by invoking the AWS CLI as AWS_PAGER="" aws ....
Alternatively you could simply export AWS_PAGER="" at the top of your (bash) script.
Source: https://github.com/aws/aws-cli/pull/4702#issue-344978525
You can also use --no-cli-pager in AWS CLI version 2.
See the "Client-side pager" section here https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html
You can disable the pager either by exporting AWS_PAGER="" or by modifying your AWS CLI config file.
export AWS_PAGER=""
### or update your ~/.aws/config with
[default]
cli_pager=
Alternatively, you can set the default pager to the less program:
export AWS_PAGER="less"
or make the corresponding config change.
Ref: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html#cli-usage-pagination-clientside
You can set the environment variable PAGER to "cat" to force awscli to not start up less:
PAGER=cat aws sqs list-queues
I set up as a shell alias to make my life easier:
# ~/.zshrc
alias aws="PAGER=cat aws"
I am using the AWS CLI v2 via Docker, and passing --env AWS_PAGER="" on the docker run command fixed this issue for me on Windows 10 using Git Bash.
I set it up as an alias as well so things work with jq.
How to set your docker env values:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file
Example alias:
docker run --rm -it -v c:/users/me/.aws:/root/.aws --env AWS_PAGER="" amazon/aws-cli
Inside your ~/.aws/config file, add:
cli_pager=

Run bash then eval command on Docker container startup

I'm setting up a Docker container to be a simple environment for OCaml, since I don't want to manage two OPAM toolchains on two computers (Windows desktop, Linux laptop). My goal is to have the container load into a bash prompt on docker-compose run with OCaml ready to go, and to do this I need to enter bash and then run eval $(opam env) on startup. This is my current Dockerfile:
FROM ocaml/opam:alpine-3.12
# Create folder and assign owner
USER root
RUN mkdir /code
WORKDIR /code
RUN chown opam:opam /code
USER opam
# Install ocaml
RUN opam init
RUN opam switch create 4.11.1
RUN opam install dune
# bash env
CMD [ "/bin/bash" ]
ENTRYPOINT [ "eval", "\$(opam env)" ]
Building and trying to run this gives me the error:
sh: $(opam env): unknown operand
ERROR: 2
I tried making a run.sh script, but that ran into some chmod/permission issues that are probably harder to debug than this. What do I do to open this container in bash and then run the eval $(opam env) command? I don't want to do this with command-line arguments; I'd like to do this all in a Dockerfile or docker-compose file.
The trick is to use opam exec (see note 1 below) as the entry point, e.g.,
ENTRYPOINT ["opam", "exec", "--"]
Then you can either directly run a command from the installed switch, or just start an interactive shell with run -it --rm <cont> sh and you will have the switch fully activated, e.g.,
$ docker run -it --rm binaryanalysisplatform/bap:latest sh
$ which ocaml
/home/opam/.opam/4.09/bin/ocaml
As an aside, since we're talking about docker and OCaml, let me share some more tricks. First of all, you can look into our collection of dockerfiles in BAP for some inspiration. And another important trick that I would like to share is using multistage builds to shrink the size of the image, here's an example Dockerfile. In our case, it gives us a reduction from 7.5 Gb to only 750 Mb, while still preserving the ability to run and build OCaml programs.
And another side note :) You also should run your installation in a single RUN entry, otherwise your layers will eventually diverge and you will get weird missing packages errors. Basically, here's the Dockerfile that you're looking for,
FROM ocaml/opam2:alpine
WORKDIR /home/opam
RUN opam switch 4.11.1 \
 && eval "$(opam env)" \
 && opam remote set-url default https://opam.ocaml.org \
 && opam update \
 && opam install dune \
 && opam clean -acrs
ENTRYPOINT ["opam", "exec", "--"]
1) Or opam config exec, i.e., ENTRYPOINT ["opam", "config", "exec", "--"], for older versions of opam.
There's no way to tell Docker to do something after the main container process has started, or to send input to the main container process.
What you can do is to write a wrapper script that does some initial setup and then runs whatever the main container process is. Since that eval command will just set environment variables, those will carry through to the main shell.
#!/bin/sh
# entrypoint.sh

# Set up the version-manager environment
eval "$(opam env)"

# Run the main container command
exec "$@"
In the Dockerfile, make this script be the ENTRYPOINT:
COPY entrypoint.sh /usr/local/bin
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/bin/bash"]
It also might work to put this setup in a shell dotfile, and run bash -l as the main container command to force it to read dotfiles. However, the $HOME directory isn't usually well-defined in Docker, so you might need to set that variable. If you expand this setup to run a full application, the entrypoint-wrapper approach will work there too, but that sequence probably won't read shell dotfiles at all.
What you show looks like an extremely straightforward installation sequence and I might not change it, but be aware that there are complexities around using version managers in Docker. In particular every Dockerfile RUN command has a new shell environment and the eval command won't "stick". I'd ordinarily suggest picking a specific version of the toolchain and directly installing it, maybe in /usr/local, without a version manager, but that approach will be much more complex than what you have currently. For more mainstream languages you can also usually use e.g. a node:16.13 prebuilt image.
What's with the error you're getting? For ENTRYPOINT and CMD (and also RUN) Docker has two forms. If something is a JSON array then Docker runs the command as a sequence of words, with one word in the array translating to one word in the command, and no additional interpretation or escaping. If it isn't a JSON array – even if it's mostly a JSON array, but has a typo – Docker will interpret it as a shell command and run it with sh -c. Docker applies this rule separately to the ENTRYPOINT and CMD, and then combines them together into a single command.
In particular in your ENTRYPOINT line, RFC 8259 §7 defines the valid character escapes in JSON, so \n is a newline and so on, but \$ is not one of those. That makes the embedded string invalid, and therefore the ENTRYPOINT line isn't valid, and Docker runs it via a shell. The single main container command is then
sh -c '[ "eval", "\$(opam env)" ]' '/bin/bash'
which runs the shell command [, as in if [ "$1" = yes ]; then ...; fi. That command doesn't understand the $(...) string as an argument, which is the error you're getting.
The JSON array already has escaped the things that need to be escaped, so it looks like you could get around this immediate error by removing the erroneous backslash
ENTRYPOINT ["eval", "$(opam env)"] # won't actually work
Docker will run this as-is, combining it with the CMD, and you get
'eval' '$(opam env)' '/bin/bash'
But eval isn't a "real" command – there is no /bin/eval binary – and Docker will pass on the literal string $(opam env) without interpreting it at all. That's also not what you want.
In principle it's possible to do this without writing a script, but you lose a lot of flexibility. For example, consider
# no ENTRYPOINT; shell-form CMD
CMD eval $(opam env) && exec /bin/bash
Again, though, if you replace this CMD with anything else you won't have done the initial setup step.

execute aws command in script with sudo

I am running a bash script with sudo and have tried the approach below, but I am getting the error below from aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't -E preserve the original environment? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried `export` with the profile name and `source` with the path to the `config`.
You can run the command as the original user, like:
sudo -u $SUDO_USER aws cp ...
You could also run the script using source instead of bash. Using source causes the script to run in the same shell as your open terminal window, which keeps the same environment (such as the user). Honestly, though, Philippe's answer is the better, more correct one.

Scriptable args in docker-compose file

In my docker-compose file (docker-compose.yaml), I would like to set an argument based on a small shell script like this:
services:
  backend:
    [...]
    build:
      [...]
      args:
        PKG_NAME: $(dpkg -l <my_package>)
In the Dockerfile, I read this argument like this:
ARG PKG_NAME
First of all: I know that this approach is OS-dependent (it requires dpkg), but for starters I would be happy to make it run on Debian. Also, it's fine if the value is an empty string.
However, docker-compose up throws this error:
ERROR: Invalid interpolation format for "build" option in service "backend": "$(dpkg -l <my_package>)"
Is there a way to dynamically specify an argument in the docker-compose file through a shell script (or another way)?
You can only use variable substitution, as described in the Compose file documentation.
You are trying to inject a shell construct, and this is not supported.
The documentation has several examples of how to pass vars to the compose file. In your case, you could:
export the var in your environment:
export MY_PACKAGE=$(dpkg -l <my_package>)
then use that var in your compose file with a default:
args:
  PKG_NAME: "${MY_PACKAGE:-some_default_pkg}"

Managing multiple AWS accounts on the same computer

I have multiple AWS accounts, and depending on which project directory I'm in I want to use a different one when I type commands into the AWS CLI.
I'm aware that the AWS credentials can be passed in via environmental variables, so I was thinking that one solution would be to make it set AWS_CONFIG_FILE based on which directory it's in, but I'm not sure how to do that.
I'm using Mac OS X; the AWS CLI version is aws-cli/1.0.0 Python/2.7.1 Darwin/11.4.2, and I'm doing all this in order to use AWS in a Rails 4 app.
I recommend using different profiles in the configuration file and just specifying the profile you want with:
aws --profile <your-profile> <command> <subcommand> [parameters]
If you don't want to type the profile for each command, just run:
export AWS_DEFAULT_PROFILE=<your-profile>
before a group of commands.
If you want to somehow automate the process of setting that environment variable when you change to a directory in your terminal, see Dynamic environment variables in Linux?
I think you can create an alias for the aws command which exports the AWS_CONFIG_FILE variable depending on the directory you are in. Something like the following (bash) may work.
First create the following shell script; let's call it match.sh and put it in /home/user/:
#!/bin/bash
export AWS_CONFIG_FILE=`if [[ "$PWD" =~ "MATCH" ]]; then echo "ABC"; else echo "DEF"; fi`
aws "$@"
Now define an alias in your ~/.bashrc script (any arguments you pass are appended after the alias automatically):
alias awsdirbased="/home/user/match.sh"
Now whenever you want to run the "aws" command, run "awsdirbased" instead, and it should work.
