Using aws-cli in cronjob - bash

I'm trying to run aws sts get-caller-identity in a CronJob, but it fails with /bin/sh: 1: aws: not found:
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - aws sts get-caller-identity

As already mentioned in the comments, it seems that the AWS CLI is not installed in the image you are using for this cronjob. You need to provide more information!
If you are the owner of the image, just install the AWS CLI in its Dockerfile. If you are not the owner, create your own image that extends the image you are currently using and install the AWS CLI there.
For example, if you are using an Alpine-based image, just create a Dockerfile:
FROM <THE_ORIGINAL_IMAGE>:<TAG>
RUN apk add --no-cache python3 py3-pip && \
    pip3 install --upgrade pip && \
    pip3 install awscli
Then build the image and push it to Docker Hub, for example.
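For example (replace the image name and tag with your own):
docker build -t <YOUR_DOCKERHUB_USER>/<IMAGE_NAME>:<TAG> .
docker push <YOUR_DOCKERHUB_USER>/<IMAGE_NAME>:<TAG>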
Now you can use this new image in your CronJob resource.
But the next issue is that your CronJob Pod needs permission to call the AWS STS service. There are multiple ways to get this done; the best is to use IRSA (IAM Roles for Service Accounts). Just check this blog article: https://aws.amazon.com/de/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/
If you still need help, just provide more details.
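For illustration, a minimal IRSA sketch, assuming an IAM role that allows the required STS/S3 calls already exists and trusts the cluster's OIDC provider (the role ARN and names below are placeholders):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-cli-sa
  annotations:
    # placeholder ARN of the IAM role to assume via IRSA
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-cronjob-role
The CronJob's pod template would then set serviceAccountName: aws-cli-sa, and the AWS CLI picks up the injected web identity credentials automatically.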

Step 1:
You need to add the AWS keys to a Kubernetes Secret:
kubectl create secret generic aws-cred --from-literal=AWS_SECRET_ACCESS_KEY=xxxxxxxxx --from-literal=AWS_ACCESS_KEY_ID=xxxxx
Step 2: copy this into cronjob.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: aws-cli-sync
  labels:
    app: aws-cli-sync
spec:
  schedule: "0 17 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: aws-cli-sync
            image: mikesir87/aws-cli
            env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws-cred
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws-cred
                  key: AWS_SECRET_ACCESS_KEY
            args:
            - /bin/sh
            - -c
            - date;aws s3 sync s3://xxx-backup-prod s3://elk-xxx-backup
          restartPolicy: Never
Step 3: apply the job in the namespace where you created the secret:
kubectl apply -f ./cronjob.yaml
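To check the schedule and the result of a run, something like this should work (the Job name is generated by the CronJob, so look it up first):
kubectl get cronjob aws-cli-sync
kubectl get jobs
kubectl logs job/<generated-job-name>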

Related

Kibana with plugins running on Kubernetes

I'm trying to install Kibana with a plugin via the initContainers functionality and it doesn't seem to create the pod with the plugin in it.
The pod gets created and Kibana works perfectly, but the plugin is not installed using the yaml below.
initContainers Documentation
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    spec:
      initContainers:
      - name: install-plugins
        command:
        - sh
        - -c
        - |
          bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
Got Kibana working with plugins by using a custom container image
Dockerfile:
FROM docker.elastic.co/kibana/kibana:7.11.2
RUN /usr/share/kibana/bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
RUN /usr/share/kibana/bin/kibana --optimize
YAML:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  image: my-container-path/kibana-with-plugins:7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
Building your own image would certainly work, though it can be avoided in this case.
Your initContainer is pretty much what you were looking for, with one exception: you need to add an emptyDir volume.
Mount it into both your initContainer and the regular kibana container, so the plugins installed during init are shared.
Although I'm not familiar with the Kibana CR, here's how I would do this with the official elastic.co images:
spec:
  template:
    spec:
      containers:
      - name: kibana
        image: official-kibana:x.y.z
        securityContext:
          runAsUser: 1000
        volumeMounts:
        - mountPath: /usr/share/kibana/plugins
          name: plugins
      initContainers:
      - command:
        - /bin/bash
        - -c
        - |
          set -xe
          if ! ./bin/kibana-plugin list | grep prometheus-exporter >/dev/null; then
            if ! ./bin/kibana-plugin install "https://github.com/pjhampton/kibana-prometheus-exporter/releases/download/7.12.1/kibanaPrometheusExporter-7.12.1.zip"; then
              echo WARNING: failed to install Kibana exporter plugin
            fi
          fi
        name: init
        image: official-kibana:x.y.z
        securityContext:
          runAsUser: 1000
        volumeMounts:
        - mountPath: /usr/share/kibana/plugins
          name: plugins
      volumes:
      - emptyDir: {}
        name: plugins
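Since the question uses the ECK Kibana resource, the same pattern should translate to spec.podTemplate. A sketch, assuming the operator merges these entries into the generated pod by container name (the plugin URL is the one from the question):
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    spec:
      containers:
      - name: kibana
        volumeMounts:
        - name: plugins
          mountPath: /usr/share/kibana/plugins
      initContainers:
      - name: install-plugins
        command:
        - sh
        - -c
        - bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
        volumeMounts:
        - name: plugins
          mountPath: /usr/share/kibana/plugins
      volumes:
      - name: plugins
        emptyDir: {}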

AWS user Data custom ami support in amazon eks managed nodegroups

I'm not able to create a node group using a YAML file; inside the YAML file it runs bootstrap.sh to bootstrap the node group. Here is the file:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ginny
  region: us-west-2
  version: '1.17'
managedNodeGroups:
  - name: ginny-mng-custom-ami
    instanceType: t3.small
    desiredCapacity: 2
    labels: {role: worker}
    ami: ami-0030109261aa0205b
    ssh:
      publicKeyName: bastion
    preBootstrapCommands:
      - kubelet --version > /etc/eks/test-preBootstrapCommands
    overrideBootstrapCommand: |
      #!/bin/bash
      set -ex
      /etc/eks/bootstrap.sh ginny --kubelet-extra-args "--node-labels=alpha.eksctl.io/cluster-name=ginny,alpha.eksctl.io/nodegroup-name=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup-image=ami-0030109261aa0205b"
[root@ip-1-2-3-4 eks-node-group]# eksctl create nodegroup --config-file maanged-nodegroup.yaml
Error: couldn't create node group filter from command line options: loading config file "maanged-nodegroup.yaml": error converting YAML to JSON: yaml: line 15: mapping values are not allowed in this context
Try it this way; it should work (the preBootstrapCommands and overrideBootstrapCommand blocks must be indented under the node group entry):
    preBootstrapCommands:
      - kubelet --version > /etc/eks/test-preBootstrapCommands
    overrideBootstrapCommand: |
      #!/bin/bash
      set -ex
      /etc/eks/bootstrap.sh ginny --kubelet-extra-args "--node-labels=alpha.eksctl.io/cluster-name=ginny,alpha.eksctl.io/nodegroup-name=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup-image=ami-0030109261aa0205b"

Kubernetes bash to POD after creation

I tried to create the Pod using the command kubectl run --generator=run-pod/v1 mypod --image=myimage:1 -it bash, and after the pod is created it drops me into a bash prompt inside the container.
Is there any way to achieve this using a YAML file? I tried the YAML below, but it does not go to bash directly after the Pod is created; I had to run kubectl exec -it POD_NAME bash manually. I want to avoid the exec step and have my YAML take me into the container once the Pod is created. Is there any way to achieve this?
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespcae
  labels:
    app: mypod
spec:
  containers:
  - args:
    - bash
    name: mypod
    image: myimage:1
    stdin: true
    stdinOnce: true
    tty: true
This is a community wiki answer. Feel free to expand it.
As already mentioned by David, it is not possible to go to bash directly after a Pod is created by only using the YAML syntax. You have to use a proper kubectl command like kubectl exec in order to Get a Shell to a Running Container.
The key is to have a pod that will not exit.
Here is an example for you.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespcae
  labels:
    app: mypod
spec:
  containers:
  - command:
    - bash
    - -c
    - yes > /dev/null
    name: mypod
    image: myimage:1
The command yes will continue to output the string yes until it is killed.
The part > /dev/null will make sure that you won't have a ton of garbage logs.
Then you can access your pod with these commands.
kubectl apply -f my-pod.yaml
kubectl exec -it mypod bash
Remember to remove the pod after you finish all the operations.
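For example, once you are done:
kubectl delete -f my-pod.yaml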

How to inject secret from Google Secret Manager into Kubernetes Pod as environment variable with Spring Boot?

For the life of Bryan, how do I do this?
Terraform is used to create an SQL Server instance in GCP.
Root password and user passwords are randomly generated, then put into the Google Secret Manager.
The DB's IP is exposed via private DNS zone.
How can I now get the username and password to access the DB into my K8s cluster? Running a Spring Boot app here.
This was one option I thought of:
In my deployment I add an initContainer:
- name: secrets
  image: gcr.io/google.com/cloudsdktool/cloud-sdk
  args:
    - echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret=\"$NAME_OF_SECRET\")" >> super_secret.env
Okay, what now? How do I get it into my application container from here?
There are also options like bitnami/sealed-secrets, which I don't like since the setup is using Terraform already and saving the secrets in GCP. When using sealed-secrets I could skip using the secrets manager. Same with Vault IMO.
On top of the other answers and suggestions in the comments, I would like to suggest two tools that you might find interesting.
First one is secrets-init:
secrets-init is a minimalistic init system designed to run as PID 1 inside container environments and it's integrated with multiple secrets manager services, e.g. Google Secret Manager.
Second one is kube-secrets-init:
The kube-secrets-init is a Kubernetes mutating admission webhook, that mutates any K8s Pod that is using specially prefixed environment variables, directly or from Kubernetes as Secret or ConfigMap.
It also supports integration with Google Secret Manager:
User can put Google secret name (prefixed with gcp:secretmanager:) as environment variable value. The secrets-init will resolve any environment value, using specified name, to referenced secret value.
Here's a good article about how it works.
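For illustration, a minimal sketch of how a pod could reference a secret this way, assuming the kube-secrets-init webhook is installed; the project and secret names are placeholders, and the exact reference format should be verified against the secrets-init documentation:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    env:
    - name: DB_PASSWORD
      # assumed reference format; resolved to the actual secret value at startup
      value: gcp:secretmanager:projects/my-gcp-project/secrets/db-password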
How do I get it into my application container from here?
You could use a volume to store the secret and mount the same volume in both init container and main container to share the secret with the main container from the init container.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: secrets
    image: gcr.io/google.com/cloudsdktool/cloud-sdk
    # run through a shell so the command substitution and redirect work,
    # and write into the shared volume so the main container can read the file
    command:
    - /bin/sh
    - -c
    args:
    - echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret=\"$NAME_OF_SECRET\")" >> /data/super_secret.env
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
You can use spring-cloud-gcp-starter-secretmanager to load secrets from the Spring application itself.
Documentation - https://cloud.spring.io/spring-cloud-gcp/reference/html/#secret-manager
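For illustration, a sketch of how the application configuration might reference such a secret, assuming the spring-cloud-gcp-starter-secretmanager dependency is on the classpath and a secret named db-password exists in the project; verify the exact property prefix against the Spring Cloud GCP version you use:
# application.yml (hypothetical example)
spring:
  datasource:
    username: my-db-user              # placeholder
    password: ${sm://db-password}     # resolved from Google Secret Manager at startup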
Use an emptyDir volume with medium: Memory to guarantee that the secret will not be persisted to disk.
...
volumes:
- name: scratch
  emptyDir:
    medium: Memory
    sizeLimit: "1Gi"
...
If one has control over the image, it's possible to change the entry point and use berglas.
Dockerfile:
# Use whatever base image you need
FROM adoptopenjdk/openjdk8:jdk8u242-b08-ubuntu
# Install berglas, see https://github.com/GoogleCloudPlatform/berglas
RUN mkdir -p /usr/local/bin/
ADD https://storage.googleapis.com/berglas/main/linux_amd64/berglas /usr/local/bin/berglas
RUN chmod +x /usr/local/bin/berglas
ENTRYPOINT ["/usr/local/bin/berglas", "exec", "--"]
Now we build the container and test it:
docker build -t image-with-berglas-and-your-app .
docker run \
-v /host/path/to/credentials_dir:/root/credentials \
--env GOOGLE_APPLICATION_CREDENTIALS=/root/credentials/your-service-account-that-can-access-the-secret.json \
--env SECRET_TO_RESOLVE=sm://your-google-project/your-secret \
-ti image-with-berglas-and-your-app env
This should print the environment variables with the sm:// substituted by the actual secret value.
In K8s we run it with Workload Identity, so the K8s service account on behalf of which the pod is scheduled needs to be bound to a Google service account that has the right to access the secret.
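For illustration, on GKE that binding could look roughly like this (project, namespace and service account names are placeholders):
# Annotate the K8s service account with the Google service account it should impersonate
kubectl annotate serviceaccount your-app-ksa --namespace your-namespace \
  iam.gke.io/gcp-service-account=your-app-gsa@your-google-project.iam.gserviceaccount.com
# Allow that K8s service account to use the Google service account via Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  your-app-gsa@your-google-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:your-google-project.svc.id.goog[your-namespace/your-app-ksa]"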
In the end your pod description would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: your-app
spec:
  containers:
  - name: your-app
    image: image-with-berglas-and-your-app
    command: [start-sql-server]
    env:
    - name: AXIOMA_PASSWORD
      value: sm://your-google-project/your-secret

Correctly override "settings.xml" in Jenkinsfile Maven build on kubernetes?

We are setting up a Jenkins-based CI pipeline on our Kubernetes cluster (Rancher, if that matters), and up to now we have used the official maven:3-jdk-11-slim image for experiments. Unfortunately it does not provide any built-in way of overriding the default settings.xml to use a mirror, which we need - preferably just by setting an environment variable. I am not very familiar with Kubernetes, so I may be missing something simple.
Is there a simple way to add a file to the image? Should I use another image with this functionality built in?
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
  - name: kaniko
    .... etc
Summary: you can mount your settings.xml file into the pod at some specific path and use it with the command mvn -s /my/path/to/settings.xml.
Crou's ConfigMap approach is one way to do it. However, since the settings.xml file usually contains credentials, I would treat it as a Secret.
You can create a Secret in Kubernetes with command:
$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml
The pod definition will be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: mvn-settings-vol
      mountPath: /my/path/to
  volumes:
  - name: mvn-settings-vol
    secret:
      secretName: mvn-settings
Advanced/optional: if you practice "Infrastructure as Code", you might want to save the manifest of that Secret for recovery. This can be achieved with the following command after the Secret has been created:
$ kubectl get secrets mvn-settings -o yaml
You can keep the secrets.yml file, but do not check it into any VCS/GitHub repo, since this version of secrets.yml contains the unencrypted data.
Some k8s administrators may have kubeseal installed. In that case, I'd recommend using kubeseal to get an encrypted version of secrets.yml.
$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml --dry-run -o json | kubeseal --controller-name=controller --controller-namespace=k8s-sealed-secrets --format=yaml >secrets.yml
# Actually create secrets
$ kubectl apply -f secrets.yml
The controller-name and controller-namespace should be obtained from k8s administrators.
This secrets.yml contains encrypted data of your settings.xml and can be safely checked into VCS/Github repo.
If you want to override a file inside a pod, you can use a ConfigMap to store the changed file and mount it in place of the previous one.
You can create the ConfigMap from a file using
kubectl create configmap settings-xml --from-file=settings.xml
Your pod definition might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: config-settings
      mountPath: /usr/share/maven/ref/settings.xml
      # mount only the settings.xml key as a single file instead of a directory
      subPath: settings.xml
  volumes:
  - name: config-settings
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: settings-xml
...
This worked for me:
Install the Config File Provider Plugin.
Go to Manage Jenkins > Config File Management > Add a new config and insert your settings.xml there.
In your Jenkinsfile, put your rtMavenRun inside a configFileProvider block, using the fileId of the Jenkins config file you created before:
stage('Build Maven') {
  steps {
    configFileProvider([configFile(fileId: 'MavenArtifactorySettingId', variable: 'MAVEN_SETTINGS_XML')]) {
      retry(count: 3) {
        rtMavenRun(
          tool: "Maven 3.6.2", // id specified in Global Tool Configuration
          pom: 'pom.xml',
          goals: '-U -s $MAVEN_SETTINGS_XML clean install',
        )
      }
    }
  }
}
this is exactly the pipeline that I used if you want to see more: https://gist.github.com/robertobatts/42da9069e13b61a238f51c36754de97b
If you versioned the settings.xml of the project with the code, it makes sense to build with mvn install -s settings.xml using an sh step; that is what I did at work. If settings.xml is not versioned with the project, it indeed makes sense to mount the file with Crou's solution.
To answer your question "Should I use another image with this functionality built in?": I would recommend avoiding custom images as much as possible, because you will end up having to maintain them.
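For illustration, the sh-step variant mentioned above could look roughly like this inside the kubernetes agent from the question (assuming settings.xml is versioned at the repository root and the maven container is the one defined in the pod yaml):
stage('Build Maven') {
  steps {
    container('maven') {
      // use the settings.xml checked out with the project sources
      sh 'mvn -U -s settings.xml clean install'
    }
  }
}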
