kubectl kustomize add annotation to multiple overlay yaml files - yaml

I am trying to merge the annotations in one file into multiple resources, to keep things DRY and so that pods can get information from a vault.
Currently I can add the following code to "mylogger" by using kind: Deployment (which I presume only lets the info from this file reach the mylogger resource). After deployment the mylogger pod seems to be working and can get the vault information.
For context, the project follows the base/overlay structure and uses the kubectl and kustomize commands.
For the files...
vault-values.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mylogger
spec:
  template:
    metadata:
      annotations:
        inject-vault-value1: "path-to-vault-value1"
        inject-vault-value2: "path-to-vault-value2"
The mylogger.yml resource file is
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mylogger
  labels:
    app: mylogger
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mylogger
  template:
    metadata:
      labels:
        app: mylogger
    spec:
      initContainers:
      .... and rest of file here
Doing kubectl kustomize .../overlay/dev > manifest.yml, I can see the desired result in my manifest.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mylogger
  labels:
    app: mylogger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mylogger
  template:
    metadata:
      annotations:
        inject-vault-value1: "path-to-vault-value1"
        inject-vault-value2: "path-to-vault-value2"
      labels:
        app: mylogger
    spec:
      initContainers:
      ... rest of file
The part under spec > template > metadata > annotations > inject-vault-value1 is there.
Is it possible to use the vault-values.yml file and insert its contents into, for example, the myjob resource? Basically the part from spec down to its annotations.
myjob.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myjob
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: myjob
          env:
            - name: random__env__variable
              value: "false"
      ...rest of file here
Note: I want to use the file in the overlay folder as it has the correct vault information for that particular environment. I have nothing in the base folder concerning the vault information or the vault YAML file.
I thought "patchesStrategicMerge" would come in handy, but for the kustomize command it seems only doable for base/overlay contents.

How to best accomplish your goal depends on how your project is structured, but one option is to use a Kustomize patch, like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# This points to where you're loading your `mylogger` and `myjob` deployments
resources:
  - ...

patches:
  - target:
      kind: Deployment
    patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: this-is-ignored
      spec:
        template:
          metadata:
            annotations:
              inject-vault-value1: "path-to-vault-value1"
              inject-vault-value2: "path-to-vault-value2"
This will apply your two custom annotations to all deployments generated by this kustomization.yaml file. If you need to limit it to specific deployments, you can use a pattern expression or label selector to match the appropriate objects.
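For example, a sketch of a more selective target (in recent kustomize versions the target's name field is treated as an anchored regular expression, and labelSelector is an alternative; the vault-inject=true label here is only a hypothetical example):
patches:
  - target:
      kind: Deployment
      name: "mylogger|myjob"                # regex match on resource names
      # labelSelector: "vault-inject=true"  # alternative: match by a label
    patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: this-is-ignored
      spec:
        template:
          metadata:
            annotations:
              inject-vault-value1: "path-to-vault-value1"
              inject-vault-value2: "path-to-vault-value2"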

Related

How to read spring boot configuration file in Kubernetes deployment

I'm new to Kubernetes and having a hard time getting the deployment to read application.properties. I have attached our ConfigMap as a mounted volume under the /config path.
This is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: 34343434.dkr.ecr.asia-2.amazonaws.com/myapp:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: application-properties
              mountPath: /config
      volumes:
        - name: application-properties
          configMap:
            name: application-properties
I have created the ConfigMap with the kubectl command from a file located on my local computer.
kubectl create configmap application-properties --from-file=/users/me/application.properties
Now the issue is that the application.properties file I am setting via the ConfigMap is not getting picked up. Can you help me with this?
Based on the discussion, the issue was the ConfigMap: instead of being stored as a properties file, the content was rendered as a single string.
kubectl get configmap application-properties -o yaml
shows the contents, but all in one-line format, separated by \n.
Converting the file to YAML (application.yml) did the trick.
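For reference, a sketch of the shape that works, assuming the file was renamed to application.yml locally, so the whole file lands under a single key as a multi-line block:
kubectl create configmap application-properties --from-file=application.yml=/users/me/application.yml
kubectl get configmap application-properties -o yaml then shows something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-properties
  namespace: default
data:
  application.yml: |
    # the file contents appear here as an indented block;
    # the mount then exposes it in the pod as /config/application.yml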

Fleet Server In Elastic Error : elastic-agent-cluster-leader is forbidden

We are setting up a fleet server in Kubernetes.
It has been given a CA and states it's running, but we cannot shell into it, and the logs show nothing but the following:
E0817 09:12:10.074969 927 leaderelection.go:330] error retrieving
resource lock default/elastic-agent-cluster-leader:
leases.coordination.k8s.io "elastic-agent-cluster-leader" is
forbidden: User "system:serviceaccount:default:elastic-agent" cannot
get resource "leases" in API group "coordination.k8s.io" in the
namespace "default"
I can find very little information on this ever happening let alone a resolution. Any information pointing to a possible resolution would be massively helpful!
You need to make sure that you have applied the ServiceAccount, ClusterRoles and ClusterRoleBindings from the setup files.
An example of these can be found in the quickstart documentation.
https://www.elastic.co/guide/en/cloud-on-k8s/2.2/k8s-elastic-agent-fleet-quickstart.html
Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: default
Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
      - watch
      - list
  - apiGroups: ["coordination.k8s.io"]
    resources:
      - leases
    verbs:
      - get
      - create
      - update
Cluster Role Binding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
  - kind: ServiceAccount
    name: elastic-agent
    namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
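Once these are applied, you can verify the permission is in place before restarting the agent, for example:
kubectl auth can-i get leases.coordination.k8s.io \
  --as=system:serviceaccount:default:elastic-agent -n default
This should print "yes" once the ClusterRoleBinding is active.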

Spring Boot - read container environment variables in properties file

I use:
Spring Boot
Microservices (containerized)
Docker
Kubernetes
My case is as follows:
I have to generate a link:
https://dev-myapp.com OR https://qa-myapp.com
depending on the environment in which my service is running (DEV, QA)
I have one Spring profile, but under this profile my app can run in Kubernetes in two types of environment: DEV or QA. I want to generate the proper link, reading it from my properties file:
@Value("${email.body}")
private String emailBody;
application.yaml:
email:
  body: Click on the following URL: ${ENVIRONMENT_URL:}/edge/invitation?code={0}&email={1}
DevOps (Kubernetes):
Manifest in the workloads folder (DEV branch; the same for the QA branch, but with https://qa-myapp.com):
apiVersion: v1
kind: Service
...
...
apiVersion: apps/v1
kind: Deployment
...
...
      containers:
        env:
          - name: ENVIRONMENT_URL
            value: https://dev-myapp.com
So is it possible to read that value from the Kubernetes container in my Spring properties file? I want to get the email.body property depending on the container my service is running on.
Yes, this is possible. I have corrected the syntax of the YAML:
apiVersion: v1
kind: Service
...
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          command: ["/bin/sh", "-c", "env | grep ENVIRONMENT_URL"]
          env:
            - name: ENVIRONMENT_URL
              value: https://myapp.com # indentation changed
          ports:
            - containerPort: 80
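As a quick check (assuming the Deployment above is applied as-is), the container's command prints the variable, so it should appear in the logs; Spring then resolves ${ENVIRONMENT_URL:} from the process environment at startup:
kubectl logs deploy/nginx-deployment
The expected output is ENVIRONMENT_URL=https://myapp.com.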

Can't get pod and service running with generated deployment and service descriptors

Following Ryan Baxter's Spring On Kubernetes workshop, I run into a problem I can't resolve. At the "Deploying To Kubernetes" step, after generating the deployment.yaml and service.yaml files, I run
kubectl apply -f ./k8s
and I get validation errors:
error validating "k8s/deployment.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
error validating "k8s/service.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
After running
kubectl apply -f ./k8s --validate=false
I get
error: unable to recognize "k8s/deployment.yaml": no matches for extensions/, Kind=Deployment
service"my-app" created
And here is the yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
    spec:
      containers:
        - image: docker.io/my-id/my-app
          name: my-app
          resources: {}
status: {}
Based on Harsh's suggestion, I change the apiVersion to apps/v1 and run the kubectl apply command again.
deployment "my-app" created
service "my-app" configured
Based on what is shown in the watch, I run
kubectl port-forward svc/my-app 8080:80
where svc/my-app is shown in the watch. And it yields
error: invalid resource name svc/my-app: [may not contain '/']
To clean up, I run
kubectl delete -f ./k8s
And it yields
service "my-app" deleted
Error from server (NotFound): error when stopping "k8s/deployment.yaml": the server could not find the requested resource
I don't know whether those problems are caused by my operations errors or some bugs.
Save this and deploy the file with: kubectl apply -f filename.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-demo-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: k8s-demo-app
    spec:
      containers:
        - image: harbor.workshop.demo.ryanjbaxter.com/user1/k8s-demo-app
          name: k8s-demo-app
          resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  ports:
    - name: 80-8080
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: k8s-demo-app
  type: ClusterIP
status:
  loadBalancer: {}
With help from Harsh and Chanseok, I upgraded the gcloud components, of which kubectl is one.
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
I rerun those commands to deploy the server to the local cluster. It works!
I can't expose the service in the following step, though: an EXTERNAL-IP never shows up after the service.yaml modification. That is a separate problem.

ConfigMap data (yml format) - Kubernetes

I have an application.yml (Spring) file with almost 70 fields, and I want to move those fields to a ConfigMap.
In the process of setting up the ConfigMap, I have realized that all 70 fields have to be flattened, for example: webservice.endpoint.transferfund
It's going to be a painful task to flatten all 70 fields. Is there any alternative?
Please suggest.
The config below is working:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmapname
  namespace: default
data:
  webservice.endpoint.transferfund: http://www.customer-service.app/api/tf
  webservice.endpoint.getbalance: http://www.customer-service.app/api/balance
  webservice.endpoint.customerinfo: http://www.customer-service.app/api/customerinfo
The config below is not working; I tried it in YAML format.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmapname
  namespace: default
data:
  application.yaml: |-
    webservice:
      endpoint:
        transferfund: http://www.customer-service.app/api/tf
        getbalance: http://www.customer-service.app/api/balance
        customerinfo: http://www.customer-service.app/api/customerinfo
In src/main/resources/application.yml I have the below fields to access the ConfigMap keys:
webservice:
  endpoint:
    transferfund: ${webservice.endpoint.transferfund}
    getbalance: ${webservice.endpoint.getbalance}
    customerinfo: ${webservice.endpoint.customerinfo}
Updated:
ConfigMap Description:
C:\Users\deskktop>kubectl describe configmap configmapname
Name:         configmapname
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
application.yaml:
----
webservice:
  endpoint:
    transferfund: http://www.customer-service.app/api/tf
    getbalance: http://www.customer-service.app/api/balance
    customerinfo: http://www.customer-service.app/api/customerinfo

Events:  <none>
Deployment script (the configMapRef name is the ConfigMap name shown above):
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: configmap-sample
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: configmap-sample
    spec:
      containers:
        - name: configmap-sample
          image: <<image>>
          ports:
            - name: http-api
              containerPort: 9000
          envFrom:
            - configMapRef:
                name: configmapname
          resources:
            limits:
              memory: 1Gi
            requests:
              memory: 768Mi
          env:
            - name: JVM_OPTS
              value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -Xms768M"
A ConfigMap is a dictionary of configuration settings. It consists of key-value pairs of strings. Kubernetes then adds those values to your containers.
In your case you have to make them flat, because otherwise Kubernetes will not understand them.
You can read in the documentation about Creating ConfigMap that:
kubectl create configmap <map-name> <data-source>
where <map-name> is the name you want to assign to the ConfigMap and <data-source> is the directory, file, or literal value to draw the data from.
The data source corresponds to a key-value pair in the ConfigMap, where
key = the file name or the key you provided on the command line, and
value = the file contents or the literal value you provided on the command line.
You can use kubectl describe or kubectl get to retrieve information about a ConfigMap.
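For instance, with the ConfigMap from the question:
kubectl get configmap configmapname -o yaml
kubectl describe configmap configmapname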
EDIT
You could create a ConfigMap from a file with defined key.
Define the key to use when creating a ConfigMap from a file
Syntax might look like this:
kubectl create configmap my_configmap --from-file=<my-key-name>=<path-to-file>
And the ConfigMap might look like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2019-07-03T18:54:22Z
  name: my_configmap
  namespace: default
  resourceVersion: "530"
  selfLink: /api/v1/namespaces/default/configmaps/my_configmap
  uid: 05f8da22-d671-11e5-8cd0-68f728db1985
data:
  <my-key-name>: |
    key=value
    key=value
    key=value
    key=value
Also I was able to find Create Kubernetes ConfigMaps from configuration files.
Functionality
The projector can:
Take raw files and stuff them into a ConfigMap
Glob files in your config repo, and stuff ALL of them in your configmap
Extract fields from your structured data (yaml/json)
Create new structured outputs from a subset of a yaml/json source by pulling out some fields and dropping others
Translate back and forth between JSON and YAML (convert a YAML source to a JSON output, etc)
Support for extracting complex fields like objects+arrays from sources, and not just scalars!
You need to mount the ConfigMap as a volume. Otherwise the content would live in environment variables. The example I post here is from https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
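If the goal is for the Spring Boot app to read the mounted file directly, here is a minimal sketch (assuming a Spring Boot 2.x app and the configmapname ConfigMap from the question, created with an application.yaml key): mount the ConfigMap and point spring.config.additional-location at the mount path.
      containers:
        - name: configmap-sample
          image: <<image>>
          env:
            - name: SPRING_CONFIG_ADDITIONAL_LOCATION  # relaxed binding of spring.config.additional-location
              value: /etc/config/
          volumeMounts:
            - name: app-config
              mountPath: /etc/config
      volumes:
        - name: app-config                             # hypothetical volume name
          configMap:
            name: configmapname                        # key application.yaml becomes /etc/config/application.yaml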
You mentioned you're using application.yaml in the context of a Spring project. So if you don't care whether you use .yaml or .properties configuration files, you can just use properties files, because ConfigMap generation supports them. It works with the --from-env-file flag:
kubectl create configmap configmapname --from-env-file application.properties
So in your deployment file you can directly access the keys:
...
env:
  - name: KEYNAME
    valueFrom:
      configMapKeyRef:
        name: configmapname
        key: KeyInPropertiesFile
