I want to use a direct translation from k8s secret keys to Spring Boot properties.
To that end I have a Helm chart (but the same applies with plain k8s):
apiVersion: v1
data:
  app.entry[0].name: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
  name: my-secret
type: Opaque
With that, my intention is that this behaves as if I had set this in the Spring properties file:
app.entry[0].name: "someName"
But when I do this I get an error:
Invalid value: "[app.entry[0].name]": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
So [0] seems not to be allowed in a secret key name.
Any idea how I can inject an array entry into spring directly from a k8s secret name?
Shooting around wildly, I tried the following, which all failed:
app.entry[0].name: ... -- k8s rejects '['
app.entry__0.name: ... -- k8s ok, but Spring does not recognize this as array (I think)
"app.entry[0].name": ... -- k8s rejects '['
'app.entry[0].name': ... -- k8s rejects '['
You should be able to use environment variables as described in spring-boot-env.
The app.entry[0].name property will then be set via the APP_ENTRY_0_NAME environment variable, which you can set in your deployment.
Using a secret like:
apiVersion: v1
data:
  value: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
  name: my-secret
type: Opaque
and then use it with
env:
  - name: APP_ENTRY_0_NAME
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: value
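For reference, Spring Boot's relaxed binding derives the environment variable name from the property name by upper-casing it, replacing dots with underscores, dropping dashes, and turning an index like [0] into _0_. A couple of illustrative mappings (the second property is just an invented example, not from the question):

app.entry[0].name       ->  APP_ENTRY_0_NAME
app.other-list[1].value ->  APP_OTHERLIST_1_VALUE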
What you can do is pass an application.properties file, defined within a k8s Secret, to your Spring Boot application.
For instance, define your k8s Opaque Secret this way:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-secret
stringData:
  # stringData lets us provide the value without base64-encoding it
  application.properties: "app.entry[0].name={{ .Values.firstEntry.name }}"
Of course you will have more properties to set in your application.properties file, so just see this as an example with the type of entry you need, as stated in your question. I'm not a Spring Boot specialist, but an idea could be (if possible) to tell the Spring Boot application to look for more than a single application.properties file, so that you only need to pass some of the configuration parameters from the outside instead of all of them.
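One way to do that, sketched below, is Spring Boot's spring.config.additional-location property, which relaxed binding also accepts as the SPRING_CONFIG_ADDITIONAL_LOCATION environment variable; the mount path here is just an assumed example and should match wherever you mount the secret (see the deployment below):

env:
  - name: SPRING_CONFIG_ADDITIONAL_LOCATION    # relaxed-binding form of spring.config.additional-location
    value: /etc/config/application.properties  # assumed mount path of the secret's application.properties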
When using Kubernetes secrets as files in pods, as specified in the official Kubernetes documentation, each key in the secret data map becomes a filename under the volume mountPath (see point 4).
Hence, you can simply mount the application.properties file defined in your k8s secret into the container in which your Spring Boot application is running. Assuming that you use a deployment template in your Helm chart, here is a sample deployment.yaml template that would do the job (please focus on the parts where the volumes and volumeMounts are specified):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "sample.fullname" . }}
  labels:
    {{- include "sample.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "sample.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "sample.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "sample.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: my-awesome-volume
              mountPath: /path/where/springboot/app/expects/application.properties
              subPath: application.properties
      volumes:
        - name: my-awesome-volume
          secret:
            secretName: my-secret
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
As desired, this gives you a solution that does not require any change to your application code. I hope this gets you going in the intended direction.
You can do this by saving a JSON file as a secret.
Step 1:
Create the JSON file that needs to be stored as a secret, for example secret-data.json:
{
  "entry": [
    {
      "name1": "data1",
      "key1": "dataX"
    },
    {
      "name2": "data2",
      "key2": "dataY"
    }
  ]
}
Step 2: Create a secret from the file
kubectl create secret generic data-1 --from-file=secret-data.json
Step 3: Attach secret to pod
env:
  - name: APP_DATA
    valueFrom:
      secretKeyRef:
        name: data-1
        key: secret-data.json
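If the consumer is a Spring Boot application, another option (an assumption about your setup, not part of the original steps) is to expose the same key as SPRING_APPLICATION_JSON, which Spring Boot parses as a block of inline JSON properties:

env:
  - name: SPRING_APPLICATION_JSON   # Spring Boot treats this value as inline JSON configuration
    valueFrom:
      secretKeyRef:
        name: data-1
        key: secret-data.json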
You can verify this by exec'ing into the container and checking the environment.
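For example, something like this should print the injected value (the pod name is a placeholder):

kubectl exec -it <pod-name> -- printenv APP_DATA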
Could anybody please help me understand this command? What would be the output of this key-value pair:
JAVA_OPTS_APPEND: {{ printf "-Djgroups.dns.query=%s-headless.%s.svc.%s" (include "common.names.fullname" .) (include "common.names.namespace" .) .Values.clusterDomain | quote }}
where
common.names.fullname: ""
common.names.namespace: ""
clusterDomain: cluster.local
This piece of code is from here:
https://github.com/bitnami/charts/blob/main/bitnami/keycloak/templates/configmap-env-vars.yaml
I am fairly new to Kubernetes and I am trying to understand what would be the value of JAVA_OPTS_APPEND.
Thanks in advance.
Nafee
You can render Helm templates locally with the helm template command; this will render your values so you can see the output of this expression.
If you don't have enough permissions on your Kubernetes cluster, you can spin up a local minikube or kind instance and then render the template:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm template bitnami/keycloak --namespace mhajeb
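If you only care about this one ConfigMap, you can narrow the output with the --show-only flag (the template path is taken from the chart linked above):

helm template bitnami/keycloak --namespace mhajeb --show-only templates/configmap-env-vars.yaml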
In the rendered manifest you will find the following ConfigMap:
# Source: keycloak/templates/configmap-env-vars.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-keycloak-env-vars
  namespace: "mhajeb"
  labels:
    app.kubernetes.io/name: keycloak
    helm.sh/chart: keycloak-13.0.4
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: keycloak
data:
  KEYCLOAK_ADMIN: "user"
  KEYCLOAK_HTTP_PORT: "8080"
  KEYCLOAK_PROXY: "passthrough"
  KEYCLOAK_ENABLE_STATISTICS: "false"
  KEYCLOAK_DATABASE_HOST: "release-name-postgresql"
  KEYCLOAK_DATABASE_PORT: "5432"
  KEYCLOAK_DATABASE_NAME: "bitnami_keycloak"
  KEYCLOAK_DATABASE_USER: "bn_keycloak"
  KEYCLOAK_PRODUCTION: "false"
  KEYCLOAK_ENABLE_HTTPS: "false"
  KEYCLOAK_CACHE_TYPE: "ispn"
  KEYCLOAK_CACHE_STACK: "kubernetes"
  JAVA_OPTS_APPEND: "-Djgroups.dns.query=release-name-keycloak-headless.mhajeb.svc.cluster.local"
  KEYCLOAK_LOG_OUTPUT: "default"
  KC_LOG_LEVEL: "INFO"
Now note that JAVA_OPTS_APPEND: {{ printf "-Djgroups.dns.query=%s-headless.%s.svc.%s" (include "common.names.fullname" .) (include "common.names.namespace" .) .Values.clusterDomain | quote }} rendered to:
JAVA_OPTS_APPEND: "-Djgroups.dns.query=release-name-keycloak-headless.mhajeb.svc.cluster.local"
This was done with the printf function, which filled in common.names.fullname and common.names.namespace from the template helpers defined in the "parent" chart:
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "common.names.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
and
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts.
*/}}
{{- define "common.names.namespace" -}}
{{- default .Release.Namespace .Values.namespaceOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
TL;DR
It takes the Chart and Release names as defaults. If you want to override them, have a look at the documentation (https://github.com/bitnami/charts/tree/main/bitnami/keycloak#common-parameters) or the templates :) and just set:
fullnameOverride  - String to fully override common.names.fullname
namespaceOverride - String to fully override common.names.namespace
Other examples
helm template my-food-release bitnami/keycloak --namespace mhajeb
Result:
# Source: keycloak/templates/configmap-env-vars.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-food-release-keycloak-env-vars
  namespace: "mhajeb"
  labels:
    app.kubernetes.io/name: keycloak
    helm.sh/chart: keycloak-13.0.4
    app.kubernetes.io/instance: my-food-release
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: keycloak
data:
  KEYCLOAK_DATABASE_HOST: "my-food-release-postgresql"
  ...
  JAVA_OPTS_APPEND: "-Djgroups.dns.query=my-food-release-keycloak-headless.mhajeb.svc.cluster.local"
  ...
helm template my-food-release bitnami/keycloak --namespace mhajeb --set fullnameOverride=daNewName --set namespaceOverride=daNewNamespaceOverride
Result:
# Source: keycloak/templates/configmap-env-vars.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: daNewName-env-vars
  namespace: "daNewNamespaceOverride"
  labels:
    app.kubernetes.io/name: keycloak
    helm.sh/chart: keycloak-13.0.4
    app.kubernetes.io/instance: my-food-release
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: keycloak
data:
  ...
  JAVA_OPTS_APPEND: "-Djgroups.dns.query=daNewName-headless.daNewNamespaceOverride.svc.cluster.local"
  ...
As a stepping stone to a more complicated problem, I have been following this example, step by step: https://blog.gopheracademy.com/advent-2017/kubernetes-ready-service/. The next step I have been trying to learn is using a Helm chart to deploy the Golang service instead of a Makefile. I have been trying to convert this deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: {{ .ServiceName }}
    spec:
      containers:
        - name: {{ .ServiceName }}
          image: docker.io/<my Dockerhub name>/{{ .ServiceName }}:{{ .Release }}
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8000
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8000
          resources:
            limits:
              cpu: 10m
              memory: 30Mi
            requests:
              cpu: 10m
              memory: 30Mi
      terminationGracePeriodSeconds: 30
to a Helm deployment.yaml that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
{{- include "mychart.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "mychart.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 8000
readinessProbe:
httpGet:
path: /readyz
port: 8000
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
However, when I run the Helm chart, the probes (which work perfectly fine when not using Helm) fail with errors; specifically, when describing the pod, I get the error "Warning Unhealthy 16s (x3 over 24s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503".
I have obviously set up the probes wrong on the helm chart. How do I convert these probes from one system to another?
Solution:
The solution I found was that the probes in the Helm chart needed initial time delays. When I replaced
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
with
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15
the probes stopped failing. Because the probes were running before the container had fully started, they were automatically concluding that it had failed.
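As a side note, a common way to keep this tuning out of the template is to render the whole probe block from values; this is only a sketch and assumes you add matching livenessProbe/readinessProbe keys to your values.yaml:

# values.yaml (assumed keys, not from the chart above)
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15

# templates/deployment.yaml, inside the container spec
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}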
I have a simple API that I've been deploying to K8s as a NodePort service via Helm. I'm working on adding an ingress to the Helm chart, but I'm having some difficulty getting the variables correct.
values.yaml
ingress:
  metadata:
    name: {}
    labels: {}
    annotations:
      kubernetes.io/ingress.class: nginx
  spec:
    rules:
      - host: "testapi.local.dev"
        http:
          paths:
            - pathType: Prefix
              path: "/"
              backend:
                service:
                  name: {}
                  port:
                    number: 80
templates/ingress.yaml, showing only the spec section where I'm having issues.
spec:
  rules:
    {{- range .Values.ingress.spec.rules }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path | quote }}
            pathType: {{ .pathType | quote }}
            backend:
              service:
                name: {{ include "testapi.service.name" . }}
                port:
                  {{- range $key, $value := (include "testapi.deployment.ports" . | fromYaml) }}
                  number: {{ .port }}
                  {{- end}}
          {{- end}}
    {{- end}}
When running helm template it just leaves these values blank, and I'm not sure where the syntax is wrong. Removing the {{- range .paths }} loop and the following .path and .pathType references and replacing them with literal values corrects the issue:
spec:
  rules:
    - host: "testapi.local.dev"
      http:
        paths:
Comments revealed I should be using {{- range .http.paths }}.
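In other words, the inner loop has to descend through each rule's http key, roughly like this (backend omitted, same values file as above):

spec:
  rules:
    {{- range .Values.ingress.spec.rules }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .http.paths }}
          - path: {{ .path | quote }}
            pathType: {{ .pathType | quote }}
          {{- end }}
    {{- end }}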
Is it possible to send an HTTP REST request to another K8s Pod that belongs to the same Service in Kubernetes?
E.g.:
Service name = UserService , 2 Pods (replica = 2)
Pod 1 --> Pod 2 //using pod ip not load balanced hostname
Pod 2 --> Pod 1
The connection is a REST GET to 1.2.3.4:7079/user/1.
The values for host and port are taken from kubectl get ep.
Both of the pod IPs work successfully outside of the pods, but when I kubectl exec -it into a pod and make the request via curl, it returns a 404 not found for the endpoint.
Q: What I would like to know is whether it is possible to make a request to another K8s Pod that is in the same Service?
Q: Why am I able to get a successful ping to 1.2.3.4, but not hit the REST API?
Below are my config files.
#values.yml
replicaCount: 1
image:
  repository: "docker.hosted/app"
  tag: "0.1.0"
  pullPolicy: Always
  pullSecret: "a_secret"
service:
  name: http
  type: NodePort
  externalPort: 7079
  internalPort: 7079
ingress:
  enabled: false
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "app.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_PORT
              value: "{{ .Values.service.internalPort }}"
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /actuator/alive
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/ready
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
service.yml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}
(Screenshots omitted: one request executed from the master node, one executed from inside a pod of the same microservice.)
Is it possible to send an HTTP REST request to another K8s Pod that belongs to the same Service in Kubernetes?
For sure, yes; that's actually exactly why every Pod in the cluster has a cluster-wide routable address. You can programmatically ask Kubernetes for the list of a Pod's "peers" by requesting the Endpoints object named the same as the Service and then subtracting out your own Pod's IP address. It seems like you kind of knew that from kubectl get ep, but since you asked the question, I thought I would be explicit that your experience wasn't an accident.
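For example, something along these lines lists the peer addresses behind a Service (the service name is a placeholder):

kubectl get endpoints <service-name> -o jsonpath='{.subsets[*].addresses[*].ip}'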
Q: Why am I able to get a successful ping to 1.2.3.4, but not hit the REST API?
We can't help you troubleshoot your app without some app logs, but the fact that you got a 404 and not a "connection refused" or a 504 means your connectivity works fine; it's just the app that is broken.
Yes, as Mathew answered, you can indeed communicate between Pods in a Service with Kubernetes; the problem I was having was that Istio was blocking the requests between them.
Solution:
Disabling Istio sidecar injection solved this problem for me. I then re-enabled it afterwards and load balancing continued to work; hopefully this can help someone in the future.
see answer here: Envoy Pod to Pod communication within a Service in K8
I want to put the following CRD into a Helm chart, but it contains a Go raw template. How can I make Helm not interpret the {{ and }} inside rawTemplate? Thanks for your response.
https://github.com/kubeflow/katib/blob/master/examples/random-example.yaml
apiVersion: "kubeflow.org/v1alpha1"
kind: StudyJob
metadata:
namespace: katib
labels:
controller-tools.k8s.io: "1.0"
name: random-example
spec:
studyName: random-example
owner: crd
optimizationtype: maximize
objectivevaluename: Validation-accuracy
optimizationgoal: 0.99
requestcount: 4
metricsnames:
- accuracy
workerSpec:
goTemplate:
rawTemplate: |-
apiVersion: batch/v1
kind: Job
metadata:
name: {{.WorkerId}}
namespace: katib
spec:
template:
spec:
containers:
- name: {{.WorkerId}}
image: katib/mxnet-mnist-example
command:
- "python"
- "/mxnet/example/image-classification/train_mnist.py"
- "--batch-size=64"
{{- with .HyperParameters}}
{{- range .}}
- "{{.Name}}={{.Value}}"
{{- end}}
{{- end}}
restartPolicy: Never
In the Go template language, the expression
{{ "{{" }}
will expand to two open curly braces, for cases when you need to use Go template syntax to generate documents in Go template syntax; for example
{{ "{{" }}- if .Values.foo }}
- name: FOO
value: {{ "{{" }} .Values.foo }}
{{ "{{" }}- end }}
(In a Kubernetes Helm context where you're using this syntax to generate YAML, be extra careful with how whitespace is handled; consider using helm template to dump out what gets generated.)
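Applied to the rawTemplate above, the parts that Katib (rather than Helm) should expand would look roughly like this fragment:

      rawTemplate: |-
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: {{ "{{" }}.WorkerId}}
          namespace: katib
        ...
                command:
                - "python"
                - "/mxnet/example/image-classification/train_mnist.py"
                - "--batch-size=64"
                {{ "{{" }}- with .HyperParameters}}
                {{ "{{" }}- range .}}
                - "{{ "{{" }}.Name}}={{ "{{" }}.Value}}"
                {{ "{{" }}- end}}
                {{ "{{" }}- end}}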