Spring Boot app doesn't read values from 2nd file in ConfigMap - spring

We are using implementation("org.springframework.cloud:spring-cloud-starter-kubernetes-client-config:2.1.1").
For the ConfigMap we use 2 files:
application.yaml|yml,
application-parent.yaml|yml.
kind: ConfigMap
metadata:
  name: {{ .Values.application }}
  namespace: {{ .Release.Namespace }}
data:
  application.yml: |
{{ (.Files.Glob "config/application.yaml").AsConfig | indent 2 }}
  application-dev.yml: |
{{ (.Files.Glob "config/application-parent.yaml").AsConfig | indent 2 }}
After a successful deploy and startup, the application reads values from application.yml but ignores the values from application-parent.yml.
k8s console screenshot
What am I doing wrong?
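One detail worth double-checking (an editorial observation, not part of the original post): spring-cloud-kubernetes treats a ConfigMap key named application-<profile>.yml as profile-specific, so the application-dev.yml key in the chart above is only read when the dev profile is active. If the second file is meant to be a "parent" profile, the key would need to match an active profile, along the lines of:

```yaml
data:
  application.yml: |
    ...
  application-parent.yml: |   # read only when the "parent" profile is active
    ...
```

The profile would then have to appear in spring.profiles.active for the values to be picked up.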


How to set an SpringBoot array property as a kubernetes secret?

I want to use the direct translation from k8s secret-keys to SpringBoot properties.
Therefore I have a helm chart (but similar with plain k8s):
apiVersion: v1
data:
  app.entry[0].name: {{ .Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
  name: my-secret
type: Opaque
My intention is that this behaves as if I'd written, in a Spring property file:
app.entry[0].name: "someName"
But when I do this I get an error:
Invalid value: "[app.entry[0].name]": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
So, [0] seems not to be allowed as a key name for the secrets.
Any idea how I can inject an array entry into spring directly from a k8s secret name?
Shooting around wildly I tried these that all failed:
app.entry[0].name: ... -- k8s rejects '['
app.entry__0.name: ... -- k8s ok, but Spring does not recognize this as array (I think)
"app.entry[0].name": ... -- k8s rejects '['
'app.entry[0].name': ... -- k8s rejects '['
You should be able to use environment variables as described in spring-boot-env.
The app.entry[0].name property will be set using the APP_ENTRY_0_NAME environment variable, which can be set in your deployment.
Using secret like:
apiVersion: v1
data:
  value: {{ .Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
  name: my-secret
type: Opaque
and then use it with
env:
  - name: APP_ENTRY_0_NAME
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: value
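The mapping this answer relies on is Spring Boot's relaxed binding of environment variables to property names. A rough plain-Python sketch of the rule, for illustration only (this is an approximation, not Spring's actual implementation):

```python
import re

def to_env_var(prop: str) -> str:
    """Approximate Spring Boot relaxed binding: property name -> env var name."""
    s = re.sub(r"\[(\d+)\]", r"_\1", prop)    # array indices: [0] -> _0
    s = s.replace(".", "_").replace("-", "")  # dots -> underscores, dashes dropped
    return s.upper()

print(to_env_var("app.entry[0].name"))  # APP_ENTRY_0_NAME
```

This is why APP_ENTRY_0_NAME reaches the app.entry[0].name property even though k8s rejects brackets in secret keys.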
What you can do is pass an application.properties file, specified within a k8s Secret, to your Spring Boot application.
For instance, define your k8s Opaque Secret this way:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-secret
stringData:
  application.properties: "app.entry[0].name={{ .Values.firstEntry.name }}"
Of course you will have more properties to set in your application.properties file, so treat this as an example showing the type of entry you need, as stated in your question. I'm not a Spring Boot specialist, but an idea could be (if possible) to tell the Spring Boot application to look for more than a single application.properties file, so that you would only need to pass some of the configuration parameters from the outside instead of all of them.
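On the "more than one application.properties" idea: Spring Boot does support this out of the box via spring.config.additional-location. A sketch of how it could be wired into the container (the mount path below is an assumption, not from the original answer):

```yaml
containers:
  - name: my-app
    args:
      - "--spring.config.additional-location=file:/etc/extra/application.properties"
```

Properties from the additional location are merged on top of the application's bundled configuration, so only the externalized subset needs to live in the Secret.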
When using Kubernetes secrets as files in pods, as specified in the official Kubernetes documentation, each key in the secret's data map becomes a filename under the volume mount path (see point 4).
Hence, you can just mount the application.properties file defined in your k8s secret into the container in which your Spring Boot application is running. Assuming that you use a deployment template in your Helm chart, here is a sample deployment.yaml template that would do the job (note the parts where the volumes and volumeMounts are specified):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "sample.fullname" . }}
  labels:
    {{- include "sample.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "sample.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "sample.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "sample.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: my-awesome-volume
              mountPath: /path/where/springboot/app/expects/application.properties
              subPath: application.properties
      volumes:
        - name: my-awesome-volume
          secret:
            secretName: my-secret
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
As desired, this gives you a solution that requires no changes to your application code. I hope this gets you going in the intended way.
You can save a JSON file as a secret.
Step 1: Create the JSON file which needs to be stored as a secret, for example secret-data.json:
{
  "entry": [
    {
      "name1": "data1",
      "key1": "dataX"
    },
    {
      "name2": "data2",
      "key2": "dataY"
    }
  ]
}
Step 2: Create a secret from the file:
kubectl create secret generic data-1 --from-file=secret-data.json
Step 3: Attach the secret to the pod:
env:
  - name: APP_DATA
    valueFrom:
      secretKeyRef:
        name: data-1
        key: secret-data.json
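Note that with this approach the whole JSON document arrives in the container as a single opaque string; the application has to parse it itself. A minimal Python sketch of that parsing (the env value is simulated here, standing in for what the secretKeyRef injects):

```python
import json
import os

# Simulate the container environment; in the real pod APP_DATA comes from
# the secretKeyRef above.
os.environ["APP_DATA"] = json.dumps(
    {"entry": [{"name1": "data1", "key1": "dataX"},
               {"name2": "data2", "key2": "dataY"}]}
)

# The app must decode the env var itself; Kubernetes delivers it as a string.
config = json.loads(os.environ["APP_DATA"])
print(config["entry"][0]["name1"])  # data1
```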
You can verify this by exec-ing into the container and checking the environment.

yq - is it possible to create a new key/value pair using an existing value whilst returning the whole yaml object?

Currently using yq (mikefarah/yq, version 4.27.2) and having trouble modifying an existing YAML file in place.
What I'm trying to do:
select the labels field
if labels contains a field named monitored_item, create a key named affectedCi and use the value of monitored_item
if labels does not contain a field named monitored_item, create a key named affectedCi with the value {{ $labels.affected_ci }}
return the whole yaml object with changes made inline
jobs-prometheusRule.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: jobs
    prometheus: k8s
    role: alert-rules
  name: jobs-rules
  namespace: ops
spec:
  groups:
    - name: job-detailed
      rules:
        - alert: JobInstanceFailed
          annotations:
            description: Please check the status of the {{ $labels.app_name }} job {{ $labels.job_name }} as it has failed.
            summary: Job has failed
          expr: (process_failed{context="job_failed"} + on(app_name, job_name) group_left(severity)(topk by(app_name, job_name) (1, property{context="job_max_allowed_failures"}))) == 1
          for: 1m
          labels:
            monitored_item: '{{ $labels.app_name }} job {{ $labels.job_name }}'
            severity: '{{ $labels.severity }}'
I've scoured the docs and Stack Overflow with no luck - below is as far as I've been able to get.
yq command:
yq '(.spec.groups[].rules[] | select(.labels | has("monitored_item")) | .labels.affectedCi) |= .labels.monitored_item ' jobs-prometheusRule.yaml
The output returns the whole YAML object, but with affectedCi: null instead of the specified values.
Anyone able to help?
You could use with to update, and // for fall-back:
yq 'with(
.spec.groups[].rules[] | select(.labels).labels;
.affectedCi = .monitored_item // "{{ $labels.affected_ci }}"
)' jobs-prometheusRule.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: jobs
    prometheus: k8s
    role: alert-rules
  name: jobs-rules
  namespace: ops
spec:
  groups:
    - name: job-detailed
      rules:
        - alert: JobInstanceFailed
          annotations:
            description: Please check the status of the {{ $labels.app_name }} job {{ $labels.job_name }} as it has failed.
            summary: Job has failed
          expr: (process_failed{context="job_failed"} + on(app_name, job_name) group_left(severity)(topk by(app_name, job_name) (1, property{context="job_max_allowed_failures"}))) == 1
          for: 1m
          labels:
            monitored_item: '{{ $labels.app_name }} job {{ $labels.job_name }}'
            severity: '{{ $labels.severity }}'
            affectedCi: '{{ $labels.app_name }} job {{ $labels.job_name }}'
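For clarity, here is the same select-or-fallback logic as the yq expression, sketched in plain Python over dicts shaped like the rules above (illustrative only; yq's // picks the right-hand side when the left-hand side is null or false):

```python
def add_affected_ci(rule: dict) -> dict:
    """Mirror of: with(.rules[] | select(.labels).labels;
                      .affectedCi = .monitored_item // fallback)"""
    labels = rule.get("labels")
    if labels is not None:
        # use monitored_item when present, otherwise the fallback string
        labels["affectedCi"] = labels.get("monitored_item") or "{{ $labels.affected_ci }}"
    return rule

rule_with = {"labels": {"monitored_item": "x job y", "severity": "high"}}
rule_without = {"labels": {"severity": "high"}}

print(add_affected_ci(rule_with)["labels"]["affectedCi"])     # x job y
print(add_affected_ci(rule_without)["labels"]["affectedCi"])  # {{ $labels.affected_ci }}
```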

Assigning an environment variable's value to a helm variable

I'm new to Helm and I want to be able to write GitLab project variables to files using ConfigMaps and shared environment variables.
I have a set of environment variables defined as gitlab project variables (the gitlab runner exposes them as environment variables) for each environment (where <ENV> is DEV/TEST/PROD for the sake of brevity):
MYSQL_USER_<ENV> = "user"
MYSQL_PASSWORD_<ENV> = "pass"
In the helm chart every environment has a map of its variables. For example, values.<ENV>.yaml contains:
envVars:
  MYSQL_USER: $MYSQL_USER_<ENV>
  MYSQL_PASSWORD: $MYSQL_PASSWORD_<ENV>
values.yaml contains a Ruby file which will consume those variables:
files:
  config.rb: |
    mysql['username'] = ENV["MYSQL_USER"]
    mysql['password'] = ENV["MYSQL_PASSWORD"]
configmap.env.yaml defines:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config-env
data:
  {{- range $config_key, $config_value := .Values.envVars }}
  {{ $config_key }}: {{ $config_value | quote | nindent 4 }}
  {{- end }}
configmap.files.yaml defines:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config-volume
data:
  {{- range $file_path, $file_content := .Values.files }}
  {{ $file_path }}: |
{{ $file_content | indent 4 -}}
  {{- end }}
Finally, the deployment of the config map (only the config map part is shown here and I'm not using secrets here because this question is long enough as it is):
volumes:
  - name: {{ include "mychart.fullname" . }}-config-volume
    configMap:
      name: {{ include "mychart.fullname" . }}-config-volume
containers:
  - name: {{ .Chart.Name }}
    volumeMounts:
      - name: {{ include "mychart.fullname" . }}-config-volume
        mountPath: /etc/my-config-dir
    envFrom:
      - configMapRef:
          name: {{ include "mychart.fullname" . }}-config-env
So, in one sentence the workflow should be:
MYSQL_USER/PASSWORD_<ENV> saved into MYSQL_USER/PASSWORD, which are then written to /etc/my-config-dir/config.rb
I can't seem to make the environment variables of values.yaml (MYSQL_USER, MYSQL_PASSWORD) get the value of the project variables (MYSQL_USER_<ENV>, MYSQL_PASSWORD_<ENV>).
I use helm 3, but {{ env "MYSQL_USER_<ENV>" }} fails.
I could use string interpolation with the environment variable's name in the Ruby file, but then I would have to know which environment variables should be created for every container.
I'm trying to avoid having multiple --set arguments. Also I'm not sure how envsubst can be used here...
Any help will be greatly appreciated, thanks!
So eventually I used envsubst:
script:
  - VALUES_FILE=values.${ENV}.yaml
  - envsubst < ${VALUES_FILE}.env > ${VALUES_FILE}
  - helm upgrade ... -f ${VALUES_FILE}
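For anyone unfamiliar with envsubst: it substitutes environment-variable references in a text stream before the file ever reaches Helm. Python's os.path.expandvars behaves similarly for simple $VAR references, which makes the effect easy to demonstrate (the variable name below is illustrative):

```python
import os

# Simulate the GitLab runner exposing a project variable
os.environ["MYSQL_USER_DEV"] = "user"

# A values.DEV.yaml.env fragment before substitution
template = "envVars:\n  MYSQL_USER: $MYSQL_USER_DEV"

# Roughly what `envsubst < in > out` produces for this input
rendered = os.path.expandvars(template)
print(rendered)
```

The key point of the accepted workaround: the substitution happens in the CI script, so Helm itself only ever sees literal values.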

Pod to Pod communication within a Service

Is it possible to send an HTTP REST request to another K8s Pod that belongs to the same Service in Kubernetes?
E.g.:
Service name = UserService, 2 Pods (replicas = 2)
Pod 1 --> Pod 2 // using the pod IP, not the load-balanced hostname
Pod 2 --> Pod 1
The connection is a REST GET to 1.2.3.4:7079/user/1
The value for host + port is taken from kubectl get ep.
Both of the Pod IPs work successfully outside of the pods, but when I kubectl exec -it into a pod and make the request via curl, it returns a 404 not found for the endpoint.
Q: Is it possible to make a request to another K8s Pod that is in the same Service?
Q: Why am I able to get a successful ping to 1.2.3.4, but not hit the REST API?
Below are my config files.
#values.yml
replicaCount: 1
image:
  repository: "docker.hosted/app"
  tag: "0.1.0"
  pullPolicy: Always
  pullSecret: "a_secret"
service:
  name: http
  type: NodePort
  externalPort: 7079
  internalPort: 7079
ingress:
  enabled: false
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "app.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_PORT
              value: "{{ .Values.service.internalPort }}"
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /actuator/alive
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/ready
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
service.yml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}
executed from master
executed from inside a pod of the same MicroService
Is it possible to send a http Rest request to another K8 Pod that belongs to the same Service in Kubernetes?
For sure, yes; that's actually exactly why every Pod in the cluster has a cluster-wide routable address. You can programmatically ask Kubernetes for the list of the Pod's "peers" by requesting the Endpoints object named the same as the Service, then subtracting out your own Pod's IP address. It seems like you already knew that from kubectl get ep, but since you asked, I thought I would be explicit that your experience wasn't an accident.
Q Why am I able to get a successful ping 1.2.3.4, but not hit the Rest API?
We can't help you troubleshoot your app without some app logs, but the fact that you got a 404 and not "connection refused" or a 504 means your connectivity worked fine; it's just the app that is broken.
Yes, as Mathew answered, you can indeed communicate between Pods in a Service with Kubernetes; the problem I was having was that Istio was blocking the requests between them.
Solution:
Disabling Istio sidecar injection solved this problem for me. I then re-enabled it afterwards and load balancing continued to work; hopefully this can help someone in the future.
see answer here: Envoy Pod to Pod communication within a Service in K8

How to avoid translate some `{{` of the helm chart?

I want to put the following CRD into a Helm chart, but it contains a raw Go template. How can I make Helm not render the {{ and }} inside rawTemplate? Thanks for your response.
https://github.com/kubeflow/katib/blob/master/examples/random-example.yaml
apiVersion: "kubeflow.org/v1alpha1"
kind: StudyJob
metadata:
  namespace: katib
  labels:
    controller-tools.k8s.io: "1.0"
  name: random-example
spec:
  studyName: random-example
  owner: crd
  optimizationtype: maximize
  objectivevaluename: Validation-accuracy
  optimizationgoal: 0.99
  requestcount: 4
  metricsnames:
    - accuracy
  workerSpec:
    goTemplate:
      rawTemplate: |-
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: {{.WorkerId}}
          namespace: katib
        spec:
          template:
            spec:
              containers:
                - name: {{.WorkerId}}
                  image: katib/mxnet-mnist-example
                  command:
                    - "python"
                    - "/mxnet/example/image-classification/train_mnist.py"
                    - "--batch-size=64"
                    {{- with .HyperParameters}}
                    {{- range .}}
                    - "{{.Name}}={{.Value}}"
                    {{- end}}
                    {{- end}}
              restartPolicy: Never
In the Go template language, the expression
{{ "{{" }}
will expand to two open curly braces, for cases when you need to use Go template syntax to generate documents in Go template syntax; for example
{{ "{{" }}- if .Values.foo }}
- name: FOO
value: {{ "{{" }} .Values.foo }}
{{ "{{" }}- end }}
(In a Kubernetes Helm context where you're using this syntax to generate YAML, be extra careful with how whitespace is handled; consider using helm template to dump out what gets generated.)
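Applied to the StudyJob above, every Go-template action inside rawTemplate would have its opening braces escaped, e.g.:

```yaml
rawTemplate: |-
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: {{ "{{" }}.WorkerId}}
```

so that Helm renders the chart to a literal {{.WorkerId}} for the Katib controller to expand later.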
