Spring Active profile setup for existing application - spring

Any help is much appreciated. I have a couple of Spring Boot applications running in AKS with the default profile, and I am trying to change the active profile from my deployment.yaml using Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    helm.sh/chart: {{ include "helm-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "helm-chart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "helm-chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "dev"
What I end up with is my pod being put into CrashLoopBackOff state, saying:
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-01-12 12:42:49.054 ERROR 1 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
APPLICATION FAILED TO START
Description:
The Tomcat connector configured to listen on port 8207 failed to start. The port may already be in use or the connector may be misconfigured.
I tried deleting the existing pod and service for the application and doing a fresh deploy, but I still get the same error.
Methods tried (in all methods the Docker image is built, the pod is created, and the application in the pod is set to the dev profile, but it is not able to start, failing with the error above; when I remove the profile setting, everything works perfectly fine, except that the application runs with the default profile):
In the Dockerfile:
option a. CMD ["java","-jar","/app.jar", "--spring.profiles.active=dev"]
option b. CMD ["java","-jar","-Dspring.profiles.active=dev","/app.jar"]
Changed in the deployment.yaml as shown above.
PS: I don't have a properties file in my application under src/main/resources; I only have application-(env).yml files there.
The idea is to set the profile first, and based on the profile the application_(env).yml should be selected.
Output from helm:
Release "app" has been upgraded. Happy Helming!
NAME: email-service
LAST DEPLOYED: Thu Jan 13 16:09:46 2022
NAMESPACE: default
STATUS: deployed
REVISION: 19
TEST SUITE: None
USER-SUPPLIED VALUES:
image:
  repository: 957123096554.dkr.ecr.eu-central-1.amazonaws.com/app
service:
  targetPort: 8207

COMPUTED VALUES:
image:
  pullPolicy: Always
  repository: 957123096554.dkr.ecr.eu-central-1.amazonaws.com/app-service
  tag: latest
replicaCount: 1
service:
  port: 80
  targetPort: 8207
  type: ClusterIP
Any help is appreciated, thanks.

First of all, please check which profile the application is actually using; search for a line like this in the log:
The following profiles are active: test
When I tested with Spring Boot v2.2.2.RELEASE, an application_test.yml file was not used; it had to be renamed to application-test.yml. To better highlight the difference:
application_test.yml # NOT working
application-test.yml # working as expected
What I like even more (though it is Spring Boot specific) is that you can use a single application.yml like this:
foo: 'foo default'
bar: 'bar default'
---
spring:
  profiles:
    - test
bar: 'bar test2'
Why do I prefer this? Because you can then use multiple profiles, e.g. profile1,profile2, and it behaves as "last wins": the values from profile1 are overridden by the values from profile2, in the order they were defined. The same does not work with the application-profileName.yml approach.
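For example, a minimal sketch of that behaviour (the property and profile names here are made up, using the same pre-2.4 spring.profiles document key as above):

bar: 'bar default'
---
spring:
  profiles: profile1
bar: 'bar from profile1'
---
spring:
  profiles: profile2
bar: 'bar from profile2'

Starting the application with SPRING_PROFILES_ACTIVE=profile1,profile2 activates both documents; the profile2 document is defined later in the file, so its value of bar wins, which is the "last wins" behaviour described above.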

Related

How to include dynamic configmap and secrets in helmfile

I am using helmfile to deploy multiple sub-charts using the helmfile sync -e env command.
I have configmaps and secrets which I need to load based on the environment.
Is there a way to load configmaps and secrets based on the environment in the helmfile?
I tried adding this in dev-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend
  namespace: {{ .Values.namespace }}
data:
  NODE_ENV: {{ .Values.env }}
And in helmfile.yaml:
environments:
  envDefaults: &envDefaults
    values:
      - ./values/{{ .Environment.Name }}.yaml
      - kubeContext: ''
        namespace: '{{ .Environment.Name }}'
  dev:
    <<: *envDefaults
    secrets:
      - ./config/secrets/dev.yaml
      - ./config/configmap/dev.yaml
Is there a way to import configmap and secret (not encrypted) YAML dynamically based on the environment in helmfile?

Make Laravel env variables visible to frontend via Kubernetes

I have an app built with Laravel, and I want to make one of the env variables accessible from the frontend.
As the official Laravel documentation indicates: https://laravel.com/docs/master/mix#environment-variables
If we prefix specific env variables with the MIX_ prefix, they will be available and accessible from JavaScript. Locally this works perfectly.
Now, the thing is I want to set the env variables via a Kubernetes ConfigMap when deploying to staging and production.
Here is my config-map.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "common.fullname" . }}-app-config
  labels:
    {{- include "common.labels" . | indent 4 }}
data:
  .env: |+
    EVENT_LOGGER_API={{ .Values.app.eventLoggerApi }}
    MIX_EVENT_LOGGER_API="${EVENT_LOGGER_API}"
And here is my deployment.yaml file:
volumes:
  - name: app-config
    configMap:
      name: {{ template "common.fullname" . }}-app-config
volumeMounts:
  - name: app-config
    mountPath: /var/www/html/.env
    subPath: .env
These env variables are visible in the Laravel backend, but they cannot be accessed via JavaScript the way they are when running locally:
process.env.MIX_EVENT_LOGGER_API
Has anyone had experience with setting these env variables via a Kubernetes ConfigMap and making them accessible via JavaScript?
If you add the variables after building your frontend, they will not be available: Mix bakes the MIX_ variables into the compiled JavaScript at build time.
You need to include those variables in the job that does the build/push, as sketched below.
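As a rough sketch of that (only MIX_EVENT_LOGGER_API comes from the question; the image name, URL and build steps below are assumptions): the value has to exist as a build argument so it is already set when the Mix/webpack build bakes the MIX_ variables into the JavaScript bundle.

# Dockerfile (sketch)
ARG MIX_EVENT_LOGGER_API
ENV MIX_EVENT_LOGGER_API=$MIX_EVENT_LOGGER_API
RUN npm run production   # Mix picks up MIX_* variables at build time

# build/push step (sketch)
docker build --build-arg MIX_EVENT_LOGGER_API=https://logger.example.com -t myrepo.azurecr.io/app:latest .
docker push myrepo.azurecr.io/app:latest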

Helm: Executing command on other containers in Job

I want to use chart hooks' post-install hook to run an action in my deployment's container.
For example, I have a php-fpm container running a Laravel application, and I want to run php artisan key:gen on install. Since it's a one-time command, I can't put it in the postStart lifecycle hook, otherwise it would keep overwriting the APP_KEY.
How can I use chart hooks to achieve this? Or is there a better way?
Your Job needs to run a container that contains kubectl, and it would execute a script like this to exec into the other container. Since kubectl exec doesn't support selection by labels, you need to retrieve the pod name beforehand:
pod=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -l YOUR-LABELS=YOUR-VALUES)
kubectl exec "$pod" -- php artisan key:gen
If you think about the lifecycle of this key: if there are multiple pod replicas they need to agree on what the key is; and if you delete and recreate the pod, it needs to be using the same key it was using before. (A quick Google search comes up with some good descriptions of what this key is actually used for; if it's encrypting session cookies, for example, every copy of the pod really needs to agree.)
This suggests a setup where you generate the key once, store it in a Kubernetes Secret, and make it available to pods. Conveniently, "any variable in your .env file can be overridden by external environment variables", and you can set an environment variable from a secret value. There isn't a great way to make Helm generate the secret itself in a way that will be saved.
So, putting these pieces together: in your pod spec (inside your deployment spec) you need to get the environment variable from the secret.
env:
  - name: APP_KEY
    valueFrom:
      secretKeyRef:
        name: "{{ .Release.Name }}-{{ .Chart.Name }}"
        key: app-key
Then you need to create a secret to hold the key.
apiVersion: v1
kind: Secret
metadata:
  name: "{{ .Release.Name }}-{{ .Chart.Name }}"
data:
  app-key: {{ printf "base64:%s" .Values.appKey | b64enc }}
And finally create the file holding the key. This should not be checked in as part of your chart.
echo "appKey: $(dd if=/dev/urandom bs=32 count=1 | base64)" > values-local.yaml
When you go to install your chart, use this values file
helm install ./charts/myapp -f values-local.yaml
There are a couple of other reasonable approaches that involve injecting the whole .env file as a ConfigMap or Secret, or extending your Docker image to generate this file on its own from values that get passed into it, or using an init container to generate the file before the main container starts. The point is that pods come and go, and need to be able to configure themselves when they start up; using kubectl exec in the way you're suggesting isn't great practice.
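For completeness, a minimal sketch of the "whole .env as a Secret" variant mentioned above (the secret name, mount path and file contents are assumptions, not something from the question):

apiVersion: v1
kind: Secret
metadata:
  name: myapp-env
stringData:
  .env: |
    APP_KEY=base64:...
    APP_ENV=production

# and in the pod spec of the deployment:
volumes:
  - name: laravel-env
    secret:
      secretName: myapp-env
containers:
  - name: app
    volumeMounts:
      - name: laravel-env
        mountPath: /var/www/html/.env
        subPath: .env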
You can define a Job that will be run only once, when the Helm chart is installed:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "alpine:3.3"
          command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
If you want to run the job every time you upgrade the chart, you can specify the post-upgrade hook as well, as in the snippet below.
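For example, the hook annotation accepts a comma-separated list, so a job that should run on both install and upgrade would declare (sketch):

annotations:
  "helm.sh/hook": post-install,post-upgrade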
Read more here: https://github.com/helm/helm/blob/master/docs/charts_hooks.md

Can Kubernetes services deployed by Helm be configured to be restarted when manually deleted via kubectl?

I am trying to understand the nature of Helm deployments in general. I have a deployment managed by Helm which brings up a JDBC service using a service.yaml file.
Upon deployment, I can clearly see that the service is alive, in accordance with the service.yaml file.
If I manually delete the service, the service stays dead.
My question is: if I manually delete the service using kubectl delete, is the service supposed to be restarted, since the deployment is Helm managed?
Is there any option to configure the service to be restarted even on a manual delete?
Is this the default and expected behaviour?
I have tried numerous options and scoured the docs, but I am unable to find the spec/option/config that causes services to be restarted on delete, unlike pods, which have an "Always" restart policy.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.exampleJDBCService.name }}
  namespace: {{ .Release.Namespace }}
spec:
  type: {{ .Values.exampleJDBCService.type }}
  sessionAffinity: "{{ .Values.sessionAffinity.type }}"
  {{- if (eq .Values.sessionAffinity.type "ClientIP") }}
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: {{ .Values.sessionAffinity.timeoutSeconds }}
  {{- end }}
  selector:
    {{ template "spark-example.fullname" . }}: "true"
  ports:
    - protocol: TCP
      port: {{ .Values.exampleJDBCService.clusterNodePort }}
      targetPort: {{ .Values.exampleJDBCService.targetPort }}
      {{- if (and (eq .Values.exampleJDBCService.type "NodePort") (not (empty .Values.exampleJDBCService.clusterNodePort))) }}
      nodePort: {{ .Values.exampleJDBCService.clusterNodePort }}
      {{- end }}
You're mixing things up a bit.
The restartPolicy: Always that you define on a pod configures its containers to always be restarted upon completion or failure.
The reason you see a pod recreated upon deletion is that it was created by a Deployment object, which always wants the required number of pods to exist.
Helm does not interact with the deletion of objects in the cluster; once it has created its objects, it doesn't interact with them again until the next helm command.
Hope that helps you understand the terms a bit better.
Deleted or corrupted Kubernetes resource objects (in your case the Service) cannot be "restarted" automatically by Tiller, but luckily they can be restored to the desired state of configuration with the following helm command:
helm upgrade <your-release-name> <repo-name>/<chart-name> --reuse-values --force
e.g.
helm upgrade my-ingress stable/nginx-ingress --reuse-values --force
You can also use:
helm history <release_name>
helm rollback --force [RELEASE] [REVISION]
The --force argument in both cases forces the resource update through a delete/recreate if needed.

Spring boot log secret.yaml from helm

I am getting started with Helm. I have defined the deployment, service, configMap and secret YAML files.
I have a simple Spring Boot application with basic HTTP authentication; the username and password are defined in the secret file.
My application is deployed correctly, but when I test it in the browser, it tells me that the username and password are wrong.
Is there a way to know which values Spring Boot receives from Helm?
Or is there a way to decrypt the secret.yaml file?
values.yaml
image:
  repository: myrepo.azurecr.io
  name: my-service
  tag: latest
replicaCount: 1
users:
  - name: "admin"
    password: "admintest"
    authority: "admin"
  - name: "user-test"
    password: "usertest"
    authority: "user"
spring:
  datasource:
    url: someurl
    username: someusername
    password: somepassword
    platform: postgresql
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-secret
stringData:
  spring.datasource.url: "{{ .Values.spring.datasource.url }}"
  spring.datasource.username: "{{ .Values.spring.datasource.username }}"
  spring.datasource.password: "{{ .Values.spring.datasource.password }}"
  spring.datasource.platform: "{{ .Values.spring.datasource.platform }}"
  {{- range $idx, $user := .Values.users }}
  users_{{ $idx }}_.name: "{{ $user.name }}"
  users_{{ $idx }}_.password: "{{ printf $user.password }}"
  users_{{ $idx }}_.authority: "{{ printf $user.authority }}"
  {{- end }}
Normally the secret in the secret.yaml file won't be encrypted, just encoded in base64. So you could decode the content of the secret with a tool like https://www.base64decode.org/. If you've got access to the Kubernetes dashboard, that also provides a way to see the value of the secret.
If you're injecting the secret as environment variables, then you can find the pod with kubectl get pods, and kubectl describe pod <pod_name> will include output showing which environment variables are injected.
With Helm I find it very useful to run helm install --dry-run --debug, as you can then see in the console exactly which Kubernetes resources will be created from the template for that install.
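To make the base64 point concrete: a Secret written with stringData (as in the question) is stored by the API server under data with base64-encoded values, roughly like this (the name is illustrative; the values are the ones from the values.yaml above):

apiVersion: v1
kind: Secret
metadata:
  name: my-release-my-service-secret
data:
  spring.datasource.username: c29tZXVzZXJuYW1l   # base64 of "someusername"
  spring.datasource.password: c29tZXBhc3N3b3Jk   # base64 of "somepassword"

Anyone who can read the Secret (e.g. with kubectl get secret <name> -o yaml) can decode those values, so base64 is an encoding, not encryption.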
