Increase heap memory in a container with Kubernetes - Spring

I'm facing an issue in my Spring Boot service. After deploying it on Kubernetes I get a Java heap space error. I have set the following environment configuration in my deployment.yaml:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    env:
      - name: JAVA_OPTS
        value: "-Xms512M -Xmx512M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1"
But after making a new deployment I'm still having the same issue because this setting has had no effect. I went inside the container and saw that the heap memory is still around 200 MB.
My image runs on OpenJDK 11. Any idea why this is not working correctly?
Thank you.

I'm guessing you just set that env variable, which by itself does nothing; you need to pass it to the java command in your Dockerfile's RUN/ENTRYPOINT, or as args for your Kubernetes container command (see the sketch below).
Most likely something like
ENTRYPOINT java $JAVA_OPTS -jar <path to your jar>
would work.
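On OpenJDK 11 the -XX:+UseCGroupMemoryLimitForHeap flag is obsolete anyway (container awareness is on by default, and -XX:MaxRAMPercentage replaces -XX:MaxRAMFraction), so it is ignored even when the variable does reach the JVM. A minimal sketch of the Kubernetes-args alternative, assuming the jar sits at /app.jar (a hypothetical path, adjust to your image):
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    # run through a shell so $JAVA_OPTS is expanded at container start
    command: ["sh", "-c"]
    args: ["exec java $JAVA_OPTS -jar /app.jar"]
    env:
      - name: JAVA_OPTS
        value: "-Xms512M -Xmx512M"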

Related

Spring active profile setup for an existing application

Any help is much appreciated. I have a couple of Spring Boot applications running in AKS with the default profile, and I am trying to change the profile from my deployment.yaml using Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    helm.sh/chart: {{ include "helm-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "helm-chart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "helm-chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "dev"
What I end up with is my pod being put into a CrashLoopBackOff state, saying:
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-01-12 12:42:49.054 ERROR 1 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
APPLICATION FAILED TO START
Description:
The Tomcat connector configured to listen on port 8207 failed to start. The port may already be in use or the connector may be misconfigured.
I tried deleting the existing pod and service for the application and did a fresh deploy; I still get the same error.
Methods tried (in all of them the Docker image is built, the pod is created, and the application in the pod is set to the dev profile, but it is not able to start, failing with the above error; when I remove the profile setting, everything works perfectly fine except that the application is set to the default profile):
In the Dockerfile:
option a. CMD ["java","-jar","/app.jar", "--spring.profiles.active=dev"]
option b. CMD ["java","-jar","-Dspring.profiles.active=dev","/app.jar"]
changed in deployment.yml as mentioned above
PS: I don't have a properties file in my application in src/main/resources; I only have application-(env).yml files there.
The idea is to set the profile first, and based on the profile the corresponding application_(env).yml has to be selected.
Output from Helm:
Release "app" has been upgraded. Happy Helming!
NAME: email-service
LAST DEPLOYED: Thu Jan 13 16:09:46 2022
NAMESPACE: default
STATUS: deployed
REVISION: 19
TEST SUITE: None
USER-SUPPLIED VALUES:
image:
  repository: 957123096554.dkr.ecr.eu-central-1.amazonaws.com/app
service:
  targetPort: 8207
COMPUTED VALUES:
image:
  pullPolicy: Always
  repository: 957123096554.dkr.ecr.eu-central-1.amazonaws.com/app-service
  tag: latest
replicaCount: 1
service:
  port: 80
  targetPort: 8207
  type: ClusterIP
Any help is appreciated, thanks.
First of all, please check which profile the application is actually using; search for a line like this in the log:
The following profiles are active: test
When I tested with Spring Boot v2.2.2.RELEASE, an application_test.yml file is not used; it has to be renamed to application-test.yml. To better highlight the difference:
application_test.yml # NOT working
application-test.yml # working as expected
What I like even more (but it is Spring Boot specific) is that you can use application.yml like this:
foo: 'foo default'
bar: 'bar default'
---
spring:
  profiles:
    - test
bar: 'bar test2'
Why do I prefer this? Because you can then use multiple profiles, e.g. profile1,profile2, and it behaves as last-wins: the values from profile1 are overridden by the values from profile2, in the order they were defined... The same does not work with the application-profileName.yml approach.
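For example, tying this back to the Kubernetes deployment above (the profile names are made up for illustration), activating two profiles from the environment applies them in order, so values from local override values from dev:
env:
  - name: SPRING_PROFILES_ACTIVE
    value: "dev,local"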

Helm: Executing command on other containers in Job

I want to utilize Chart Hooks' post-install to perform some action on my deployment's container.
For example, I have a php-fpm container that runs a Laravel application, and I want to run php artisan key:gen on install. Since it's a one-time command, I couldn't place it in the postStart lifecycle hook, otherwise it would keep overwriting the APP_KEY.
How can I use Charts Hooks to achieve it? Or is there a better way?
Your job needs to run a container that contains kubectl, and you would execute a script like this to exec into another container. Since kubectl exec doesn't support selection by labels, you need to retrieve the pod name beforehand:
pod=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -l YOUR-LABELS=YOUR-VALUES)
kubectl exec "$pod" -- php artisan key:gen
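A minimal sketch of such a hook Job, assuming the bitnami/kubectl image and that the Job's service account has RBAC permission to list pods and create pods/exec (both are assumptions, not stated in the original answer):
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-key-gen"
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: key-gen
          image: bitnami/kubectl:latest  # any image that contains kubectl works
          command:
            - /bin/sh
            - -c
            - |
              pod=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -l YOUR-LABELS=YOUR-VALUES)
              kubectl exec "$pod" -- php artisan key:gen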
If you think about the lifecycle of this key: if there are multiple pod replicas they need to agree on what the key is; and if you delete and recreate the pod, it needs to be using the same key it was using before. (A quick Google search comes up with some good descriptions of what this key is actually used for; if it's encrypting session cookies, for example, every copy of the pod really needs to agree.)
This suggests a setup where you generate the key once, store it in a Kubernetes Secret, and make it available to pods. Conveniently, "any variable in your .env file can be overridden by external environment variables", and you can set an environment variable from a secret value. There isn't a great way to make Helm generate the secret itself in a way that will be saved.
So, putting these pieces together: in your pod spec (inside your deployment spec) you need to get the environment variable from the secret.
env:
  - name: APP_KEY
    valueFrom:
      secretKeyRef:
        name: "{{ .Release.Name }}-{{ .Chart.Name }}"
        key: app-key
Then you need to create a secret to hold the key.
apiVersion: v1
kind: Secret
metadata:
  name: "{{ .Release.Name }}-{{ .Chart.Name }}"
data:
  app-key: {{ printf "base64:%s" .Values.appKey | b64enc }}
And finally create the file holding the key. This should not be checked in as part of your chart.
echo "appKey: $(dd if=/dev/urandom bs=32 count=1 | base64)" > values-local.yaml
When you go to install your chart, use this values file
helm install ./charts/myapp -f values-local.yaml
There are a couple of other reasonable approaches that involve injecting the whole .env file as a ConfigMap or Secret, or extending your Docker image to generate this file on its own from values that get passed into it, or using an init container to generate the file before the main container starts. The point is that pods come and go, and need to be able to configure themselves when they start up; using kubectl exec in the way you're suggesting isn't great practice.
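As a sketch of the "inject the whole .env file as a Secret" variant mentioned above (the secret name, mount path, and .env key here are assumptions, not part of the original chart):
volumes:
  - name: laravel-env
    secret:
      secretName: "{{ .Release.Name }}-{{ .Chart.Name }}-env"  # hypothetical Secret holding a complete .env
containers:
  - name: app
    # ...
    volumeMounts:
      - name: laravel-env
        mountPath: /var/www/html/.env  # assumed Laravel application root
        subPath: .env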
You can define a Job that will be run only once, when the Helm chart is installed:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "alpine:3.3"
          command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
If you want to run the job every time you upgrade the chart, you can specify the "post-upgrade" hook instead.
Read more here: https://github.com/helm/helm/blob/master/docs/charts_hooks.md
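Adapted to the Laravel question above, the hook's container would run the artisan command instead of sleep. A minimal sketch, assuming a hypothetical your-registry/laravel-app image that contains the application code:
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "your-registry/laravel-app:latest"  # hypothetical image with the Laravel code
          command: ["php", "artisan", "key:generate"]
Note that key:generate run this way only rewrites the .env inside the hook's own container, so for replicated pods the Secret-based approach above is what actually persists the key.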

Can Kubernetes services deployed by Helm be configured to be restarted when manually deleted via kubectl?

I am trying to understand the nature of helm deployments in general. I have a deployment managed by helm which brings up a jdbc service using a service.yaml file.
Upon deployment, I can clearly see that the service is alive, in accordance to the service.yaml file.
If I manually delete the service, the service stays dead.
My question is: if I manually delete the service using kubectl delete, is the service supposed to be restarted, since the deployment is Helm managed?
Is there any option to configure the service to restart even on manual delete?
Is this the default and expected behaviour?
I have tried numerous options and scoured through the docs, but I am unable to find the spec/option/config that causes services to be restarted on delete, unlike pods, which have an 'Always Restart' option.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.exampleJDBCService.name }}
  namespace: {{ .Release.Namespace }}
spec:
  type: {{ .Values.exampleJDBCService.type }}
  sessionAffinity: "{{ .Values.sessionAffinity.type }}"
  {{- if (eq .Values.sessionAffinity.type "ClientIP") }}
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: {{ .Values.sessionAffinity.timeoutSeconds }}
  {{- end }}
  selector:
    {{ template "spark-example.fullname" . }}: "true"
  ports:
    - protocol: TCP
      port: {{ .Values.exampleJDBCService.clusterNodePort }}
      targetPort: {{ .Values.exampleJDBCService.targetPort }}
      {{- if (and (eq .Values.exampleJDBCService.type "NodePort") (not (empty .Values.exampleJDBCService.clusterNodePort))) }}
      nodePort: {{ .Values.exampleJDBCService.clusterNodePort }}
      {{- end }}
You are mixing a couple of things up.
The restartPolicy: Always that you define on a pod configures it to always be restarted upon completion or failure.
The reason you see a pod recreated upon deletion is that it was created by a Deployment object, whose controller always wants the required number of pods to be running.
Helm does not react to the deletion of objects in the cluster; once it has created its objects, it doesn't interact with them again until the next helm command.
Hope that helps you understand the terms a bit better.
Deleted or corrupted Kubernetes resource objects (in your case a Service) cannot be "restarted" automatically by Tiller, but luckily they can be restored to the desired state of configuration with the following helm command:
helm upgrade <your-release-name> <repo-name>/<chart-name> --reuse-values --force
e.g.
helm upgrade my-ingress stable/nginx-ingress --reuse-values --force
You can also use:
helm history <release_name>
helm rollback --force [RELEASE] [REVISION]
The --force argument in both cases forces the resource update through delete/recreate if needed.

Pass service/init.d name to Ansible handler as variable

I'm new to Ansible and was considering the following creation of services on the fly and how best to manage this. I have described below how this doesn't work, but hopefully it's enough to describe the problem.
Any pointers appreciated. Thanks.
I'm using a template file to deploy a bunch of near-identical application servers. During the deployment of the application servers, a corresponding init script is placed using the variable:
/etc/init.d/{{ application_instance }}
Next I'd like to enable it and ensure it's started:
- name: be sure app_xyz is running and enabled
  service: name={{ application_instance }} state=started enabled=yes
Further on I'd like to call a restart of the application when configuration files are updated:
- name: be sure app_xyz is configured
  template: src=xyz.conf dest=/opt/application/{{ application_server }}.conf
  notify:
    - restart {{ application_server }}
With the handler looking like this:
- name: restart {{ application_server }}
  service: name={{ application_server }} state=restarted
You don't need a dynamic handler name for it. What about a static handler name:
# handler
- name: restart application server
  service: name={{ application_server }} state=restarted

# task
- name: be sure app_xyz is configured
  template: src=xyz.conf dest=/opt/application/{{ application_server }}.conf
  notify:
    - restart application server
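For newer Ansible versions, the same idea in fully-qualified YAML syntax might look like the sketch below (the ansible.builtin.service and ansible.builtin.template modules are standard; the application_server variable and paths are taken from the snippets above):
# handler
- name: restart application server
  ansible.builtin.service:
    name: "{{ application_server }}"
    state: restarted

# task
- name: be sure app_xyz is configured
  ansible.builtin.template:
    src: xyz.conf
    dest: "/opt/application/{{ application_server }}.conf"
  notify:
    - restart application server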

How to iterate through a list variable to fill out an Ansible task's options?

Say I'm starting a Docker Container, and have a list of apps, including port information like so:
my_apps:
  - name: App1
    ports:
      - "2000:2000"
  - name: App2
    ports:
      - "2001:2001"
In the following task, would there be an easy way to extract all the ports from the above variable, for all apps, into the ports option below?
- name: My Docker Container
  docker_container:
    name: ubunty
    image: ubuntu
    ports:
      - "2000:2000"
      - "2001:2001"
Currently, I have another list going for all the ports, but in order to add another port, I have to add it to both lists, which becomes cumbersome. Was hoping there would be another way.
You can do something like:
- vars:
    list_of_ports:
      - "2000:2000"
      - "2001:2001"
...and then in your play:
- name: App1
  ports: "{{ list_of_ports }}"
- name: App2
  ports: "{{ list_of_ports }}"
The above may not be perfectly syntactically correct, but it's close enough to give you the idea.
You can use set_fact to build a new variable and append each app's ports to it. Then use that variable to call your container.
tasks:
  - name: Initialize ports
    set_fact:
      ports: []
  - name: Collect ports from apps
    set_fact:
      ports: "{{ ports + item.ports }}"
    with_items: "{{ my_apps }}"
Then call your container with the ports variable:
- name: My Docker Container
  docker_container:
    name: ubunty
    image: ubuntu
    ports: "{{ ports }}"
Here is a solution with Jinja filters to reduce your list of apps into a flat list of ports (map(attribute='ports') yields one list per app, so the result needs to be flattened):
my_apps | map(attribute='ports') | flatten
or in your task:
- name: My Docker Container
  docker_container:
    name: ubunty
    image: ubuntu
    ports: "{{ my_apps | map(attribute='ports') | flatten }}"
