How to include dynamic configmap and secrets in helmfile - helmfile

I am using helmfile to deploy multiple sub-charts with the helmfile sync -e env command.
I have ConfigMaps and Secrets which I need to load based on the environment.
Is there a way to load ConfigMaps and Secrets per environment in the helmfile?
This is what I tried.
dev-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend
  namespace: {{ .Values.namespace }}
data:
  NODE_ENV: {{ .Values.env }}
In helmfile.yaml
environments:
  envDefaults: &envDefaults
    values:
      - ./values/{{ .Environment.Name }}.yaml
      - kubeContext: ''
        namespace: '{{ .Environment.Name }}'
  dev:
    <<: *envDefaults
    secrets:
      - ./config/secrets/dev.yaml
      - ./config/configmap/dev.yaml
Is there a way to import ConfigMap and Secret (not encrypted) YAML dynamically based on the environment in helmfile?
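One pattern worth trying (a minimal sketch, not a verified setup; the release name, chart path and file locations are assumptions) is to let helmfile template the environment name into the values file paths of a release, and have the sub-chart render the ConfigMap and Secret from those values, since helm only applies manifests that a chart produces:
environments:
  dev:
    values:
      - ./values/dev.yaml
  prod:
    values:
      - ./values/prod.yaml

releases:
  - name: backend
    namespace: '{{ .Environment.Name }}'
    chart: ./charts/backend   # sub-chart that templates the ConfigMap and Secret from its values
    values:
      - ./config/configmap/{{ .Environment.Name }}.yaml
      - ./config/secrets/{{ .Environment.Name }}.yaml
With this layout, helmfile sync -e dev picks up the dev files and helmfile sync -e prod the prod ones.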

Related

What is the least privilege required for getting serviceaccounts including cluster-admin bound accounts?

I have a k8s cluster in minikube, configured a service account admin-user with the cluster-admin role, and am configuring the ServiceAccount below to use in my own application. Everything is applied in the same namespace and my pod spec uses serviceAccountName: me.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: me
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myRole
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts, secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myRoleBinding
subjects:
  - kind: ServiceAccount
    name: me
    namespace: namespace
roleRef:
  kind: Role
  name: myRole
  apiGroup: rbac.authorization.k8s.io
My application uses the rest.InClusterConfig.
When executing client.CoreV1().ServiceAccounts("namespace").Get(ctx, "admin-user", meta.GetOptions{}) I get this error:
serviceaccounts "admin-user" is forbidden: User "system:serviceaccount:namespace:me" cannot get resource "serviceaccounts" in API group "" in the namespace "namespace"
When I bind the me ServiceAccount to the default view ClusterRole instead of myRole, my client call works. From what I can tell, myRole grants the same serviceaccounts privileges that view does.
It seems I am not granting the correct privileges but I can't figure out what is necessary.
There is a typo in resources: ["serviceaccounts, secrets"]; as written it grants access to a single resource literally named "serviceaccounts, secrets".
It should be resources: ["serviceaccounts", "secrets"].
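With that fix applied, the Role from the question becomes:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myRole
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts", "secrets"]
    verbs: ["get"]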

Make Laravel env variables visible to frontend via Kubernetes

I have an app built with Laravel, and I want to make some of its env variables accessible from the frontend.
As the official Laravel documentation indicates: https://laravel.com/docs/master/mix#environment-variables
If we prefix env variables with MIX_, they become available and accessible from JavaScript. Locally this works perfectly.
Now, the thing is I want to set up the env variables via a Kubernetes ConfigMap when deploying to staging and production.
Here is my config-map.yaml file
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "common.fullname" . }}-app-config
  labels:
{{- include "common.labels" . | indent 4 }}
data:
  .env: |+
    EVENT_LOGGER_API={{ .Values.app.eventLoggerApi }}
    MIX_EVENT_LOGGER_API="${EVENT_LOGGER_API}"
Now, my deployment.yaml file
volumes:
  - name: app-config
    configMap:
      name: {{ template "common.fullname" . }}-app-config
volumeMounts:
  - name: app-config
    mountPath: /var/www/html/.env
    subPath: .env
These env variables are visible in the Laravel backend, but cannot be accessed via JavaScript the way they can when running locally:
process.env.MIX_EVENT_LOGGER_API
Has anyone had experience with setting these env variables via a Kubernetes ConfigMap and making them accessible via JavaScript?
If you add variables after building your frontend, those variables will not be available.
You need to include those variables in the job that does the build/push.
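As a rough sketch of that idea (a hypothetical GitHub Actions job; the names, registry and URL are assumptions, and registry authentication is omitted): the MIX_ value has to be present while npm run production compiles the assets, because Laravel Mix inlines MIX_* variables into the JavaScript bundle at that point; setting them later through a ConfigMap-mounted .env only reaches the PHP backend.
name: build-and-push
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      MIX_EVENT_LOGGER_API: "https://event-logger.example.com"   # assumed value, available at build time
    steps:
      - uses: actions/checkout@v3
      # compile the frontend with the MIX_ variable baked into the bundle
      - run: npm ci && npm run production
      # build and push an image that already contains the compiled assets
      - run: |
          docker build -t registry.example.com/laravel-app:latest .
          docker push registry.example.com/laravel-app:latest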

Spring Active profile setup for existing application

Any help much appreciated. I have a couple of Spring Boot applications running in AKS with the default profile, and I am trying to change the profile from my deployment.yaml using Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    helm.sh/chart: {{ include "helm-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "helm-chart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "helm-chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "dev"
What I end up with is my pod being put into a CrashLoopBackOff state, saying:
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-01-12 12:42:49.054 ERROR 1 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
APPLICATION FAILED TO START
Description:
The Tomcat connector configured to listen on port 8207 failed to start. The port may already be in use or the connector may be misconfigured.
I tried to delete the existing pod and service for the application and did a fresh deploy; I still get the same error.
Methods tried (in all methods the Docker image is built, the pod is created, and the application in the pod is set to the dev profile, but it is not able to start, failing with the above error; when I remove the profile setting, everything works perfectly except that the application runs with the default profile):
In the Dockerfile:
option a. CMD ["java","-jar","/app.jar", "--spring.profiles.active=dev"]
option b. CMD ["java","-jar","-Dspring.profiles.active=dev","/app.jar"]
changed in deployment.yml as mentioned above
PS: I don't have a properties file in my application's src/main/resources; I only have application-(env).yml files there.
The idea is to set the profile first, and based on the profile the application_(env).yml has to be selected.
output from helm
Release "app" has been upgraded. Happy Helming!
NAME: email-service
LAST DEPLOYED: Thu Jan 13 16:09:46 2022
NAMESPACE: default
STATUS: deployed
REVISION: 19
TEST SUITE: None
USER-SUPPLIED VALUES:
image:
  repository: 957123096554.dkr.ecr.eu-central-1.amazonaws.com/app
service:
  targetPort: 8207

COMPUTED VALUES:
image:
  pullPolicy: Always
  repository: 957123096554.dkr.ecr.eu-central-1.amazonaws.com/app-service
  tag: latest
replicaCount: 1
service:
  port: 80
  targetPort: 8207
  type: ClusterIP
Any help is appreciated, thanks.
First of all, please check what profile the application is actually using; search for a line like this in the log:
The following profiles are active: test
When I tested with Spring Boot v2.2.2.RELEASE, the application_test.yml file is not used; it has to be renamed to application-test.yml. To better highlight the difference:
application_test.yml # NOT working
application-test.yml # working as expected
What I like even more (but it is Spring Boot specific): you can use a single application.yml like this:
foo: 'foo default'
bar: 'bar default'
---
spring:
  profiles:
    - test
bar: 'bar test2'
Why do I prefer this? Because you can then use multiple profiles, e.g. profile1,profile2, and it behaves as last-wins: values from profile1 are overridden by values from profile2, in the order they were defined... The same does not work with the application-profileName.yml approach.
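To illustrate the last-wins behaviour described above, a small sketch (the property and profile names are made up):
# application.yml -- one file, several documents
bar: 'bar default'
---
spring:
  profiles:
    - profile1
bar: 'bar from profile1'
---
spring:
  profiles:
    - profile2
bar: 'bar from profile2'
Started with --spring.profiles.active=profile1,profile2, the application resolves bar to 'bar from profile2', because the document matching the later profile overrides the one matching the earlier profile.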

Shell script in Kubernetes worker pod

I have a requirement to run a shell script from a worker pod. For that I have created the ConfigMap and loaded it as a volume. When I applied the configuration, I see that a directory with the name of the shell script is created. Can you help me understand why I see this behaviour?
volumes:
  - name: rhfeed
    configMap:
      name: rhfeed
      defaultMode: 0777
volumeMounts:
  - name: rhfeed
    mountPath: /local_path/rh-feed/rh-generator.sh
    subPath: rh-generator.sh
    readOnly: true
drwxrwsrwx 2 root 1000 4096 Jun 22 06:55 rh-generator.sh
The rh-generator.sh folder was created because you have used subPath.
subPath creates a separate folder.
I have just performed a test with a ConfigMap serving some static files for an nginx pod, and my file was mounted correctly without overriding or deleting the content of the directory.
Here's the configMap yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: static-files
data:
  index.html: |
    <html><head><title>File Mounted from ConfigMap</title></head>
    <body><h1>Welcome</h1>
    <p>Hi, This confirm that your yaml with subPath is working fine</p>
    </body></html>
And here is the pod yaml file:
➜ ~ cat pod-static-files.yaml
apiVersion: v1
kind: Pod
metadata:
  name: website
spec:
  volumes:
    - name: static-files
      configMap:
        name: static-files
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: static-files
          mountPath: /usr/share/nginx/index.html
          subPath: index.html
As you can see below, I have mounted the file into the directory without any issues:
➜ ~ keti website bash
root@website:~# ls /usr/share/nginx/
html index.html
And here is the content of the /usr/share/nginx/index.html file:
➜ root@website:~# cat /usr/share/nginx/index.html
<html><head><title>File Mounted from ConfigMap</title></head>
<body><h1>Welcome</h1>
<p>Hi, This confirm that your yaml with subPath is working fine</p>
</body></html>

Spring boot log secret.yaml from helm

I am getting started with Helm. I have defined the deployment, service, configMap and secret yaml files.
I have a simple Spring Boot application with basic HTTP authentication; the username and password are defined in the secret file.
My application is deployed correctly, but when I test it in the browser, it tells me that the username and password are wrong.
Is there a way to know what values Spring Boot receives from Helm?
Or is there a way to decrypt the secret.yaml file?
values.yaml
image:
  repository: myrepo.azurecr.io
  name: my-service
  tag: latest
replicaCount: 1
users:
  - name: "admin"
    password: "admintest"
    authority: "admin"
  - name: "user-test"
    password: "usertest"
    authority: "user"
spring:
  datasource:
    url: someurl
    username: someusername
    password: somepassword
    platform: postgresql
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-secret
stringData:
  spring.datasource.url: "{{ .Values.spring.datasource.url }}"
  spring.datasource.username: "{{ .Values.spring.datasource.username }}"
  spring.datasource.password: "{{ .Values.spring.datasource.password }}"
  spring.datasource.platform: "{{ .Values.spring.datasource.platform }}"
  {{- range $idx, $user := .Values.users }}
  users_{{ $idx }}_.name: "{{ $user.name }}"
  users_{{ $idx }}_.password: "{{ printf $user.password }}"
  users_{{ $idx }}_.authority: "{{ printf $user.authority }}"
  {{- end }}
Normally the secret in the secret.yaml file won't be encrypted, just encoded in base64. So you could decode the content of the secret in a tool like https://www.base64decode.org/. If you've got access to the Kubernetes dashboard, that also provides a way to see the value of the secret.
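For illustration, a sketch of roughly what the stored Secret looks like when read back from the cluster (the metadata name is an assumption; the values are the placeholders from values.yaml above), with each data value base64-encoded:
apiVersion: v1
kind: Secret
metadata:
  name: my-release-my-chart-secret   # assumed rendered name
type: Opaque
data:
  spring.datasource.username: c29tZXVzZXJuYW1l   # base64 of "someusername"
  spring.datasource.password: c29tZXBhc3N3b3Jk   # base64 of "somepassword"
Decoding those values (with the website above, or any base64 decoder) gives back the plain strings, so you can compare them against what the application expects.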
If you're injecting the secret as environment variables, then you can find the pod with kubectl get pods, and kubectl describe pod <pod_name> will include output showing which environment variables are injected.
With Helm I find it very useful to run helm install --dry-run --debug, as then you can see in the console exactly which Kubernetes resources will be created from the template for that install.
