How can I serve a static file to my local Kubernetes deployed service from my controller file? - go

I have defined a deployment file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ ... }}
  labels:
    app.kubernetes.io/name: {{ ... }}
    helm.sh/chart: {{ ... }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  ...
My service implements JWT validation and thus requires a public key. Can I somehow specify in the deployment file that a locally generated public-key file should be served to my service?

You can do it with ConfigMaps. ConfigMaps are resources that are used, essentially, to deploy single files; I'm currently using one for my cluster's nginx configuration.
In your ConfigMap, write the contents of your public key into the data field, then tell your deployment to mount that ConfigMap and read from it. It's very similar to mounting a volume for a single file. You may need to update your deployed image to read from the mounted location, though.
Search for nginx in Kubernetes for examples of how people use ConfigMaps to deploy their configuration (in your case, a public key) to their clusters.
For testing you can create your ConfigMap with this command: kubectl create configmap public-conf --from-file=./your-public-key. This will create a ConfigMap called public-conf. You can run kubectl get configmap to see your newly created ConfigMap.
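A minimal sketch of the deployment side (the volume name and the /etc/keys mount path are assumptions; public-conf matches the command above, and the key file then shows up as /etc/keys/your-public-key):
spec:
  volumes:
    - name: pub-key-volume
      configMap:
        name: public-conf        # created with the kubectl create configmap command above
  containers:
    - name: your-service
      volumeMounts:
        - name: pub-key-volume
          readOnly: true
          mountPath: /etc/keys   # the key is readable as a plain file under this path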

I ended up using a Secret, as suggested by @Crou, to hold the key:
$ kubectl create secret generic pub-key --from-file=./jwt-key.pub
and then mounted it as a volume in my deployment YAML:
spec:
  volumes:
    - name: secret
      secret:
        secretName: pub-key
        defaultMode: 256
  ...
  containers:
    - ...
      volumeMounts:
        - name: secret
          readOnly: true
          mountPath: /secret
and was able to access my key at /secret/jwt-key.pub
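For reference, a minimal Go sketch of how the service could load the mounted key (this assumes the key is a PEM-encoded PKIX "BEGIN PUBLIC KEY" RSA key; adjust the parsing if yours is in a different format):
package main

import (
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// loadPublicKey reads the key mounted by the Secret volume and parses
// it into an *rsa.PublicKey for use by the JWT validator.
func loadPublicKey(path string) (*rsa.PublicKey, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read key file: %w", err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return nil, fmt.Errorf("no PEM block found in %s", path)
	}
	pub, err := x509.ParsePKIXPublicKey(block.Bytes)
	if err != nil {
		return nil, fmt.Errorf("parse public key: %w", err)
	}
	rsaPub, ok := pub.(*rsa.PublicKey)
	if !ok {
		return nil, fmt.Errorf("expected an RSA public key, got %T", pub)
	}
	return rsaPub, nil
}

func main() {
	pub, err := loadPublicKey("/secret/jwt-key.pub") // path from the volumeMount above
	if err != nil {
		panic(err)
	}
	fmt.Printf("loaded RSA public key (%d bits)\n", pub.N.BitLen())
}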

Related

Spring Cloud Kubernetes is not loading secret keys with pattern like xx.yy

I am trying to learn about Spring Cloud Kubernetes for loading secrets, and what I have observed is that if a property key has a yml-like structure (a dotted key such as amq.password), it doesn't get loaded in the app.
Example:
kind: Secret
metadata:
  name: activemq-secrets
  labels:
    broker: activemq
type: Opaque
data:
  amqusername: bXl1c2VyCg==
  amq.password: MWYyZDFlMmU2N2Rm
K8s manifest:
template:
  spec:
    volumes:
      - name: secretvolume
        secret:
          secretName: activemq-secrets
    containers:
      - volumeMounts:
          - name: secretvolume
            readOnly: true
            mountPath: /etc/secrets/
JVM args:
-Dspring.cloud.kubernetes.secrets.paths=/etc/secrets/
-Dspring.cloud.kubernetes.secrets.enabled=true
Loading with @Value("${amqusername}") works, but when I try to read the property with @Value("${amq.password}") I get an error that the placeholder cannot be found. I have tried printing all Spring configs and it doesn't show up. How can I fix this?
Try changing the key name in the secret to amq_password.
Update:
If you use environment variables rather than system properties, most operating systems disallow period-separated key names, but you can use underscores instead (e.g. SPRING_CONFIG_NAME instead of spring.config.name).
https://docs.spring.io/spring-boot/docs/1.5.6.RELEASE/reference/html/boot-features-external-config.html
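For illustration, only the key name in the Secret changes (a sketch based on the manifest above; the mounted file name becomes the property name):
data:
  amqusername: bXl1c2VyCg==
  amq_password: MWYyZDFlMmU2N2Rm
and the property is then read with @Value("${amq_password}").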

Spring boot application not using k8 gke service account instead using a default service account

Problem Statement:
I have deployed a Spring Boot app in GKE under a namespace. When the app starts, it uses the default GCE service account credentials to authenticate.
What I did: I created a GKE (Kubernetes) service account, used an IAM policy binding to bind it to a Google service account with the Workload Identity User role, and then annotated the GKE service account by executing the two commands below.
The issue is that my Spring Boot app still uses the default GCE service account credentials.
Can someone please help me resolve this?
I can see serviceAccountName is changed to the new GKE K8s service account and the secret is also getting created and mounted, but the deployed app is not using this GKE service account.
Note: I am using a Helm chart for deployment.
gcloud iam service-accounts add-iam-policy-binding \
  --member serviceAccount:{projectID}.svc.id.goog[default/{k8sServiceAccount}] \
  --role roles/iam.workloadIdentityUser \
  {googleServiceAccount}

kubectl annotate serviceaccount \
  --namespace default \
  {k8sServiceAccount} \
  iam.gke.io/gcp-service-account={googleServiceAccount}
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: helloworld
    appVersion: {{ .Values.appVersion }}
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
        environment: {{ .Values.environment }}
    spec:
      serviceAccountName: {{ .Values.serviceAccountName }}
      containers:
        - name: helloworld
          image: {{ .Values.imageSha }}
          imagePullPolicy: Always
          securityContext:
            allowPrivilegeEscalation: false
            runAsUser: 1000
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_CONFIG_LOCATION
              value: "/app/deployments/config/"
          volumeMounts:
            - name: application-config
              mountPath: "/app/deployments/config"
              readOnly: true
      volumes:
        - name: application-config
          configMap:
            name: {{ .Values.configMapName }}
            items:
              - key: application.properties
                path: application.properties
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been automatically set.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster [1]. The API permissions of the service account depend on the authorization plugin and policy [2] in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...
The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
[1] https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
[2] https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules
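As a quick check for the Workload Identity problem above (a sketch; the pod name is a placeholder and curl must be available in the image), you can verify which identity the pod actually receives:
# Which Kubernetes service account is the pod running as?
kubectl get pod <pod-name> -o jsonpath='{.spec.serviceAccountName}'
# From inside the pod: which Google identity does the metadata server hand out?
kubectl exec <pod-name> -- curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
If Workload Identity is wired up correctly, the second command prints the bound Google service account rather than the default GCE one.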

How to populate application.properties file value from kubernetes Secrets mounted as file

I am working on Spring Boot and Kubernetes, and I have a really simple application that connects to a Postgres database. I want to get the datasource values from a ConfigMap and the password from a Secret mounted as a file.
ConfigMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customer-config
data:
  application.properties: |
    server.forward-headers-strategy=framework
    spring.datasource.url=jdbc:postgresql://test/customer
    spring.datasource.username=postgres
Secret file:
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
data:
  spring.datasource.password: cG9zdGdyZXM=
Deployment file:
spec:
  containers:
    - name: customerc
      image: localhost:8080/customer
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 8282
      volumeMounts:
        - mountPath: /workspace/config/default
          name: config-volume
        - mountPath: /workspace/secret/default
          name: secret-volume
  volumes:
    - name: config-volume
      configMap:
        name: customer-config
    - name: secret-volume
      secret:
        secretName: secret-demo
        items:
          - key: spring.datasource.password
            path: password
If I move the spring.datasource.password property from the Secret to the ConfigMap, it works fine; if I populate its value as an environment variable, it also works fine.
But since, as we know, neither of those is a secure way to do it, can someone tell me what's wrong with file mounting for secrets?
Spring Boot 2.4 added support for importing a config tree. This support can be used to consume configuration from a volume mounted by Kubernetes.
As an example, let’s imagine that Kubernetes has mounted the following volume:
etc/
  config/
    myapp/
      username
      password
The contents of the username file would be a config value, and the contents of password would be a secret.
To import these properties, you can add the following to your application.properties file:
spring.config.import=optional:configtree:/etc/config/
This will result in the properties myapp.username and myapp.password being set. Their values will be the contents of /etc/config/myapp/username and /etc/config/myapp/password respectively.
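Applied to the deployment in the question (a sketch; the path matches the secret-volume mount shown above, where items.path renames the key to password), the import exposes the file contents as a property named password, which you can then map onto the datasource property in application.properties:
spring.config.import=optional:configtree:/workspace/secret/default/
spring.datasource.password=${password}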
By default, consuming secrets through the API is not enabled, for security reasons. Spring Cloud Kubernetes requires access to the Kubernetes API in order to retrieve the list of addresses of pods running for a single service. The simplest way to allow that when using Minikube is to create a default ClusterRoleBinding with the cluster-admin privilege (note that this is far broader than needed outside of local testing).
An example of how to create one:
$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default
You need to specify the secret type in the manifest file. Hopefully that will make it work.
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
type: Opaque
data:
  spring.datasource.password: cG9zdGdyZXM=
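As a sanity check, the data value must be valid base64, including the trailing = padding; for example:
$ echo -n postgres | base64
cG9zdGdyZXM=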

ConfigMap data (yml format) - Kubernetes

I have an application.yml (Spring) file with almost 70 fields that I want to move to a ConfigMap.
In the process of setting up the ConfigMap, I realized that all 70 fields have to be flattened, for example: webservice.endpoint.transferfund.
Converting all 70 fields to flat keys would be painful; is there an alternative?
Please suggest.
The config below is working:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmapname
  namespace: default
data:
  webservice.endpoint.transferfund: http://www.customer-service.app/api/tf
  webservice.endpoint.getbalance: http://www.customer-service.app/api/balance
  webservice.endpoint.customerinfo: http://www.customer-service.app/api/customerinfo
The config below is not working; I tried it in yml format:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmapname
  namespace: default
data:
  application.yaml: |-
    webservice:
      endpoint:
        transferfund: http://www.customer-service.app/api/tf
        getbalance: http://www.customer-service.app/api/balance
        customerinfo: http://www.customer-service.app/api/customerinfo
In src/main/resources/application.yml I have the fields below to access the ConfigMap keys:
webservice:
  endpoint:
    transferfund: ${webservice.endpoint.transferfund}
    getbalance: ${webservice.endpoint.getbalance}
    customerinfo: ${webservice.endpoint.customerinfo}
Updated:
ConfigMap Description:
C:\Users\deskktop>kubectl describe configmap configmapname
Name:         configmapname
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
application.yaml:
----
webservice:
  endpoint:
    transferfund: http://www.customer-service.app/api/tf
    getbalance: http://www.customer-service.app/api/balance
    customerinfo: http://www.customer-service.app/api/customerinfo

Events:  <none>
Deployment script (configMapRef name provided as the ConfigMap name shown above):
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: configmap-sample
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: configmap-sample
    spec:
      containers:
        - name: configmap-sample
          image: <<image>>
          ports:
            - name: http-api
              containerPort: 9000
          envFrom:
            - configMapRef:
                name: configmapname
          resources:
            limits:
              memory: 1Gi
            requests:
              memory: 768Mi
          env:
            - name: JVM_OPTS
              value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -Xms768M"
A ConfigMap is a dictionary of configuration settings. It consists of key-value pairs of strings. Kubernetes then adds those values to your containers.
In your case you have to make them flat, because Kubernetes will not understand them otherwise.
You can read in the documentation about creating a ConfigMap:
kubectl create configmap <map-name> <data-source>
where <map-name> is the name you want to assign to the ConfigMap and <data-source> is the directory, file, or literal value to draw the data from.
The data source corresponds to a key-value pair in the ConfigMap, where
key = the file name or the key you provided on the command line, and
value = the file contents or the literal value you provided on the command line.
You can use kubectl describe or kubectl get to retrieve information about a ConfigMap.
EDIT
You could create a ConfigMap from a file with a defined key.
Define the key to use when creating the ConfigMap from a file. The syntax might look like this:
kubectl create configmap my-configmap --from-file=<my-key-name>=<path-to-file>
(ConfigMap names must be valid DNS subdomains, so use my-configmap rather than my_configmap.)
And the ConfigMap might look like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2019-07-03T18:54:22Z
  name: my-configmap
  namespace: default
  resourceVersion: "530"
  selfLink: /api/v1/namespaces/default/configmaps/my-configmap
  uid: 05f8da22-d671-11e5-8cd0-68f728db1985
data:
  <my-key-name>: |
    key=value
    key=value
    key=value
    key=value
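For the question above, the command might look like this (a sketch; the key name application.yaml and the source path are assumptions based on the posted ConfigMap):
kubectl create configmap configmapname --from-file=application.yaml=src/main/resources/application.yml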
I was also able to find Create Kubernetes ConfigMaps from configuration files.
Functionality
The projector can:
- take raw files and stuff them into a ConfigMap
- glob files in your config repo, and stuff ALL of them into your ConfigMap
- extract fields from your structured data (yaml/json)
- create new structured outputs from a subset of a yaml/json source by pulling out some fields and dropping others
- translate back and forth between JSON and YAML (convert a YAML source to a JSON output, etc.)
- support extracting complex fields like objects+arrays from sources, not just scalars!
You need to mount the ConfigMap as a volume; otherwise the content will only be available through environment variables. The example I post here is from https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
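With the application.yaml from the question mounted this way, you could then point Spring Boot at the mounted file so the nested yml is read as-is instead of being flattened (a sketch; the /etc/config mount path comes from the example above):
env:
  - name: SPRING_CONFIG_ADDITIONAL_LOCATION
    value: "file:/etc/config/application.yaml"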
You mentioned you're using the application.yaml in the context of a Spring project. So if you don't care whether you use .yaml or .properties configuration files, you can just use property files, because ConfigMap generation supports them. It works with the --from-env-file flag:
kubectl create configmap configmapname --from-env-file=application.properties
Then in your deployment file you can directly access the keys:
...
env:
  - name: KEYNAME
    valueFrom:
      configMapKeyRef:
        name: configmapname
        key: KeyInPropertiesFile

Fail to connect to kubectl from client-go - /serviceaccount/token: no such file

I am using the Go library client-go to connect to a running local Kubernetes cluster. To start with, I took code from the example out-of-cluster-client-configuration.
Running the code like this:
$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go
results in the following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
I am not quite sure which part of the configuration I am missing. I've researched the following links:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
But with no luck.
I guess I need to either let client-go know which token/serviceAccount to use, or configure kubectl in a way that everyone can connect to its API.
Here's the status of my kubectl setup, with the results of some commands:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get serviceAccounts
NAME        SECRETS   AGE
default     1         3d
test-user   1         1d
$ kubectl describe serviceaccount test-user
Name:                test-user
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   test-user-token-hxcsk
Tokens:              test-user-token-hxcsk
Events:              <none>
$ kubectl get secret test-user-token-hxcsk -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: test-user
    kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
  creationTimestamp: 2018-06-09T10:55:17Z
  name: test-user-token-hxcsk
  namespace: default
  resourceVersion: "110618"
  selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
  uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
This answer could be a little outdated but I will try to give more perspective/baseline for future readers that encounter the same/similar problem.
TL;DR
The following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
is most likely connected with the lack of a token in the /var/run/secrets/kubernetes.io/serviceaccount location when using in-cluster-client-configuration. It can also happen when in-cluster-client-configuration code is used outside of a cluster (for example, running the code directly on a laptop or in a plain Docker container).
You can run the following commands to troubleshoot the issue further (assuming the code is running inside a Pod):
$ kubectl get serviceaccount X -o yaml
    look for: automountServiceAccountToken: false
$ kubectl describe pod XYZ
    look for: containers.mounts and volumeMounts where the Secret is mounted
Citing the official documentation:
Authenticating inside the cluster
This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running inside the Kubernetes cluster.
client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.
-- Github.com: Kubernetes: client-go: Examples: in cluster client configuration
If you are authenticating to the Kubernetes API with ~/.kube/config you should be using the out-of-cluster-client-configuration.
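A minimal out-of-cluster sketch in Go (assuming a recent client-go where List takes a context; the kubeconfig path is the usual default):
package main

import (
	"context"
	"flag"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Read credentials from kubeconfig instead of the in-cluster
	// service account token that rest.InClusterConfig() expects.
	kubeconfig := flag.String("kubeconfig",
		filepath.Join(homedir.HomeDir(), ".kube", "config"),
		"absolute path to the kubeconfig file")
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Simple smoke test: list pods across all namespaces.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
}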
Additional information:
Below is additional reference information for further troubleshooting when the code runs inside a Pod.
automountServiceAccountToken: false
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: go-serviceaccount
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: sdk
spec:
  serviceAccountName: go-serviceaccount
  automountServiceAccountToken: false
-- Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account
$ kubectl describe pod XYZ:
When the serviceAccount token is mounted, the Pod definition should look like this:
<-- OMITTED -->
Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
  go-serviceaccount-token-4rst8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  go-serviceaccount-token-4rst8
    Optional:    false
If it's not:
<-- OMITTED -->
Mounts: <none>
<-- OMITTED -->
Volumes: <none>
Additional resources:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication
Just to make it clear, in case it helps you further debug it: the problem has nothing to do with Go or your code, and everything to do with the Kubernetes node not being able to get a token from the Kubernetes master.
In kubectl config view, clusters.cluster.server should probably point at an IP address that the node can reach.
It needs to access the CA, i.e., the master, in order to provide that token, and I'm guessing it fails to for that reason.
kubectl describe pod <your_pod_name> would probably tell you what the problem was in acquiring the token.
Since you assumed the problem was Go/your code and focused on that, you neglected to provide more information about your Kubernetes setup, which makes it more difficult for me to give you a better answer than my guess above ;-)
But I hope it helps!
