Externalize configuration using ConfigMap with Spring Boot Kubernetes

I'm trying to externalize my Spring Boot configuration using ConfigMaps in Kubernetes. I've read the docs and added the dependency to my pom.xml:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-fabric8-config</artifactId>
    <version>2.1.3</version>
</dependency>
I set spring.application.name to webapp and created a ConfigMap from a YAML file:
spring:
  web:
    locale: en_US
    locale-resolver: fixed
Using this command:
kubectl create configmap webapp \
  --namespace webapp-production \
  --from-file=config.yaml
But when my application starts I get the following error:
Can't read configMap with name: [webapp] in namespace: [webapp-production]. Ignoring.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://IP/api/v1/namespaces/webapp-production/configmaps/webapp. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. configmaps "webapp" is forbidden: User "system:serviceaccount:webapp-production:default" cannot get resource "configmaps" in API group "" in the namespace "webapp-production".
I couldn't find any more info in the docs on how to configure access other than this:
You should check the security configuration section. To access config maps from inside a pod you need to have the correct Kubernetes service accounts, roles and role bindings.
How can I grant the required permissions?

I finally solved it by creating a dedicated ServiceAccount with the required RBAC objects, and setting spec.serviceAccountName in the Deployment's pod template:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-service-account
  namespace: webapp-production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # ClusterRoles are cluster-scoped, so no namespace here
  name: webapp-cluster-role
# Grant read access to configmaps for external configuration
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: webapp-cluster-role-binding
roleRef:
  kind: ClusterRole
  name: webapp-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: webapp-service-account
    namespace: webapp-production
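For reference, the Deployment then has to point at that ServiceAccount; a minimal sketch (the image and labels are placeholders, only serviceAccountName matters here):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: webapp-production
spec:
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      # run the pod as the new ServiceAccount instead of "default"
      serviceAccountName: webapp-service-account
      containers:
        - name: webapp
          image: example/webapp:latest  # placeholder image
```

Without this, the pod keeps running as the default ServiceAccount from the error message, and the binding above never applies to it.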

Related

Hazelcast deployment on Kubernetes without Cluster Roles

I have a Spring Boot application with embedded Hazelcast that I am trying to deploy on a shared Kubernetes platform. I want to use the Kubernetes API strategy for auto-discovery. Can I do this without creating ClusterRoles and ClusterRoleBindings, and instead have just a Role and RoleBinding created under my namespace? If yes, what would the rbac.yaml look like?
I tried creating the following Role and RoleBinding, but no auto-discovery so far.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hazelcast-role
  namespace: dev
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - pods
      - nodes
      - services
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hazelcast-role-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hazelcast-role
subjects:
  - kind: ServiceAccount
    name: default
    namespace: dev
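Namespace-scoped RBAC like this is generally enough for the Kubernetes API discovery mode, as long as Hazelcast is told to look in that same namespace (note that `nodes` is a cluster-scoped resource, so a namespaced Role cannot actually grant it; it is only needed for zone awareness and can usually be dropped). A sketch of the corresponding Hazelcast member config, where the service-name is an assumed headless service fronting your pods:

```yaml
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        namespace: dev            # must match the Role/RoleBinding namespace
        service-name: hazelcast   # hypothetical service selecting the Hazelcast pods
```

If discovery still fails, the first thing to check is that the pods actually run under the ServiceAccount named in the RoleBinding (here, default in dev).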

Fleet Server In Elastic Error: elastic-agent-cluster-leader is forbidden

We are setting up a fleet server in Kubernetes.
It has been given a CA and states it's running, but we cannot shell into it, and the logs show nothing but the following:
E0817 09:12:10.074969 927 leaderelection.go:330] error retrieving resource lock default/elastic-agent-cluster-leader: leases.coordination.k8s.io "elastic-agent-cluster-leader" is forbidden: User "system:serviceaccount:default:elastic-agent" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "default"
I can find very little information on this ever happening, let alone a resolution. Any information pointing to a possible resolution would be massively helpful!
You need to make sure that you have applied the ServiceAccount, ClusterRoles and ClusterRoleBindings from the setup files.
An example of these can be found in the quickstart documentation.
https://www.elastic.co/guide/en/cloud-on-k8s/2.2/k8s-elastic-agent-fleet-quickstart.html
Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: default
Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
      - watch
      - list
  - apiGroups: ["coordination.k8s.io"]
    resources:
      - leases
    verbs:
      - get
      - create
      - update
Cluster Role Binding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
  - kind: ServiceAccount
    name: elastic-agent
    namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
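After applying the manifests, you can check the exact permission the error message complained about, without restarting the agent (adjust the namespace and ServiceAccount name to your setup):

```shell
# impersonate the agent's ServiceAccount and ask the API server directly;
# prints "yes" once the ClusterRoleBinding is in place, "no" otherwise
kubectl auth can-i get leases.coordination.k8s.io \
  --as=system:serviceaccount:default:elastic-agent \
  -n default
```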

Kubernetes Dashboard - Internal error (500): Not enough data to create auth info structure

I have Kubernetes with ClusterRoles defined for my users, and permissions granted per namespace via RoleBindings.
I want these users to be able to access the Kubernetes Dashboard with their custom permissions. However, when they try to log in using the kubeconfig option, they get this message:
"Internal error (500): Not enough data to create auth info structure."
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md -- this guide only covers creating ADMIN users, not users with custom permissions or without privileges.
Update SOLVED:
You have to do this:
Create ServiceAccount per user
apiVersion: v1
kind: ServiceAccount
metadata:
  name: NAME-user
  namespace: kubernetes-dashboard
Adapt the RoleBinding, adding this ServiceAccount as a subject:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: PUT YOUR CR HERE
  namespace: PUT YOUR NS HERE
subjects:
  - kind: User
    name: PUT YOUR CR HERE
    apiGroup: 'rbac.authorization.k8s.io'
  - kind: ServiceAccount
    name: NAME-user
    namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
Get the token:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/NAME-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
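Note that on Kubernetes v1.24 and later, a token Secret is no longer created automatically for a ServiceAccount, so the jsonpath lookup above returns nothing. On those versions you can request a token directly instead:

```shell
# issues a short-lived token for the ServiceAccount (kubectl >= 1.24)
kubectl -n kubernetes-dashboard create token NAME-user
```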
Add the token to your kubeconfig file. It should contain something like this:
apiVersion: v1
clusters:
- cluster:
    server: https://XXXX
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: YOUR USER
  name: kubernetes
current-context: "kubernetes"
kind: Config
preferences: {}
users:
- name: YOUR USER
  user:
    client-certificate-data: CODED
    client-key-data: CODED
    token: CODED # ---> ADD TOKEN HERE
Login

Access Kubernetes Dashboard on EC2 Remotely

I set up a K8s cluster in EC2 and launched the kubernetes dashboard by following these links:
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/wiki/Access-control
Here are commands I ran:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Since I allow only a few IPs in the security group for the EC2 instances, I assume only those IPs can access the dashboard, so security is not a concern here.
When I try to access the dashboard using:
http://<My_IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Now what's the easiest way to access the dashboard?
I noticed there are several related questions, but it seems none of them really answers it.
Thanks a lot.
P.S. I just noticed some errors in the Dashboard log. Is something wrong with the dashboard?
You can use a Service of type: LoadBalancer and set loadBalancerSourceRanges to limit access to your dashboard.
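A sketch of such a Service (the namespace and selector label follow the v1.10 recommended manifest; the CIDR is a placeholder for your allowed IPs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-lb
  namespace: kube-system
spec:
  type: LoadBalancer
  # only these source ranges may reach the load balancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # placeholder: your office/VPN CIDR
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      targetPort: 8443
```

This avoids running kubectl proxy with --address 0.0.0.0, which exposes an unauthenticated proxy to anyone who can reach the port.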
Have you created the ClusterRoleBinding for the kubernetes-dashboard ServiceAccount? If not, apply the YAML below so that the ServiceAccount gets the cluster-admin role and can access all Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
It depends on what ServiceAccount or User you are using to connect to the kube-apiserver. If you want access without digging into policy details, i.e. literally grant access to everything, your RBAC file can look similar to this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: my-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: <your-user-from-your-~/.kube/config>
Then apply it:
kubectl apply -f <filename>
Second approach:
kubectl create clusterrolebinding my-cluster-admin --clusterrole=cluster-admin --user=<your-user-from-your-~/.kube/config>
You can also use a Group or ServiceAccount in the subject instead of a User. See the official documentation on RBAC Authorization.
There is also a great tutorial I found if you want to take it step by step.

Kubernetes 403: Cannot patch pods in the namespace

While trying to deploy a pod that utilizes the go-micro framework I received the following error:
2018/12/27 23:04:51 K8s: request failed with code 403
2018/12/27 23:04:51 K8s: request failed with body:
2018/12/27 23:04:51 {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"user-5676b5696-jspp5\" is forbidden: User \"system:serviceaccount:default:default\" cannot patch pods in the namespace \"default\"","reason":"Forbidden","details":{"name":"user-5676b5696-jspp5","kind":"pods"},"code":403}
2018/12/27 23:04:51 K8s: error
It seems like go-micro doesn't have the necessary permissions to patch pods from within a pod.
The issue was resolved by creating a ClusterRoleBinding that grants the required permissions:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: micro-rbac
subjects:
  - kind: ServiceAccount
    # the ServiceAccount the pod runs as (from the error message)
    name: default
    # and its namespace
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
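Binding cluster-admin to the default ServiceAccount is a very big hammer. A narrower alternative, sketched here under the assumption that the pod only needs to read and patch pods in its own namespace (the Role name is made up), would be a namespaced Role:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: micro-pod-patcher   # hypothetical name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: micro-pod-patcher
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role
  name: micro-pod-patcher
  apiGroup: rbac.authorization.k8s.io
```

This grants exactly the verb the 403 complained about (patch on pods) without giving the workload full cluster access.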
