Hazelcast deployment on Kubernetes without Cluster Roles - spring-boot

I have a Spring Boot application with embedded Hazelcast that I am trying to deploy on a shared Kubernetes platform. I want to use the Kubernetes API strategy for auto-discovery. Can I do this without creating ClusterRoles and ClusterRoleBindings, and instead have just a Role and RoleBinding created under my namespace? If yes, what would the rbac.yaml look like?
I tried creating the following Role and RoleBinding, but no auto-discovery so far.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hazelcast-role
  namespace: dev
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - pods
  - nodes
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hazelcast-role-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hazelcast-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: dev
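For context, the Hazelcast member itself also has to be told to discover only within the namespace. A minimal sketch of such an embedded hazelcast.yaml, where hazelcast-service is a hypothetical service name selecting the application pods:
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        namespace: dev
        service-name: hazelcast-service   # hypothetical; may be omitted to discover by namespace only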

Related

Fleet Server In Elastic Error : elastic-agent-cluster-leader is forbidden

We are setting up a fleet server in Kubernetes.
It has been given a CA and states it's running, but we cannot shell into it, and the logs are nothing but the following:
E0817 09:12:10.074969 927 leaderelection.go:330] error retrieving
resource lock default/elastic-agent-cluster-leader:
leases.coordination.k8s.io "elastic-agent-cluster-leader" is
forbidden: User "system:serviceaccount:default:elastic-agent" cannot
get resource "leases" in API group "coordination.k8s.io" in the
namespace "default"
I can find very little information on this ever happening let alone a resolution. Any information pointing to a possible resolution would be massively helpful!
You need to make sure that you have applied the ServiceAccount, ClusterRoles and ClusterRoleBindings from the setup files.
An example of these can be found in the quickstart documentation.
https://www.elastic.co/guide/en/cloud-on-k8s/2.2/k8s-elastic-agent-fleet-quickstart.html
Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: default
Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - pods
  - nodes
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups: ["coordination.k8s.io"]
  resources:
  - leases
  verbs:
  - get
  - create
  - update
Cluster Role Binding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
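After applying these, you can check the lease permission without redeploying the agent by impersonating the ServiceAccount. A quick sanity check, assuming the ServiceAccount above:
kubectl auth can-i get leases.coordination.k8s.io \
  --as=system:serviceaccount:default:elastic-agent \
  --namespace default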

Externalize configuration using ConfigMap with Spring Boot Kubernetes

I'm trying to externalize my Spring Boot configuration using ConfigMaps in Kubernetes. I've read the docs and added the dependency to my pom.xml:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-fabric8-config</artifactId>
    <version>2.1.3</version>
</dependency>
I set my spring.application.name to webapp and created a ConfigMap from a YAML file:
spring:
  web:
    locale: en_US
    locale-resolver: fixed
Using this command:
kubectl create configmap webapp \
--namespace webapp-production \
--from-file=config.yaml
But when my application starts I get the following error:
Can't read configMap with name: [webapp] in namespace: [webapp-production]. Ignoring.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://IP/api/v1/namespaces/webapp-production/configmaps/webapp. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. configmaps "webapp" is forbidden: User "system:serviceaccount:webapp-production:default" cannot get resource "configmaps" in API group "" in the namespace "webapp-production".
I couldn't find any more info in the docs on how to configure access other than this:
You should check the security configuration section. To access config maps from inside a pod you need to have the correct Kubernetes service accounts, roles and role bindings.
How can I grant the required permissions?
Finally I got it solved by creating a specific ServiceAccount and setting spec.serviceAccountName in the deployment template:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-service-account
  namespace: webapp-production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: webapp-cluster-role
  namespace: webapp-production
# Grant access to configmaps for external configuration
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: webapp-cluster-role-binding
roleRef:
  kind: ClusterRole
  name: webapp-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: webapp-service-account
  namespace: webapp-production
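The Deployment then has to point at that ServiceAccount. The relevant fragment of the pod template would look roughly like this (container name and image are placeholders; the rest of the Deployment is omitted):
spec:
  template:
    spec:
      serviceAccountName: webapp-service-account
      containers:
      - name: webapp          # placeholder
        image: webapp:latest  # placeholder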

Kubernetes Dashboard - Internal error (500): Not enough data to create auth info structure

I have a Kubernetes cluster with ClusterRoles defined for my users, and their permissions are granted per namespace via RoleBindings.
I want these users to be able to access the Kubernetes Dashboard with custom permissions. However, when they try to log in using the kubeconfig option, they get this message:
"Internal error (500): Not enough data to create auth info structure."
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md -- This guide is only for creating ADMIN users, not users with custom permissions or without privileges...
Update SOLVED:
You have to do this:
Create ServiceAccount per user
apiVersion: v1
kind: ServiceAccount
metadata:
  name: NAME-user
  namespace: kubernetes-dashboard
Adapt the RoleBinding adding this SA
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: PUT YOUR CR HERE
  namespace: PUT YOUR NS HERE
subjects:
- kind: User
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
- kind: ServiceAccount
  name: NAME-user
  namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
Get the token:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/NAME-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
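Note: on Kubernetes 1.24+ a token Secret is no longer created automatically for a ServiceAccount, so the jsonpath lookup above may come back empty. In that case you can request a token directly (a sketch, assuming the same ServiceAccount name):
kubectl -n kubernetes-dashboard create token NAME-user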
Add the token to your kubeconfig file. Your kubeconfig should contain something like this:
apiVersion: v1
clusters:
- cluster:
    server: https://XXXX
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: YOUR USER
  name: kubernetes
current-context: "kubernetes"
kind: Config
preferences: {}
users:
- name: YOUR USER
  user:
    client-certificate-data: CODED
    client-key-data: CODED
    token: CODED ---> ADD TOKEN HERE
Login

Spring Boot application not using the GKE K8s service account, instead using the default service account

Problem statement:
I have deployed a Spring Boot app in GKE under a namespace.
When the app starts, it authenticates with the default GCE service account credentials.
What I did: I created a GKE (Kubernetes) service account, bound it to a Google service account with an IAM policy binding, and added the Workload Identity User role.
Then I annotated the GKE SA by executing the two commands below.
The issue is that my Spring Boot app still uses the default GCE SA credentials.
Can someone please help me resolve this?
I can see that serviceAccountName is changed to the new GKE K8s SA and the secret is also created and mounted, but the deployed app is not using this GKE SA.
Note: I am using a Helm chart for deployment.
gcloud iam service-accounts add-iam-policy-binding \
  --member serviceAccount:{projectID}.svc.id.goog[default/{k8sServiceAccount}] \
  --role roles/iam.workloadIdentityUser \
  {googleServiceAccount}

kubectl annotate serviceaccount \
  --namespace default \
  {k8sServiceAccount} \
  iam.gke.io/gcp-service-account={googleServiceAccount}
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: helloworld
    appVersion: {{ .Values.appVersion }}
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
        environment: {{ .Values.environment }}
    spec:
      serviceAccountName: {{ .Values.serviceAccountName }}
      containers:
      - name: helloworld
        image: {{ .Values.imageSha }}
        imagePullPolicy: Always
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 1000
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_CONFIG_LOCATION
          value: "/app/deployments/config/"
        volumeMounts:
        - name: application-config
          mountPath: "/app/deployments/config"
          readOnly: true
      volumes:
      - name: application-config
        configMap:
          name: {{ .Values.configMapName }}
          items:
          - key: application.properties
            path: application.properties
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been automatically set.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster [1]. The API permissions of the service account depend on the authorization plugin and policy [2] in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...
The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
[1] https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
[2] https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules
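One way to confirm whether Workload Identity is actually in effect is to run a throwaway pod with the annotated ServiceAccount and ask which identity the metadata server hands out. A sketch, assuming the SA from the commands above lives in the default namespace (the pod name is hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: workload-identity-test   # hypothetical name
  namespace: default
spec:
  serviceAccountName: {k8sServiceAccount}
  containers:
  - name: test
    image: google/cloud-sdk:slim
    command: ["sleep", "infinity"]
Then kubectl exec workload-identity-test -- gcloud auth list should show the Google service account the pod is mapped to rather than the default GCE identity.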

Access Kubernetes Dashboard on EC2 Remotely

I set up a K8s cluster on EC2 and launched the Kubernetes Dashboard by following these links:
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/wiki/Access-control
Here are commands I ran:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
As I added only a few IPs to the security group for the EC2 instances, I assume only those IPs can access the dashboard, so no worries about security here.
When I try to access the dashboard using:
http://<My_IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Now what's the easiest way to access the dashboard?
I noticed there are several related questions, but it seems no one has really answered them.
Thanks a lot.
P.S. I just noticed some errors in the Dashboard log. Is something wrong with how the dashboard is running?
You can use a Service with type: LoadBalancer and use loadBalancerSourceRanges to limit access to your dashboard.
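A sketch of such a Service (namespace, selector, and ports assume the v1.10 recommended dashboard deployment; the CIDR is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-lb
  namespace: kube-system
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24          # only this range may reach the dashboard
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443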
Have you created the ClusterRoleBinding for the kubernetes-dashboard ServiceAccount? If not, apply the YAML below so that the ServiceAccount gets the cluster-admin role and can access all Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
It depends on what ServiceAccount or User you are using to connect to the kube-apiserver. If you want access without digging into policy details, literally granting access to everything, your RBAC file can look similar to this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: my-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <your-user-from-your-~/.kube/config>
Then run the command:
kubectl apply -f <filename>
Second approach:
kubectl create clusterrolebinding my-cluster-admin --clusterrole=cluster-admin --user=<your-user-from-your-~/.kube/config>
You can also use a Group or ServiceAccount as the subject instead of a User. Look at the official documentation about RBAC Authorization here.
Also, what I found is a great tutorial if you want to take it step by step.
