While trying to deploy a pod that uses the go-micro framework, I received the following error:
2018/12/27 23:04:51 K8s: request failed with code 403
2018/12/27 23:04:51 K8s: request failed with body:
2018/12/27 23:04:51 {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"user-5676b5696-jspp5\" is forbidden: User \"system:serviceaccount:default:default\" cannot patch pods in the namespace \"default\"","reason":"Forbidden","details":{"name":"user-5676b5696-jspp5","kind":"pods"},"code":403}
2018/12/27 23:04:51 K8s: error
It seems that the default service account go-micro runs under does not have the necessary permissions to patch pods from within the pod.
The issue was resolved by creating a ClusterRoleBinding that grants the required permissions:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: micro-rbac
subjects:
- kind: ServiceAccount
  # The ServiceAccount the pod runs as (metadata.name)
  name: default
  # The ServiceAccount's namespace (metadata.namespace)
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
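To verify the binding took effect, you can impersonate the service account with kubectl (a quick check, assuming you have kubectl access to the cluster):
kubectl auth can-i patch pods \
  --as=system:serviceaccount:default:default \
  --namespace default
# should now print "yes"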
Related
I have a Spring Boot application with embedded Hazelcast that I am trying to deploy on a shared Kubernetes platform. I want to use the Kubernetes API strategy for auto-discovery. Can I do this without creating ClusterRoles and ClusterRoleBindings, and have just a Role and RoleBinding created under my namespace? If yes, what would the rbac.yaml look like?
I tried creating the following Role and RoleBinding, but no auto-discovery so far.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hazelcast-role
  namespace: dev
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - pods
  - nodes
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hazelcast-role-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hazelcast-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: dev
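For context, a namespaced Role/RoleBinding like the above is normally paired with the Kubernetes join settings in the Hazelcast configuration. The following is only a sketch, assuming a declarative hazelcast.yaml on the classpath (Hazelcast 4.x or later); hazelcast-service is a hypothetical name for the Service that selects the Hazelcast member pods:
hazelcast:
  network:
    join:
      multicast:
        enabled: false        # disable multicast so the Kubernetes strategy is used
      kubernetes:
        enabled: true
        namespace: dev        # the namespace the Role and RoleBinding above live in
        service-name: hazelcast-service   # hypothetical Service fronting the Hazelcast members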
We are setting up a Fleet Server in Kubernetes.
It has been given a CA and reports that it is running, but we cannot shell into it, and the logs contain nothing but the following:
E0817 09:12:10.074969 927 leaderelection.go:330] error retrieving
resource lock default/elastic-agent-cluster-leader:
leases.coordination.k8s.io "elastic-agent-cluster-leader" is
forbidden: User "system:serviceaccount:default:elastic-agent" cannot
get resource "leases" in API group "coordination.k8s.io" in the
namespace "default"
I can find very little information on this ever happening, let alone a resolution. Any information pointing to a possible resolution would be massively helpful!
You need to make sure that you have applied the ServiceAccount, ClusterRole and ClusterRoleBinding from the setup files.
An example of these can be found in the quickstart documentation.
https://www.elastic.co/guide/en/cloud-on-k8s/2.2/k8s-elastic-agent-fleet-quickstart.html
Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: default
Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - pods
  - nodes
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups: ["coordination.k8s.io"]
  resources:
  - leases
  verbs:
  - get
  - create
  - update
Cluster Role Binding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
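Once the ServiceAccount, ClusterRole and ClusterRoleBinding are applied, you can confirm the agent's permissions before restarting it. A quick check, assuming the agent runs in the default namespace as in the error above:
kubectl auth can-i get leases.coordination.k8s.io \
  --as=system:serviceaccount:default:elastic-agent \
  --namespace default
# should print "yes" once the ClusterRoleBinding is in place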
I'm trying to externalize my Spring Boot configuration using ConfigMaps in Kubernetes. I've read the docs and added the dependency to my pom.xml:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes-fabric8-config</artifactId>
  <version>2.1.3</version>
</dependency>
I set my spring.application.name to webapp and created a ConfigMap from a YAML file:
spring:
  web:
    locale: en_US
    locale-resolver: fixed
Using this command:
kubectl create configmap webapp \
--namespace webapp-production \
--from-file=config.yaml
But when my application starts I get the following error:
Can't read configMap with name: [webapp] in namespace: [webapp-production]. Ignoring.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://IP/api/v1/namespaces/webapp-production/configmaps/webapp. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. configmaps "webapp" is forbidden: User "system:serviceaccount:webapp-production:default" cannot get resource "configmaps" in API group "" in the namespace "webapp-production".
I couldn't find any more info in the docs on how to configure access other than this:
You should check the security configuration section. To access config maps from inside a pod you need to have the correct Kubernetes service accounts, roles and role bindings.
How can I grant the required permissions?
Finally I solved it by creating a specific ServiceAccount and setting spec.serviceAccountName in the Deployment's pod template (see the sketch after the manifests):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-service-account
  namespace: webapp-production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # no namespace here: ClusterRoles are cluster-scoped
  name: webapp-cluster-role
# Grant access to configmaps for external configuration
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: webapp-cluster-role-binding
roleRef:
  kind: ClusterRole
  name: webapp-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: webapp-service-account
  namespace: webapp-production
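For completeness, this is roughly how the new ServiceAccount is referenced from the Deployment. Only a sketch: the labels and image are placeholders, and the serviceAccountName line is the part that matters:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: webapp-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      serviceAccountName: webapp-service-account   # run the pod as the ServiceAccount bound above
      containers:
      - name: webapp
        image: webapp:latest   # placeholder image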
I have a Kubernetes cluster with ClusterRoles defined for my users and permissions granted per namespace via RoleBindings.
I want these users to be able to access the Kubernetes Dashboard with custom permissions. However, when they try to log in using the kubeconfig option, they get this message:
"Internal error (500): Not enough data to create auth info structure."
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md -- This guide only covers creating ADMIN users, not users with custom or limited permissions.
Update: SOLVED.
You have to do the following.
Create a ServiceAccount per user:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: NAME-user
  namespace: kubernetes-dashboard
Adapt the RoleBinding by adding this ServiceAccount as a subject:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: PUT YOUR CR HERE
  namespace: PUT YOUR NS HERE
subjects:
- kind: User
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
- kind: ServiceAccount
  name: NAME-user
  namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
Get the token:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/NAME-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
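Note that on Kubernetes v1.24 and later a token Secret is no longer created automatically for a ServiceAccount, so the command above may return nothing. In that case you can request a token directly (assuming kubectl v1.24 or later):
kubectl -n kubernetes-dashboard create token NAME-user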
Add the token to your kubeconfig file. It should then contain something like this:
apiVersion: v1
clusters:
- cluster:
    server: https://XXXX
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: YOUR USER
  name: kubernetes
current-context: "kubernetes"
kind: Config
preferences: {}
users:
- name: YOUR USER
  user:
    client-certificate-data: CODED
    client-key-data: CODED
    token: CODED  # ---> ADD YOUR TOKEN HERE
Login
When doing helm install -f values.yaml xxx-xxx-Agent xxxx-repo/xxx-agent --namespace xxxxx-dev
I get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: secrets "azpsecretxxx" is forbidden: User "xxxxxxxxxxxx#xxxxx.com" cannot get resource "secrets" in API group "" in the namespace "xxxxxx-dev"
PS: I have access to my namespace. I have searched various forums but was not able to make sense of this, which is how I landed here. I am new to AKS and Helm. Can anyone please share your insights? Thanks in advance.
The error is not related to Helm but to Kubernetes directly, and it is telling you that you do not have permission to manipulate Secrets in the namespace you are in.
What role do you have?
For example, if you are not "root" in the cluster or the namespace, someone needs to grant you permission by creating a ClusterRole and binding you to it:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-writer
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing Secret
  # objects is "secrets"
  resources: ["secrets"]
  verbs: ["get", "watch", "list", "update", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This RoleBinding grants the secret-writer ClusterRole to YOUR_USER
# within YOUR_NAMESPACE only.
kind: RoleBinding
metadata:
  name: write-secrets
  namespace: YOUR_NAMESPACE
subjects:
- kind: User
  name: YOUR_USER # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-writer
  apiGroup: rbac.authorization.k8s.io
Or just ask to be made cluster-admin :D
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: YOUR_USER_NAME
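Either way, once the binding is applied you can confirm your own access before re-running the helm install (the namespace is the one from the error message):
kubectl auth can-i get secrets --namespace xxxxxx-dev
# prints "yes" once your user has the permission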
More details and examples here:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
By the way, if this is AKS, have you tried using the --admin option?
Like this:
az aks get-credentials --resource-group resource_group --name cluster_name --admin
If you have the necessary Azure IAM rights, this will put you into admin mode automatically and give you full rights on the entire cluster.