client-go get configMap issue - go

I am trying to create a simple deployment on Kubernetes using client-go. Following the example, I build the in-cluster config for the client, and I have created a Role and a RoleBinding for the deployment's ServiceAccount so it can get, list and watch ConfigMap objects.
When I call Client.Get(), I receive:
Get "https://10.96.0.1:443/api/v1/namespaces/default/configmaps": Access Denied
However, when I exec into the pod and run
curl https://10.96.0.1:443/api/v1/namespaces/default/configmaps
with the token mounted at /var/run/secrets/..../token, I am able to get the ConfigMaps.
Any idea what is wrong?
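For context, the in-cluster setup described above looks roughly like this (a minimal sketch, not the exact code; the default namespace and the List call are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build the in-cluster config from the ServiceAccount token and CA
	// mounted at /var/run/secrets/kubernetes.io/serviceaccount.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List ConfigMaps in the "default" namespace; this hits the same
	// endpoint shown in the error message above.
	cms, err := clientset.CoreV1().ConfigMaps("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, cm := range cms.Items {
		fmt.Println(cm.Name)
	}
}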
I have also created the corresponding Role rules and RoleBinding:
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: got-dynamic-cm-reader-Role
subjects:
- kind: ServiceAccount
  name: got
  namespace: default

In the example you mentioned, the service account that gets created only has a cluster viewer role, which is insufficient if you are trying to create resources.
Try binding an admin role to the service account to confirm the problem is RBAC. For production purposes, though, you should take a more granular approach to the permissions you grant the service account.
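Before touching the bindings, you can also ask the API server what the Pod's ServiceAccount is actually allowed to do, from inside the Pod itself. A minimal sketch using a SelfSubjectAccessReview (the namespace and resource here are assumptions, adjust to your case); it is the programmatic equivalent of kubectl auth can-i get configmaps -n default:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask the API server: "can I (the mounted ServiceAccount) get configmaps
	// in the default namespace?"
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "default",
				Verb:      "get",
				Group:     "", // core API group
				Resource:  "configmaps",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}

If this reports allowed=false, the RoleBinding is probably not matching the ServiceAccount the Pod actually runs as (check the Pod's spec.serviceAccountName and the subjects in the binding); if it reports true, the problem is more likely on the client side than in RBAC.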

Related

Hazelcast deployment on Kubernetes without Cluster Roles

I have a Spring Boot application with embedded Hazelcast that I am trying to deploy on a shared Kubernetes platform. I want to use the Kubernetes API strategy for auto discovery. Can I do this without creating ClusterRoles and ClusterRoleBindings, and instead have just a Role and RoleBinding created under my namespace? If yes, what would the rbac.yaml look like?
I tried creating the following Role and RoleBinding, but there is no auto discovery so far.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hazelcast-role
  namespace: dev
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - pods
  - nodes
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hazelcast-role-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hazelcast-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: dev

Getting Kubernetes manifest error when installing using Helm command [closed]

When running
helm install -f values.yaml xxx-xxx-Agent xxxx-repo/xxx-agent --namespace xxxxx-dev
I get the error below:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: secrets "azpsecretxxx" is forbidden: User "xxxxxxxxxxxx#xxxxx.com" cannot get resource "secrets" in API group "" in the namespace "xxxxxx-dev"
PS: I have access to my namespace. I have googled various forums but could not make sense of this error and landed here. I am new to AKS and Helm. Can anyone please share your insights? Thanks in advance.
The error is not related to Helm but to Kubernetes directly: it is telling you that you do not have permission to manipulate Secrets in the namespace you are working in.
What role do you have?
If you are not "root" in the cluster or the namespace, someone has to grant you permission, for example by creating a ClusterRole and binding you to it:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-writer
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing Secret
  # objects is "secrets"
  resources: ["secrets"]
  verbs: ["get", "watch", "list", "update", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This role binding grants the secret-writer ClusterRole to YOUR_USER,
# limited to YOUR_NAMESPACE.
kind: RoleBinding
metadata:
  name: write-secrets
  namespace: YOUR_NAMESPACE
subjects:
- kind: User
  name: YOUR_USER # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-writer
  apiGroup: rbac.authorization.k8s.io
Or just ask to be ClusterAdmin :D
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: YOUR_USER_NAME
More details and examples here:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
By the way, if this is an AKS cluster, have you tried the --admin option?
Like this:
az aks get-credentials --resource-group resource_group --name cluster_name --admin
If you have the required Azure IAM rights, this will put you in admin mode automatically and give you full rights on the entire cluster.

Spring config server unable to fetch properties from codecommit

I'm running into an issue using Kubernetes service accounts to grant access to a CodeCommit repository for a Spring config server.
When AWSCodeCommitReadOnly is granted to the EKS cluster's worker-node role, the config server is able to fetch the properties successfully; however, replicating this using service accounts causes the config server to throw the following error:
Cannot clone or checkout repository: https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/config-server-properties
A separate IAM role has been created with the CodeCommit policy, and this is being attached to a service account with the annotation:
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::accountnum:role/test-pod-iam-permissions
The IAM role has a trusted entity for the EKS cluster and the following condition:
system:serviceaccount:namespace:test-pod-iam-permissions
We've also created a ClusterRole which should have access to all verbs/resources:
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *          []                 []              [*]
and an associated binding:
Name:         iam-permissions-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  iam-permissions
Subjects:
  Kind            Name                      Namespace
  ----            ----                      ---------
  ServiceAccount  test-pod-iam-permissions  namespace
Following this documentation https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html we seem to have ticked all the boxes, so we're not sure what could be missing.
The deployment has the service account added, and when we exec into the pod it shows the IAM role ARN:
$ kubectl exec -n namespace config-service-pod env | grep AWS
AWS_DEFAULT_REGION=eu-west-1
AWS_ROLE_ARN=arn:aws:iam::accountnum:role/test-pod-iam-permissions
AWS_REGION=eu-west-1
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
Could a separate serviceaccount be overriding the permissions we're trying to grant here?
We've updated the config server's pom to use 1.11.623 for aws-java-sdk-core and added a dependency on aws-java-sdk-sts.

Access Kubernetes Dashboard on EC2 Remotely

I set up a K8s cluster on EC2 and launched the Kubernetes dashboard by following these links:
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/wiki/Access-control
Here are the commands I ran:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
As I have only allowed a few IPs in the security group for the EC2 instances, I assume only those IPs can access the dashboard, so security is not a concern here.
When I try to access the dashboard using:
http://<My_IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
it does not work. What's the easiest way to access the dashboard?
I noticed there are several related questions, but it seems no one has really answered them.
Thanks a lot.
P.S. I just noticed some errors in the dashboard log. Is something wrong with the dashboard itself?
You can use a Service with type: LoadBalancer and use loadBalancerSourceRanges to limit access to your dashboard.
Have you created the ClusterRoleBinding for the kubernetes-dashboard ServiceAccount? If not, apply the YAML below, so that the ServiceAccount gets the cluster-admin role and can access all Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
It depends on what ServiceAccount or User you are using to connect to the kube-apiserver. If you want access without looking into the details of the policies, literally granting access to everything, your RBAC file can look similar to this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: my-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <your-user-from-your-~/.kube/config>
Then apply it:
kubectl apply -f <filename>
Second approach:
kubectl create clusterrolebinding my-cluster-admin --clusterrole=cluster-admin --user=<your-user-from-your-~/.kube/config>
You can also use a Group or ServiceAccount instead of a User in the subjects field. See the official documentation on RBAC Authorization.
There are also good step-by-step tutorials if you want to take it one step at a time.

Fail to connect to kubectl from client-go - /serviceaccount/token: no such file

I am using the golang library client-go to connect to a locally running Kubernetes cluster. To start with, I took code from the example: out-of-cluster-client-configuration.
Running the code like this:
$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go
results in the following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
I am not quite sure which part of the configuration I am missing. I've researched the following links:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
But with no luck.
I guess I need to either let client-go know which token/ServiceAccount to use, or configure kubectl in a way that everyone can connect to its API.
Here's the status of my kubectl setup, shown through the results of some commands:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get serviceAccounts
NAME        SECRETS   AGE
default     1         3d
test-user   1         1d
$ kubectl describe serviceaccount test-user
Name:                test-user
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   test-user-token-hxcsk
Tokens:              test-user-token-hxcsk
Events:              <none>
$ kubectl get secret test-user-token-hxcsk -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: test-user
    kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
  creationTimestamp: 2018-06-09T10:55:17Z
  name: test-user-token-hxcsk
  namespace: default
  resourceVersion: "110618"
  selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
  uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
This answer may be a little outdated, but I will try to give more perspective and a baseline for future readers who encounter the same or a similar problem.
TL;DR
The following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
is most likely connected with the lack of a token at the /var/run/secrets/kubernetes.io/serviceaccount location when using the in-cluster-client-configuration. It can also be caused by using in-cluster-client-configuration code outside of the cluster (for example, running the code directly on a laptop or in a plain Docker container).
You can use the following commands to troubleshoot the issue further (assuming this code is running inside a Pod):
$ kubectl get serviceaccount X -o yaml
look for: automountServiceAccountToken: false
$ kubectl describe pod XYZ
look for: containers.mounts and volumeMounts where the Secret is mounted
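You can also do the same check from inside the application itself; a minimal sketch that only verifies whether the token file is present (the path is the standard ServiceAccount mount point, and the check is purely illustrative):

package main

import (
	"fmt"
	"os"
)

const tokenPath = "/var/run/secrets/kubernetes.io/serviceaccount/token"

func main() {
	// If this fails inside the Pod, the ServiceAccount token was not mounted
	// (or the code is not actually running in a Pod), which is exactly the
	// condition that makes rest.InClusterConfig() fail with the panic above.
	data, err := os.ReadFile(tokenPath)
	if err != nil {
		fmt.Println("token not available:", err)
		return
	}
	fmt.Printf("token is mounted (%d bytes)\n", len(data))
}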
Citing the official documentation:
Authenticating inside the cluster
This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running inside the Kubernetes cluster.
client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.
-- Github.com: Kubernetes: client-go: Examples: in cluster client configuration
If you are authenticating to the Kubernetes API with ~/.kube/config you should be using the out-of-cluster-client-configuration.
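For reference, a minimal sketch of the out-of-cluster variant, which builds the rest.Config from a kubeconfig file instead of the mounted ServiceAccount token (the kubeconfig path and the namespace used for the sanity check are assumptions, adjust as needed):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use the same credentials as kubectl by loading ~/.kube/config.
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	kubeconfig := filepath.Join(home, ".kube", "config")

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Quick sanity check: list pods in the default namespace.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d pods\n", len(pods.Items))
}

With this variant there is no dependency on the in-cluster token mount, so the panic about the missing serviceaccount token does not apply.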
Additional information:
I've added some additional information for reference, to help with further troubleshooting when the code is run inside of a Pod.
automountServiceAccountToken: false
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: go-serviceaccount
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: sdk
spec:
  serviceAccountName: go-serviceaccount
  automountServiceAccountToken: false
-- Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account
$ kubectl describe pod XYZ:
When the ServiceAccount token is mounted, the Pod definition should look like this:
<-- OMITTED -->
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
  go-serviceaccount-token-4rst8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  go-serviceaccount-token-4rst8
    Optional:    false
If it's not:
<-- OMITTED -->
    Mounts:  <none>
<-- OMITTED -->
Volumes:     <none>
Additional resources:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication
Just to make it clear, in case it helps you further debug it: the problem has nothing to do with Go or your code, and everything to do with the Kubernetes node not being able to get a token from the Kubernetes master.
In kubectl config view, clusters.cluster.server should probably point at an IP address that the node can reach.
It needs to access the CA, i.e., the master, in order to provide that token, and I'm guessing it fails to do so for that reason.
kubectl describe pod <your_pod_name> would probably tell you what the problem was in acquiring the token.
Since you assumed the problem was Go/your code and focused on that, you did not provide much information about your Kubernetes setup, which makes it more difficult for me to give you a better answer than my guess above ;-)
But I hope it helps!
