I am using the cloud.google.com/go SDK to programmatically provision GKE clusters with the required configuration.
I set ClientCertificateConfig.IssueClientCertificate = true (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#ClientCertificateConfig).
After the cluster is provisioned, I use the cluster CA certificate, client certificate, and client key returned for that cluster (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#MasterAuth). With these three attributes I then generate the kubeconfig for this cluster (to be used later by helm).
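For reference, the retrieval step looks roughly like the sketch below (error handling trimmed; the project, location and cluster names are placeholders):

// Sketch: fetch the cluster endpoint and MasterAuth material after provisioning.
// Project, location and cluster names are placeholders.
package main

import (
	"context"
	"fmt"
	"log"

	container "cloud.google.com/go/container/apiv1"
	containerpb "google.golang.org/genproto/googleapis/container/v1"
)

func main() {
	ctx := context.Background()

	client, err := container.NewClusterManagerClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	cluster, err := client.GetCluster(ctx, &containerpb.GetClusterRequest{
		Name: "projects/my-project/locations/us-central1/clusters/my-cluster",
	})
	if err != nil {
		log.Fatal(err)
	}

	// The three values used to build the kubeconfig below; they come back base64-encoded.
	fmt.Println(cluster.Endpoint)                        // -> server: https://X.X.X.X
	fmt.Println(cluster.MasterAuth.ClusterCaCertificate) // -> certificate-authority-data
	fmt.Println(cluster.MasterAuth.ClientCertificate)    // -> client-certificate-data
	fmt.Println(cluster.MasterAuth.ClientKey)            // -> client-key-data
}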
Roughly, my kubeconfig looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64_encoded_data>
    server: https://X.X.X.X
  name: gke_<project>_<location>_<name>
contexts:
- context:
    cluster: gke_<project>_<location>_<name>
    user: gke_<project>_<location>_<name>
  name: gke_<project>_<location>_<name>
current-context: gke_<project>_<location>_<name>
kind: Config
preferences: {}
users:
- name: gke_<project>_<location>_<name>
  user:
    client-certificate-data: <base64_encoded_data>
    client-key-data: <base64_encoded_data>
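For what it's worth, rather than templating this YAML by hand, the same file can be written with client-go's clientcmd helpers. A minimal sketch, assuming client-go is an acceptable dependency here; note that the MasterAuth values come back base64-encoded while the *Data fields expect raw PEM bytes:

// Sketch: assemble the kubeconfig above with client-go instead of templating YAML.
// The arguments are the values taken from Cluster.Endpoint and Cluster.MasterAuth.
package kubecfg

import (
	"encoding/base64"
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func WriteKubeconfig(path, name, endpoint, caCert, clientCert, clientKey string) error {
	// MasterAuth returns base64-encoded PEM; the *Data fields expect the raw PEM bytes.
	ca, err := base64.StdEncoding.DecodeString(caCert)
	if err != nil {
		return err
	}
	cert, err := base64.StdEncoding.DecodeString(clientCert)
	if err != nil {
		return err
	}
	key, err := base64.StdEncoding.DecodeString(clientKey)
	if err != nil {
		return err
	}

	cfg := clientcmdapi.NewConfig()
	cfg.Clusters[name] = &clientcmdapi.Cluster{
		Server:                   fmt.Sprintf("https://%s", endpoint),
		CertificateAuthorityData: ca,
	}
	cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{
		ClientCertificateData: cert,
		ClientKeyData:         key,
	}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name

	return clientcmd.WriteToFile(*cfg, path)
}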
On running kubectl get nodes with this kubeconfig, I get the error:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list resource "serviceaccounts" in API group "" at the cluster scope
Interestingly, if I use the config generated by gcloud, the only difference is in the user section:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
This configuration seems to work just fine. But as soon as I add client cert and client key data to it, it breaks:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
  client-certificate-data: <base64_encoded_data>
  client-key-data: <base64_encoded_data>
I believe I'm missing some details related to RBAC, but I'm not sure what. Could you give me some pointers here?
Also, referring to this question, I've tried to rely only on a username/password combination first, using that to apply a new ClusterRoleBinding in the cluster. But I'm unable to use just the username/password approach; I get the following error:
error: You must be logged in to the server (Unauthorized)
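For completeness, the ClusterRoleBinding I am ultimately trying to apply (using the working gcloud-generated kubeconfig) looks roughly like the client-go sketch below. The subject name "client" is taken from the Forbidden error above; the binding name, role and kubeconfig path are placeholders:

// Sketch: grant RBAC permissions to the certificate user, connecting with the
// kubeconfig generated by gcloud (which does work). "client" is the user name
// from the Forbidden error; the binding name, role and path are placeholders.
package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	// Registers auth provider plugins (e.g. gcp) used by the gcloud kubeconfig.
	_ "k8s.io/client-go/plugin/pkg/client/auth"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/gcloud-kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "client-cert-admin"},
		Subjects: []rbacv1.Subject{
			{Kind: "User", APIGroup: "rbac.authorization.k8s.io", Name: "client"},
		},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin", // a narrower role would be preferable
		},
	}
	_, err = cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
}

Binding to cluster-admin is just the simplest thing that makes the certificate usable; a narrower role would be better in practice.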
Related
I am trying to create a CLI tool for Kubernetes. I need to generate a bearer token for communicating with the Kubernetes API. How can I generate the token from a kubeconfig file? I do not want to use an external library or kubectl.
Here is an example kubeconfig file:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01USXhNREU1TVRReU0xb1hEVE13TVRJd09ERTVNVFF5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUE4CmhEcDBvRVUzNTFFTEVPTzZxd3dUQkZ2U2ZxWWlGOE0yR0VZMXNLRFZ0MUNyL3czOS83QkhCYi9NOE5XaW9vdlQKZ2hsZlN2TXhsaTBRUVRSQmd5NHp2ZkNveXdBWkg0dWphYTBIcW43R2tkbUdVUC94RlZoWEIveGhmdHY5RUFBNwpMSW1CT3dZVHJ6ajRtS0JxZ3RTenhmVm5hN2J2U2oxV203bElYaTNaSkZzQmloSFlwaXIwdFZEelMzSGtEK2d0Cno1RkhOU0dnSS9MTlczOWloTU1RQ0g0ZFhtQVVueGFmdFdwUlRQOXFvSHJDWTZxVlNlbEVBYm40UWZVZ2ZUaDEKMUNhdW01bllOUjlDZ3lPOStNY0hXMTdYV0c4NGdGV3p6VUxPczVXbUo0VVY4RjdpdkVhMVJlM2Q3VkpKUEF6VwpCME4rWFFmcXg5UTArRWlXWklVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZBV0p0Y2RLYjRRQWU2ekw4NzdvN3FQNVVWNWZNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCYWt3bE1LL2VmWUpyNlVlWEpkenBURmRaS0lYbWFEaWxhZ3ZNOGNkci9nVjJlWVlEdgpRY3FvcUwvNS95U3Y1T2ZpR0MrU25nUXhZMHp0a0VVQm04N1NOR1dYLzd1VlUwbytVV2tzZERLR2JhRmxIVE9PCmFBR3dndEZ4T1YzeTF1WnZJVm8vbW12WTNIMTBSd29uUE8yMU5HMEtRWkRFSStjRXFFb1JoeDFtaERCeGVSMUgKZzdmblBJWTFUczhWM2w0SFpGZ015anpwVWtHeUNjMVYxTDk5Vk55UHJISEg0L1FibVM5UWdkNUNWZXNlRm9HaApOVkQ4ZHRjUmpWM2tGYVVJelJ6a3lRMG1FMXk1RXRXMWVZZnF4QnAxNUN3NnlSenNWMzcrdlNab0pSS1FoNGw4CjB1b084cFhCMGQ4V1hMNml0UWp2ZjJOQnBnOU1nY0Q2QzEvZgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.1.18:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJYldUcHpDV25zTVl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURFeU1UQXhPVEUwTWpOYUZ3MHlNVEV5TVRBeE9URTBNalZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBGT09JcnZiTGd1SWJmVXUKd29BaG5SaktEQVFCdkp3TlliSWZkSlNGSFBhY1ljbmVUcUVSVXFZeEs4azFHRytoR0FDTlFPb2VNV3Q1anNjRwpuN0FFdHhscUJQUzNQMzBpMVhLSmZnY2Q1OXBxaG1kOVFIdFNOVTlUTVlaM2dtY0x4RGl1cXZFRGI0Q042UTl6CkI3Yk5iUDE4Y3pZdHVwbUJrY2plMFF1ZEd2dktHcWhaY1NkVFZMT3ErcTE0akM4TTM5UmgzdDk1ZEM2aWRYaUsKbWE3WGs5YnJtalJnWDZRVUJJc0xwTnkvc3dJaUFiUTlXSm1YL2VkdHhYTGpENllNK1JzQ0JkbGc5MEhhcURqdgpKSlcwQ2g4cDJkV1ZwalQrWjBMd2ZnUENBN1YzS1o4YWdldHhwQ0xQcmxlOTdnRStVM1BKbXJVY0lBaVJlbzFoCmsvOXVqUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVCWW0xeDBwdmhBQjdyTXZ6dnVqdW8vbFJYbDh3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFDeXVKazdjdVppdzhmQW5teUdSa0trdFAzUE5LUnBCdDVnUVdjUzJuRFUrTmpIMjh1MmpGUDQ5Cm1xbjY1SGpmQU9iOVREUUlRcUtZaWdjYTViOXFYRXlDWHZEN1k1SXJ4RmN3VnEvekdZenFYWjVkR0srUnlBUlQKdm0rQzNaTDV0N2hJc1RIYWJ0SkhTYzhBeFFPWEdTd1h0YkJvdHczd2ZuSXB0alY1SG1VYjNmeG9KQUU4S1hpTgpHcXZ5alhpZHUwc1RtckszOHM5ZjZzTFdyN1lOQTlKNEh4ditkNk15ZFpSWDhjS3VRaFQzNDFRcTVEVnRCT1BoCjBpb1Mwa0JEUDF1UWlIK0tuUE9MUmtnYXAyeDhjMkZzcFVEY1hJQlBHUDBPR1VGNWFMNnhIa2NsZ0Q5eHFkU0cKMVlGVjJUamtjNHN2U1hMSkt1cmU1S2IrODcyQlZWWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMEZPT0lydmJMZ3VJYmZVdXdvQWhuUmpLREFRQnZKd05ZYklmZEpTRkhQYWNZY25lClRxRVJVcVl4SzhrMUdHK2hHQUNOUU9vZU1XdDVqc2NHbjdBRXR4bHFCUFMzUDMwaTFYS0pmZ2NkNTlwcWhtZDkKUUh0U05VOVRNWVozZ21jTHhEaXVxdkVEYjRDTjZROXpCN2JOYlAxOGN6WXR1cG1Ca2NqZTBRdWRHdnZLR3FoWgpjU2RUVkxPcStxMTRqQzhNMzlSaDN0OTVkQzZpZFhpS21hN1hrOWJybWpSZ1g2UVVCSXNMcE55L3N3SWlBYlE5CldKbVgvZWR0eFhMakQ2WU0rUnNDQmRsZzkwSGFxRGp2SkpXMENoOHAyZFdWcGpUK1owTHdmZ1BDQTdWM0taOGEKZ2V0eHBDTFBybGU5N2dFK1UzUEptclVjSUFpUmVvMWhrLzl1alFJREFRQUJBb0lCQUEvclVxRTAyYnJiQnNIZwpTb0p5YUI4cEZjZDFSdXl5d0JNSEdZQS9HU3p0YTJYTmx6OUs3NWZ4T3pDdFgzRk9sbkRQR2Z3cjU4Sy9BN3IxCldudzVaeUxXdmxOQ24vNHFBYzl0d1RQd04walFWL09OVlBUb2Q0KzdVQkFveGxrZ3ByV0gzMUVRdWNKN2dGeWUKNFp0bFRLMVhjWHNjV01JNW1MMGJMR3V0QjRSWU5meHAwZ1AxekJ6Z2FLYjVGK2xVcFdHZ2w1dHNHay9ncm9uSwpUVkVCQmtBT0lyU0pFemc5YUJ2emJMS0h3TnZlL1QrVEdJTGVZalpRYVkxL1lLN2JpbFVkaFlQOGI2OWhxbFZnClVxc0hpRjVXNzYzenMrdXl5azNtUU1yblJKQ2ZUWDNTRWhOVm1BdTl0TXh2eE1BRk9QT1lLb3FPb25LNHdrZWwKU21HUHBnRUNnWUVBNjJhMjdWdlgrMVRlellIWmZWSW8rSi8welVmZERqZ0MvWG1zTlBWdkhXdXlkOUVRQ1JXKwpOS1FpOGdMWmNUSEpWU3RidkpRVENSQUdCL0wzM09SUTI5Tm1KNnVVUWNNR0pBSzhwckdLKytwTXF3NHRPdzMvCkhDblVQZGVaSGFVVVFnODVJeWMrbmg5QnFQWndXclk3REZEbENUOXI5cVZJN1RvS0ptd2RjdlVDZ1lFQTRvNVUKZDZXdFpjUk5vV041UUorZVJkSDRkb2daQnRjQ0ExTGNWRDdxUzYrd0s2eTdRU05zem9wWTc1MnlLWU91N2FCWQo2RlhDQVRHaG0ranN6ZE14allrV2ROdGxwbDZ4ejZRZmN6ZWgydjVUQVdpRkZyMTlqU1RkLzNrRlFDNytpeUQyCnZRSHpacXZZSUhtQ3VleldHRFJrVVB2dzk1dTFranphcEZCRHZqa0NnWUJXZUpLMXVra3FiOUN3V1FTVmZuckMKYWErNVFLNjVMR1ljeW5jeHRQNnVKZ09XODlzYUd6eVZoYjI0Zk1kM1J6eVg1cWQ2TEVLWno2TUhoSDc4UzNwUQpaZVZlcVM1NndiTWR3MHVkU0JhdjF5OTJubXlMQnVjeFowUXB1MnJwY3R4d0w3dGphR1VlSElrNEVkN1AwNlQ1Ckx6WVRJWkw5TlZZR25vMWY4OU1WaVFLQmdRQ2RKQjNnYzNGSEloYTZkM1cxNWtEd3FzZ001eTk4dUF0MFpMZmcKVTFkTnNnbWU4WXRjamdhOVorWnlKVTViVHpRNUxEd2V3c1R5OFFyb1NuSmQvVHZrc1E1N2RXWVhOSjFlcWJjSwp3cTZvYURrSXhBZDBFM0VQUW1BZEFFTXRGcXVGc3hLUlhOWUlBKysvN3FoRzc4ZzhON0xSSFQ4eGI3Wk1QWnRsCjF5cDF1UUtCZ0VGemtmR3VzeGxJU2xOY1VDUGFTUUx6bTZqYmdjdUlXcjZaN053R01pVHM3b2x5TnQrdnpiRnMKbnk5d1pnbHlsS0M2NjcreXpIa0tkbnZBdWRuS290bDhybzRCOVhjUHhGWDJ5NnpwZWIxWS91STZpVzl4Y2NSNQozbUlVS2QrOGdMczRrTUttL2dXYjZxTHdPZ3pjQWJIbTV6SVhBMXQ5TUJWYlE2ZHEvMlZDCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
I need to generate Bearer Token for communicating with kubernetes API
You cannot "generate" these tokens. They are issued by the control plane and signed with the private key that the control plane holds. It would be a security hole if you could generate these on the client side.
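If you really need a token, you have to ask the control plane to issue one, for example via the TokenRequest API. A hedged sketch with client-go follows (namespace, service account name and kubeconfig path are placeholders, and it does rely on a library, which the question wanted to avoid):

// Sketch: ask the control plane to issue a token via the TokenRequest API.
// Namespace, service account name and kubeconfig path are placeholders.
package main

import (
	"context"
	"fmt"
	"log"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	expiry := int64(3600) // 1 hour
	tr, err := cs.CoreV1().ServiceAccounts("default").CreateToken(
		context.Background(),
		"default", // service account the token is issued for
		&authenticationv1.TokenRequest{
			Spec: authenticationv1.TokenRequestSpec{ExpirationSeconds: &expiry},
		},
		metav1.CreateOptions{},
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tr.Status.Token) // bearer token signed by the control plane
}

Also note that the kubeconfig shown above already contains client-certificate-data / client-key-data, so requests can authenticate with that client certificate directly instead of a bearer token.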
When I pass an SSH key, retrieved from a secret in OpenShift, to the Apache Camel SFTP component, it is not able to connect to the server; whereas if I directly pass the path of the actual SSH key file to the same component, without creating a secret, it works just fine. The exception is "invalid key". I tried reading the key file in Java and passing it as a byte array via the privateKey parameter, but no luck. Passing the key as bytes does not seem to work by any means I have tried.
SFTP-COMPONENT Properties->
sftp:
  host: my.sftp.server
  port: 22
  fileDirectory: /to
  fileName: /app/home/file.txt
  username: sftp-user
  privateKeyFilePath: /var/run/secret/secret-volume/ssh-privatekey  # also tried the privateKey param with a byte array
  knownHostsFile: resource:classpath:keys/known_hosts
  binary: true
Application details:
I am using OpenShift 3.11.
I am developing Camel Spring Boot micro-integration services configured with the fabric8 and spring-cloud-kubernetes plugins for deployment.
I am creating the secret as follows:
oc secrets new-sshauth sshsecret --ssh-privatekey=$HOME/.ssh/id_rsa
I have tried to reference the secret from both deployment.yml and bootstrap.yml:
Using it as an env variable with secretKeyRef->
deployment.yml->
- name: SSH_SECRET
  valueFrom:
    secretKeyRef:
      name: sshsecret
      key: ssh-privatekey
bootstrap.yml->
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        enableApi: true
        name: sshsecret
Using it as a mounted volume->
deployment.yml->
volumeMounts:
- mountPath: /var/run/secret/secret-volume
  name: secret-volume
volumes:
- name: secret-volume
  secret:
    secretName: sshsecret
bootstrap.yml->
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        paths: /var/run/secret/secret-volume
Note: once the service is deployed I can see that the mounted volume is attached to the container; I can even bash into the pod, go to that directory and locate the private key, which is completely intact.
Any help would be appreciated. Ask me any questions you need answered to solve this.
It was a very bad mistake on my side. I was using privateKeyUri in the Camel SFTP component instead of privateKeyFile. I hadn't noticed this because I was always changing those SFTP parameters directly in the config-map.
By the way, for those trying to implement a similar use case: use the second option, i.e. mount the secret into a volume and then reference the volume path inside Camel. Don't use the secret as an env variable; that way you don't need to enable the secrets API inside bootstrap.yml.
Thanks anyway, cheers!
Rito
I am trying to configure kubectl to use a remote Kubernetes cluster on my local Windows machine, following the "Install with Chocolatey on Windows" tutorial. However, I am not quite sure how to fill in the config file. It should look something like this:
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
or like this, but I have no idea how to fill in those "variables":
apiVersion: v1
clusters:
- cluster:
    server: https://123.456.789.123:9999
    certificate-authority-data: yourcertificate
  name: your-k8s-cluster-name
contexts:
- context:
    cluster: your-k8s-cluster-name
    namespace: default
    user: admin
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: your-login-token
These values (in particular the login token) must be provided by your k8s cluster administrator, usually in the form of a ready-made kubeconfig file.
After that you can access your cluster with the --kubeconfig <path to your kubeconfig file> option:
kubectl cluster-info --kubeconfig ./.kube/config -v=7 --insecure-skip-tls-verify=true --alsologtostderr
I am using the Golang library client-go to connect to a locally running Kubernetes cluster. To start with, I took the code from the example out-of-cluster-client-configuration.
Running the code like this:
$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go
results in the following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
I am not quite sure which part of the configuration I am missing. I've researched the following links:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
But with no luck.
I guess I need to either let client-go know which token/service account to use, or configure kubectl in a way that allows everyone to connect to its API.
Here's the state of my kubectl setup, shown via the output of some commands:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get serviceAccounts
NAME SECRETS AGE
default 1 3d
test-user 1 1d
$ kubectl describe serviceaccount test-user
Name: test-user
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: test-user-token-hxcsk
Tokens: test-user-token-hxcsk
Events: <none>
$ kubectl get secret test-user-token-hxcsk -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: test-user
    kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
  creationTimestamp: 2018-06-09T10:55:17Z
  name: test-user-token-hxcsk
  namespace: default
  resourceVersion: "110618"
  selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
  uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
This answer could be a little outdated, but I will try to give more perspective/baseline for future readers who encounter the same or a similar problem.
TL;DR
The following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
is most likely connected with the lack of a token in the /var/run/secrets/kubernetes.io/serviceaccount location when using the in-cluster-client-configuration. It could also be caused by running in-cluster-client-configuration code outside of the cluster (for example, running the code directly on a laptop or in a plain Docker container).
You can use the following commands to troubleshoot your issue further (assuming this code is running inside a Pod):
$ kubectl get serviceaccount X -o yaml:
look for: automountServiceAccountToken: false
$ kubectl describe pod XYZ
look for: containers.mounts and volumeMounts where Secret is mounted
Citing the official documentation:
Authenticating inside the cluster
This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running inside the Kubernetes cluster.
client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.
-- Github.com: Kubernetes: client-go: Examples: in cluster client configuration
If you are authenticating to the Kubernetes API with ~/.kube/config you should be using the out-of-cluster-client-configuration.
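A minimal sketch of that distinction, assuming a default kubeconfig path and namespace: try the in-cluster config first and fall back to the kubeconfig when running outside the cluster:

// Sketch: prefer the in-cluster config inside a Pod, fall back to a kubeconfig
// (out-of-cluster) when running on a laptop. Path and namespace are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Inside a Pod this reads /var/run/secrets/kubernetes.io/serviceaccount/token.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		// Outside the cluster: fall back to ~/.kube/config.
		home, _ := os.UserHomeDir()
		cfg, err = clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
	}

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := cs.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d pods\n", len(pods.Items))
}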
Additional information:
I've added some additional information for further troubleshooting when the code is run inside a Pod.
automountServiceAccountToken: false
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: go-serviceaccount
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: sdk
spec:
  serviceAccountName: go-serviceaccount
  automountServiceAccountToken: false
-- Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account
$ kubectl describe pod XYZ:
When the serviceAccount token is mounted, the Pod definition should look like this:
<-- OMITTED -->
Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
  go-serviceaccount-token-4rst8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  go-serviceaccount-token-4rst8
    Optional:    false
If it's not:
<-- OMITTED -->
Mounts:  <none>
<-- OMITTED -->
Volumes: <none>
Additional resources:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication
Just to make it clear, in case it helps you further debug it: the problem has nothing to do with Go or your code, and everything to do with the Kubernetes node not being able to get a token from the Kubernetes master.
In kubectl config view, clusters.cluster.server should probably point at an IP address that the node can reach.
It needs to access the CA, i.e. the master, in order to provide that token, and I'm guessing it fails to do so for that reason.
kubectl describe pod <your_pod_name> would probably tell you what the problem was in acquiring the token.
Since you assumed the problem was Go/your code and focused on that, you neglected to provide more information about your Kubernetes setup, which makes it more difficult for me to give you a better answer than my guess above ;-)
But I hope it helps!
I have a simple application that Gets and Puts information from a Datastore.
It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:
Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
I'm using the cloud.google.com/go/datastore package and the Go language.
I don't know why I'm getting this error since the application works everywhere else just fine.
Update:
Looking for an answer I found this comment on Google Groups:
In order to use Cloud Datastore from GCE, the instance needs to be
configured with a couple of extra scopes. These can't be added to
existing GCE instances, but you can create a new one with the
following Cloud SDK command:
gcloud compute instances create hello-datastore --project
--zone --scopes datastore userinfo-email
Would that mean I can't use Datastore from GKE by default?
Update 2:
I can see that when creating my cluster I didn't enable any permissions (they are disabled for most services by default). I suppose that's what's causing the issue.
Strangely, I can use CloudSQL just fine even though it's disabled (using the cloudsql_proxy container).
So what I learnt in the process of debugging this issue was that:
During the creation of a Kubernetes Cluster you can specify permissions for the GCE nodes that will be created.
If you for example enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.
If your cluster node permissions are disabled for most things (default settings) like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.
Another alternative is to create a new node pool with the gcloud command, set the desired permission scopes and then migrate all deployments to the new node pool (rather tedious).
So at the end of the day I fixed the issue by creating a Service Account for my application, downloading its JSON authentication key, creating a Kubernetes Secret containing that key, and, in the case of Datastore, setting the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the mounted secret's JSON key.
This way when my application starts, it checks if the GOOGLE_APPLICATION_CREDENTIALS variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.
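On the application side nothing special is then needed beyond creating the client; the library resolves credentials through GOOGLE_APPLICATION_CREDENTIALS (which points at /auth/credentials.json in the deployment snippet below). A sketch with a placeholder project ID and entity:

// Sketch: the Datastore client resolves credentials via Application Default
// Credentials, i.e. the file pointed to by GOOGLE_APPLICATION_CREDENTIALS.
// Project ID and entity are placeholders.
package main

import (
	"context"
	"log"

	"cloud.google.com/go/datastore"
)

type Task struct {
	Description string
}

func main() {
	ctx := context.Background()

	client, err := datastore.NewClient(ctx, "my-project-id")
	if err != nil {
		log.Fatalf("datastore.NewClient: %v", err)
	}
	defer client.Close()

	key := datastore.NameKey("Task", "sample-task", nil)
	if _, err := client.Put(ctx, key, &Task{Description: "hello"}); err != nil {
		log.Fatalf("Put: %v", err)
	}

	var got Task
	if err := client.Get(ctx, key, &got); err != nil {
		log.Fatalf("Get: %v", err)
	}
	log.Printf("got: %+v", got)
}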
Deployment YAML snippet:
...
containers:
- image: foo
  name: foo
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /auth/credentials.json
  volumeMounts:
  - name: foo-service-account
    mountPath: "/auth"
    readOnly: true
volumes:
- name: foo-service-account
  secret:
    secretName: foo-service-account
After struggling for some hours, I was also able to connect to the Datastore. Here are my results, most of them taken from the Google documentation:
Create Service Account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME]
Get full iam account name
gcloud iam service-accounts list
The result will look something like this:
[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Give owner access to the project for the service account
gcloud projects add-iam-policy-binding [PROJECT_NAME] --member serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com --role roles/owner
Create key-file
gcloud iam service-accounts keys create mycredentials.json --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Create app-key Secret
kubectl create secret generic app-key --from-file=credentials.json=mycredentials.json
This app-key secret will then be mounted in the deployment.yaml
Edit deployment file
deployment.yaml:
...
spec:
  containers:
  - name: app
    image: eu.gcr.io/google_project_id/springapplication:v1
    volumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/credentials.json
    ports:
    - name: http-server
      containerPort: 8080
  volumes:
  - name: google-cloud-key
    secret:
      secretName: app-key
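Alternatively, instead of relying on the GOOGLE_APPLICATION_CREDENTIALS environment variable, the key file can be passed to the client explicitly. A sketch, assuming the mount path from the deployment above and a placeholder project ID:

// Sketch: pass the mounted key file to the client explicitly instead of relying
// on the GOOGLE_APPLICATION_CREDENTIALS environment variable.
package main

import (
	"context"
	"log"

	"cloud.google.com/go/datastore"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	client, err := datastore.NewClient(ctx, "google_project_id",
		option.WithCredentialsFile("/var/secrets/google/credentials.json"))
	if err != nil {
		log.Fatalf("datastore.NewClient: %v", err)
	}
	defer client.Close()
	// ... use the client as usual.
}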
I was using a minimalistic Dockerfile like:
FROM scratch
ADD main /
EXPOSE 80
CMD ["/main"]
which kept my Go app in an indefinite "hanging" state when trying to connect to the GCP Datastore. After LOTS of experimenting I figured out that the scratch Docker image might be missing certain tools / environment variables / libraries that the Google Cloud library requires, most likely the CA certificates needed for TLS. Using this Dockerfile now works:
FROM golang:alpine
RUN apk add --no-cache ca-certificates
ADD main /
EXPOSE 80
CMD ["/main"]
It does not require me to provide the Google credentials environment variable. The library figures out where it is running on its own (via Application Default Credentials rather than context.Background()) and automatically uses the default service account that Google creates for you when you create your cluster on GKE.