How to Generate a Bearer Token From a Kubeconfig File Programmatically, Outside of Kubernetes, With Golang - go

I am trying to create a CLI tool for Kubernetes. I need to generate a Bearer Token for communicating with the Kubernetes API. How can I generate the token from the kubeconfig file? I do not want to use an external library or kubectl.
Here is an example kubeconfig file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01USXhNREU1TVRReU0xb1hEVE13TVRJd09ERTVNVFF5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUE4CmhEcDBvRVUzNTFFTEVPTzZxd3dUQkZ2U2ZxWWlGOE0yR0VZMXNLRFZ0MUNyL3czOS83QkhCYi9NOE5XaW9vdlQKZ2hsZlN2TXhsaTBRUVRSQmd5NHp2ZkNveXdBWkg0dWphYTBIcW43R2tkbUdVUC94RlZoWEIveGhmdHY5RUFBNwpMSW1CT3dZVHJ6ajRtS0JxZ3RTenhmVm5hN2J2U2oxV203bElYaTNaSkZzQmloSFlwaXIwdFZEelMzSGtEK2d0Cno1RkhOU0dnSS9MTlczOWloTU1RQ0g0ZFhtQVVueGFmdFdwUlRQOXFvSHJDWTZxVlNlbEVBYm40UWZVZ2ZUaDEKMUNhdW01bllOUjlDZ3lPOStNY0hXMTdYV0c4NGdGV3p6VUxPczVXbUo0VVY4RjdpdkVhMVJlM2Q3VkpKUEF6VwpCME4rWFFmcXg5UTArRWlXWklVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZBV0p0Y2RLYjRRQWU2ekw4NzdvN3FQNVVWNWZNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCYWt3bE1LL2VmWUpyNlVlWEpkenBURmRaS0lYbWFEaWxhZ3ZNOGNkci9nVjJlWVlEdgpRY3FvcUwvNS95U3Y1T2ZpR0MrU25nUXhZMHp0a0VVQm04N1NOR1dYLzd1VlUwbytVV2tzZERLR2JhRmxIVE9PCmFBR3dndEZ4T1YzeTF1WnZJVm8vbW12WTNIMTBSd29uUE8yMU5HMEtRWkRFSStjRXFFb1JoeDFtaERCeGVSMUgKZzdmblBJWTFUczhWM2w0SFpGZ015anpwVWtHeUNjMVYxTDk5Vk55UHJISEg0L1FibVM5UWdkNUNWZXNlRm9HaApOVkQ4ZHRjUmpWM2tGYVVJelJ6a3lRMG1FMXk1RXRXMWVZZnF4QnAxNUN3NnlSenNWMzcrdlNab0pSS1FoNGw4CjB1b084cFhCMGQ4V1hMNml0UWp2ZjJOQnBnOU1nY0Q2QzEvZgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.1.18:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJYldUcHpDV25zTVl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURFeU1UQXhPVEUwTWpOYUZ3MHlNVEV5TVRBeE9URTBNalZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBGT09JcnZiTGd1SWJmVXUKd29BaG5SaktEQVFCdkp3TlliSWZkSlNGSFBhY1ljbmVUcUVSVXFZeEs4azFHRytoR0FDTlFPb2VNV3Q1anNjRwpuN0FFdHhscUJQUzNQMzBpMVhLSmZnY2Q1OXBxaG1kOVFIdFNOVTlUTVlaM2dtY0x4RGl1cXZFRGI0Q042UTl6CkI3Yk5iUDE4Y3pZdHVwbUJrY2plMFF1ZEd2dktHcWhaY1NkVFZMT3ErcTE0akM4TTM5UmgzdDk1ZEM2aWRYaUsKbWE3WGs5YnJtalJnWDZRVUJJc0xwTnkvc3dJaUFiUTlXSm1YL2VkdHhYTGpENllNK1JzQ0JkbGc5MEhhcURqdgpKSlcwQ2g4cDJkV1ZwalQrWjBMd2ZnUENBN1YzS1o4YWdldHhwQ0xQcmxlOTdnRStVM1BKbXJVY0lBaVJlbzFoCmsvOXVqUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVCWW0xeDBwdmhBQjdyTXZ6dnVqdW8vbFJYbDh3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFDeXVKazdjdVppdzhmQW5teUdSa0trdFAzUE5LUnBCdDVnUVdjUzJuRFUrTmpIMjh1MmpGUDQ5Cm1xbjY1SGpmQU9iOVREUUlRcUtZaWdjYTViOXFYRXlDWHZEN1k1SXJ4RmN3VnEvekdZenFYWjVkR0srUnlBUlQKdm0rQzNaTDV0N2hJc1RIYWJ0SkhTYzhBeFFPWEdTd1h0YkJvdHczd2ZuSXB0alY1SG1VYjNmeG9KQUU4S1hpTgpHcXZ5alhpZHUwc1RtckszOHM5ZjZzTFdyN1lOQTlKNEh4ditkNk15ZFpSWDhjS3VRaFQzNDFRcTVEVnRCT1BoCjBpb1Mwa0JEUDF1UWlIK0tuUE9MUmtnYXAyeDhjMkZzcFVEY1hJQlBHUDBPR1VGNWFMNnhIa2NsZ0Q5eHFkU0cKMVlGVjJUamtjNHN2U1hMSkt1cmU1S2IrODcyQlZWWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMEZPT0lydmJMZ3VJYmZVdXdvQWhuUmpLREFRQnZKd05ZYklmZEpTRkhQYWNZY25lClRxRVJVcVl4SzhrMUdHK2hHQUNOUU9vZU1XdDVqc2NHbjdBRXR4bHFCUFMzUDMwaTFYS0pmZ2NkNTlwcWhtZDkKUUh0U05VOVRNWVozZ21jTHhEaXVxdkVEYjRDTjZROXpCN2JOYlAxOGN6WXR1cG1Ca2NqZTBRdWRHdnZLR3FoWgpjU2RUVkxPcStxMTRqQzhNMzlSaDN0OTVkQzZpZFhpS21hN1hrOWJybWpSZ1g2UVVCSXNMcE55L3N3SWlBYlE5CldKbVgvZWR0eFhMakQ2WU0rUnNDQmRsZzkwSGFxRGp2SkpXMENoOHAyZFdWcGpUK1owTHdmZ1BDQTdWM0taOGEKZ2V0eHBDTFBybGU5N2dFK1UzUEptclVjSUFpUmVvMWhrLzl1alFJREFRQUJBb0lCQUEvclVxRTAyYnJiQnNIZwpTb0p5YUI4cEZjZDFSdXl5d0JNSEdZQS9HU3p0YTJYTmx6OUs3NWZ4T3pDdFgzRk9sbkRQR2Z3cjU4Sy9BN3IxCldudzVaeUxXdmxOQ24vNHFBYzl0d1RQd04walFWL09OVlBUb2Q0KzdVQkFveGxrZ3ByV0gzMUVRdWNKN2dGeWUKNFp0bFRLMVhjWHNjV01JNW1MMGJMR3V0QjRSWU5meHAwZ1AxekJ6Z2FLYjVGK2xVcFdHZ2w1dHNHay9ncm9uSwpUVkVCQmtBT0lyU0pFemc5YUJ2emJMS0h3TnZlL1QrVEdJTGVZalpRYVkxL1lLN2JpbFVkaFlQOGI2OWhxbFZnClVxc0hpRjVXNzYzenMrdXl5azNtUU1yblJKQ2ZUWDNTRWhOVm1BdTl0TXh2eE1BRk9QT1lLb3FPb25LNHdrZWwKU21HUHBnRUNnWUVBNjJhMjdWdlgrMVRlellIWmZWSW8rSi8welVmZERqZ0MvWG1zTlBWdkhXdXlkOUVRQ1JXKwpOS1FpOGdMWmNUSEpWU3RidkpRVENSQUdCL0wzM09SUTI5Tm1KNnVVUWNNR0pBSzhwckdLKytwTXF3NHRPdzMvCkhDblVQZGVaSGFVVVFnODVJeWMrbmg5QnFQWndXclk3REZEbENUOXI5cVZJN1RvS0ptd2RjdlVDZ1lFQTRvNVUKZDZXdFpjUk5vV041UUorZVJkSDRkb2daQnRjQ0ExTGNWRDdxUzYrd0s2eTdRU05zem9wWTc1MnlLWU91N2FCWQo2RlhDQVRHaG0ranN6ZE14allrV2ROdGxwbDZ4ejZRZmN6ZWgydjVUQVdpRkZyMTlqU1RkLzNrRlFDNytpeUQyCnZRSHpacXZZSUhtQ3VleldHRFJrVVB2dzk1dTFranphcEZCRHZqa0NnWUJXZUpLMXVra3FiOUN3V1FTVmZuckMKYWErNVFLNjVMR1ljeW5jeHRQNnVKZ09XODlzYUd6eVZoYjI0Zk1kM1J6eVg1cWQ2TEVLWno2TUhoSDc4UzNwUQpaZVZlcVM1NndiTWR3MHVkU0JhdjF5OTJubXlMQnVjeFowUXB1MnJwY3R4d0w3dGphR1VlSElrNEVkN1AwNlQ1Ckx6WVRJWkw5TlZZR25vMWY4OU1WaVFLQmdRQ2RKQjNnYzNGSEloYTZkM1cxNWtEd3FzZ001eTk4dUF0MFpMZmcKVTFkTnNnbWU4WXRjamdhOVorWnlKVTViVHpRNUxEd2V3c1R5OFFyb1NuSmQvVHZrc1E1N2RXWVhOSjFlcWJjSwp3cTZvYURrSXhBZDBFM0VQUW1BZEFFTXRGcXVGc3hLUlhOWUlBKysvN3FoRzc4ZzhON0xSSFQ4eGI3Wk1QWnRsCjF5cDF1UUtCZ0VGemtmR3VzeGxJU2xOY1VDUGFTUUx6bTZqYmdjdUlXcjZaN053R01pVHM3b2x5TnQrdnpiRnMKbnk5d1pnbHlsS0M2NjcreXpIa0tkbnZBdWRuS290bDhybzRCOVhjUHhGWDJ5NnpwZWIxWS91STZpVzl4Y2NSNQozbUlVS2QrOGdMczRrTUttL2dXYjZxTHdPZ3pjQWJIbTV6SVhBMXQ5TUJWYlE2ZHEvMlZDCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

I need to generate a Bearer Token for communicating with the Kubernetes API
You cannot "generate" these tokens. They are issued by the control plane and signed with the private key that the control plane holds. It would be a security hole if you could generate these on the client side.
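What you can do with the kubeconfig above is authenticate with the client certificate it already contains, which needs no token at all. Below is a minimal sketch (standard library only, no kubectl) of that approach; caData, certData and keyData are placeholders for the base64 fields you would read out of the kubeconfig with whatever parser you prefer:
// Minimal sketch: authenticate to the API server with the client certificate
// from the kubeconfig instead of a bearer token (standard library only).
package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/base64"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholders: fill these from your parsed kubeconfig
	// (certificate-authority-data, client-certificate-data, client-key-data).
	var caData, certData, keyData string
	apiServer := "https://192.168.1.18:6443"

	caPEM, _ := base64.StdEncoding.DecodeString(caData)
	certPEM, _ := base64.StdEncoding.DecodeString(certData)
	keyPEM, _ := base64.StdEncoding.DecodeString(keyData)

	// Client certificate used to authenticate to the API server.
	clientCert, err := tls.X509KeyPair(certPEM, keyPEM)
	if err != nil {
		panic(err)
	}
	// Trust the cluster CA from the kubeconfig.
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{clientCert},
				RootCAs:      caPool,
			},
		},
	}

	resp, err := client.Get(apiServer + "/api/v1/namespaces/default/pods")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
If you really do need a bearer token, the usual route is to ask the cluster for one (for example a service account token issued by the control plane), not to derive it from the kubeconfig.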

Related

Kubernetes API library support for apiVersion - "authentication.gke.io/v2alpha1" vs "v1"

v1 - v1 was the first stable release of the Kubernetes API. It contains many of the core Kubernetes objects.
authentication.gke.io/v2alpha1 - API versions with 'alpha' in their name are early candidates for new functionality coming into Kubernetes. They are not stable for use and may contain bugs.
For the alpha version of authentication with the API server (the Kubernetes control plane), we are stuck on Kubernetes API library support for kind: ClientConfig.
In our company's scenario, we have the authentication.gke.io/v2alpha1 version, with kind: ClientConfig, to authenticate against the Kubernetes API server.
Below are sample templates for both versions:
kind: ClientConfig
apiVersion: authentication.gke.io/v2alpha1
spec:
  name: dev-corp
  server: https://10.x.x.x:443
  certificateAuthorityData: ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
  authentication:
  - name: oidc
    oidc:
      clientID: aaaaad3-9aa1-33c8-dd0-ddddd6b5bf5
      clientSecret: ccccccccccccccccc-
      issuerURI: https://login.microsoftonline.com/aaaa92-aab7-bbfa-cccf-ddaaaaaaaa/v2.0
      kubectlRedirectURI: http://localhost:12345/callback
      cloudConsoleRedirectURI: http://console.cloud.google.com/kubernetes/oidc
      scopes: offline_access,profile
      userClaim: upn
      userPrefix: '-'
      groupsClaim: groups
  preferredAuthentication: oidc
kind: Config
apiVersion: v1
clusters:
- cluster:
    server: https://192.168.10.190:6443
  name: cluster-1
- cluster:
    server: https://192.168.99.101:8443
  name: cluster-2
contexts:
- context:
    cluster: cluster-1
    user: kubernetes-admin-1
  name: cluster-1
- context:
    cluster: cluster-2
    user: kubernetes-admin-2
  name: cluster-2
preferences: {}
users:
- name: kubernetes-admin-1
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-1.crt
    client-key: /home/user/.minikube/credential-for-cluster-1.key
- name: kubernetes-admin-2
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-2.crt
    client-key: /home/user/.minikube/credential-for-cluster-2.key
The kubernetes/client-go library from Kubernetes does not support kind: ClientConfig and gives the error: no kind "ClientConfig" is registered for version "authentication.gke.io/v2alpha1"
https://github.com/kubernetes/client-go/issues/1151
Is there Kubernetes API library support for loading a kind: ClientConfig configuration?
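For reference, one workaround while library support is missing is to unmarshal the ClientConfig file into hand-written structs and then build your REST configuration from the OIDC settings yourself. A rough sketch follows; the struct fields simply mirror the sample above and are an assumption, not an official schema, and the file path is hypothetical:
// Sketch: parse a ClientConfig manifest with plain YAML structs,
// since client-go rejects the kind.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type OIDCConfig struct {
	ClientID           string `yaml:"clientID"`
	ClientSecret       string `yaml:"clientSecret"`
	IssuerURI          string `yaml:"issuerURI"`
	KubectlRedirectURI string `yaml:"kubectlRedirectURI"`
	Scopes             string `yaml:"scopes"`
	UserClaim          string `yaml:"userClaim"`
	GroupsClaim        string `yaml:"groupsClaim"`
}

type AuthMethod struct {
	Name string     `yaml:"name"`
	OIDC OIDCConfig `yaml:"oidc"`
}

type ClientConfigSpec struct {
	Name                     string       `yaml:"name"`
	Server                   string       `yaml:"server"`
	CertificateAuthorityData string       `yaml:"certificateAuthorityData"`
	Authentication           []AuthMethod `yaml:"authentication"`
	PreferredAuthentication  string       `yaml:"preferredAuthentication"`
}

type ClientConfig struct {
	Kind       string           `yaml:"kind"`
	APIVersion string           `yaml:"apiVersion"`
	Spec       ClientConfigSpec `yaml:"spec"`
}

func main() {
	raw, err := os.ReadFile("clientconfig.yaml") // hypothetical path
	if err != nil {
		panic(err)
	}
	var cfg ClientConfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	// From here you would run the OIDC flow yourself and build a rest.Config.
	fmt.Println(cfg.Spec.Server, cfg.Spec.PreferredAuthentication)
}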

passing application configuration using K8s configmaps

How do I pass application.properties to a Spring Boot application using ConfigMaps? Since the application.yml file contains sensitive information, this requires passing in both Secrets and ConfigMaps. In this case, what options do we have to pass both the sensitive and non-sensitive configuration data to the Spring Boot pod?
I am currently using Spring Cloud Config Server, which can encrypt the sensitive data using encrypt.key and decrypt it again.
ConfigMaps, as described by @paltaa, would do the trick for non-sensitive information. For sensitive information I would use a SealedSecret.
Sealed Secrets is composed of two parts:
A cluster-side controller / operator
A client-side utility: kubeseal
The kubeseal utility uses asymmetric crypto to encrypt secrets that only the controller can decrypt.
These encrypted secrets are encoded in a SealedSecret resource, which you can see as a recipe for creating a secret.
Once installed, you create your Secret as normal and then run:
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml
You can safely push your SealedSecret to GitHub, etc.
The normal Kubernetes Secret will appear in the cluster after a few seconds, and you can use it as you would any Secret you created directly (e.g. reference it from a Pod).
You can mount Secrets as volumes, the same as ConfigMaps. For example:
Create the secret.
kubectl create secret generic ssh-key-secret --from-file=application.properties
Then mount it as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: ssh-key-secret
  containers:
  - name: ssh-test-container
    image: mySshImage
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
More information in https://kubernetes.io/docs/concepts/configuration/secret/
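For completeness, inside the container the mounted Secret is just a set of files, so the application reads it from the mountPath like any other file. A tiny sketch using the path from the Pod above (the Spring Boot app would do the equivalent when pointed at that directory):
// Sketch: read the Secret key mounted as a file by the Pod spec above.
package main

import (
	"fmt"
	"os"
)

func main() {
	// The file name matches the key created with --from-file=application.properties.
	data, err := os.ReadFile("/etc/secret-volume/application.properties")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(data))
}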

Understanding sourcing secrets in kubernetes spring boot app

I am following this guide to consume secrets: https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#secrets-propertysource.
It says, roughly:
save the secrets
reference the secrets in the deployment.yml file
containers:
- env:
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password
Then it says "You can select the Secrets to consume in a number of ways:" and gives 3 examples. However, without doing any of these steps, I can still see the secrets in my env perfectly. Furthermore, the operations in step 1 and step 2 operate independently of Spring Boot (save secrets and move them into environment variables).
My questions:
If I make the changes suggested in step 3, what changes/improvements does it make for my container/app/pod?
Is there no way to avoid all the mapping in step 1 and just put all the secrets in the env?
They write -Dspring.cloud.kubernetes.secrets.paths=/etc/secrets to source all secrets; how did they know the secrets were in a folder called /etc/?
You can expose all of a Secret's keys as environment variables in the following way:
containers:
- name: app
  envFrom:
  - secretRef:
      name: db-secret
As for where Spring gets secrets from - I'm not an expert in Spring but it seems there is already an explanation in the link you provided:
When enabled, the Fabric8SecretsPropertySource looks up Kubernetes for
Secrets from the following sources:
Reading recursively from secrets mounts
Named after the application (as defined by spring.application.name)
Matching some labels
So it takes secrets from the secrets mount (if you mount them as volumes). It also scans the Kubernetes API for Secrets (I guess in the same namespace the app is running in). It can do this by utilizing the Kubernetes service account token, which by default is always mounted into the pod. Whether that works depends on which Kubernetes RBAC permissions are granted to the pod's service account.
So it tries to find Secrets using the Kubernetes API and matches them against the application name or application labels.
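To make the first bullet concrete, "reading recursively from secrets mounts" just means walking the mounted directory and turning each file into a property (file name = key, file content = value). A conceptual sketch of that idea, not the Fabric8 implementation, assuming the directory passed via -Dspring.cloud.kubernetes.secrets.paths:
// Conceptual sketch: walk a secrets mount and collect file name -> content pairs.
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	root := "/etc/secrets" // directory given to spring.cloud.kubernetes.secrets.paths
	props := map[string]string{}

	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		props[d.Name()] = string(data) // each file becomes one property
		return nil
	})
	if err != nil {
		panic(err)
	}
	for k, v := range props {
		fmt.Printf("%s=%s\n", k, v)
	}
}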

Connect to dynamically create new cluster on GKE

I am using the cloud.google.com/go SDK to programmatically provision the GKE clusters with the required configuration.
I set the ClientCertificateConfig.IssueClientCertificate = true (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#ClientCertificateConfig).
After the cluster is provisioned, I use the ca_certificate, client_key, client_secret returned for the same cluster (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#MasterAuth). Now that I have the above 3 attributes, I try to generate the kubeconfig for this cluster (to be later used by helm)
Roughly, my kubeconfig looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64_encoded_data>
    server: https://X.X.X.X
  name: gke_<project>_<location>_<name>
contexts:
- context:
    cluster: gke_<project>_<location>_<name>
    user: gke_<project>_<location>_<name>
  name: gke_<project>_<location>_<name>
current-context: gke_<project>_<location>_<name>
kind: Config
preferences: {}
users:
- name: gke_<project>_<location>_<name>
  user:
    client-certificate-data: <base64_encoded_data>
    client-key-data: <base64_encoded_data>
On running kubectl get nodes with the above config, I get the error:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list resource "serviceaccounts" in API group "" at the cluster scope
Interestingly if I use the config generated by gcloud, the only change is in the user section:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
This configuration seems to work just fine. But as soon as I add client cert and client key data to it, it breaks:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
  client-certificate-data: <base64_encoded_data>
  client-key-data: <base64_encoded_data>
I believe I'm missing some details related to RBAC but I'm not sure what. Will you be able to provide me with some info here?
Also, referring to this question, I've tried to rely only on a username-password combination first, using that to apply a new clusterrolebinding in the cluster. But I'm unable to use just the username-password approach. I get the following error:
error: You must be logged in to the server (Unauthorized)
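For context on why the gcloud-generated user section works: gcloud config config-helper emits an OAuth2 access token, and kubectl sends it as a bearer token. The sketch below shows one way to obtain an equivalent token programmatically via Application Default Credentials; the endpoint and CA handling are simplified placeholders, and whichever identity the token represents still needs RBAC bindings in the cluster (which is also why the "client" certificate user above is Forbidden):
// Sketch: call a GKE cluster with a bearer token from Application Default Credentials.
package main

import (
	"context"
	"fmt"
	"net/http"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()

	// Application Default Credentials (e.g. a service account) with the cloud-platform scope.
	ts, err := google.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		panic(err)
	}
	tok, err := ts.Token()
	if err != nil {
		panic(err)
	}

	// Placeholder endpoint; in practice you would also trust the cluster CA returned
	// in MasterAuth instead of relying on the default system roots.
	req, _ := http.NewRequestWithContext(ctx, "GET", "https://X.X.X.X/api/v1/nodes", nil)
	req.Header.Set("Authorization", "Bearer "+tok.AccessToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}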

Adding authentication proxy in front of kubernetes

I'm adding a proxy in front of the Kubernetes API in order to authenticate users (among other actions) with a homemade authentication system.
I've modified my kube configuration to have kubectl hit the proxy. The proxy has its own kubeconfig with valid certificate-authority-data, so I don't need any credentials on my side.
So far this is working fine, here is the minimum configuration I need locally:
clusters:
- cluster:
    server: http://localhost:8080
  name: proxy
contexts:
- context:
    cluster: proxy
  name: proxy
current-context: proxy
Now the authentication should be based on a token, which I hoped I would be able to pass as part of the kubectl request header.
I tried multiple configurations, such as adding a user with a token in the kubeconfig:
clusters:
- cluster:
    server: http://localhost:8080
  name: proxy
contexts:
- context:
    cluster: proxy
    user: robin
  name: proxy
current-context: proxy
users:
- name: robin
  user:
    token: my-token
Or specifying an auth-provider, such as:
clusters:
- cluster:
    server: http://localhost:8080
  name: proxy
contexts:
- context:
    cluster: proxy
    user: robin
  name: proxy
current-context: proxy
users:
- name: robin
  user:
    auth-provider:
      config:
        access-token: my-token
I even tried without any user, just adding my token as part of the preferences, since all I want is to have the token in the header:
clusters:
- cluster:
    server: http://localhost:8080
  name: proxy
contexts:
- context:
    cluster: proxy
  name: proxy
current-context: proxy
preferences:
  token: my-token
But I was never able to see my-token as part of the request header on the proxy side. Dumping the request, all I got was:
GET /api/v1/namespaces/default/pods?limit=500 HTTP/1.1
Host: localhost:8080
Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json
Accept-Encoding: gzip
User-Agent: kubectl/v1.11.0 (darwin/amd64) kubernetes/91e7b4f
I am obviously missing something here: how can kubectl not pass the user information in its header? Say I do not have a proxy; how does "kubectl -> kubernetes" token authentication work?
If someone has any experience adding this kind of authentication layer between Kubernetes and a client, I could use some help :)
Token credentials are only sent over TLS-secured connections. The server must be https://...
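In other words, serve the proxy over HTTPS and kubectl will attach the token from the kubeconfig as an Authorization: Bearer header. A rough sketch of such a TLS-terminating proxy in Go; the certificate paths, upstream URL and validate() check are placeholders standing in for your homemade system:
// Sketch: TLS-terminating proxy that checks the bearer token kubectl sends,
// then forwards the request to the real API server.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	upstream, err := url.Parse("https://192.168.1.18:6443") // the real API server
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	// In practice, set proxy.Transport to trust the cluster CA and attach the
	// proxy's own credentials (from its kubeconfig) towards the API server.

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		if token == "" || !validate(token) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	// kubectl only sends token credentials over TLS, so the proxy must serve HTTPS.
	log.Fatal(http.ListenAndServeTLS(":8443", "proxy.crt", "proxy.key", handler))
}

// validate is a stand-in for the homemade authentication system.
func validate(token string) bool { return token == "my-token" }
The client-side kubeconfig would then point at https://localhost:8443 (or wherever the proxy listens) with the token user from your second example; the proxy still needs its own credentials towards the real API server.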
