Adding an authentication proxy in front of Kubernetes

I'm adding a proxy in front of the Kubernetes API in order to authenticate users (among other things) with a homemade authentication system.
I've modified my kube configuration so that kubectl hits the proxy. The proxy has its own kubeconfig with valid certificate-authority-data, so I don't need any credentials on my side.
So far this is working fine; here is the minimum configuration I need locally:
clusters:
- cluster:
    server: http://localhost:8080
  name: proxy
contexts:
- context:
    cluster: proxy
  name: proxy
current-context: proxy
Now the authentication should be based on a token, which I hoped I would be able to pass as part of the kubectl request headers.
I tried multiple configurations, such as adding a user with a token to the kubeconfig:
clusters:
- cluster:
    server: http://localhost:8080
  name: proxy
contexts:
- context:
    cluster: proxy
    user: robin
  name: proxy
current-context: proxy
users:
- name: robin
  user:
    token: my-token
Or specifying an auth-provider, such as:
clusters:
- cluster:
    server: http://localhost:8080
  name: proxy
contexts:
- context:
    cluster: proxy
    user: robin
  name: proxy
current-context: proxy
users:
- name: robin
  user:
    auth-provider:
      config:
        access-token: my-token
I even tried without any user, just adding my token under preferences, since all I want is to have the token in the header:
clusters:
- cluster:
    server: http://localhost:8080
  name: proxy
contexts:
- context:
    cluster: proxy
  name: proxy
current-context: proxy
preferences:
  token: my-token
But I was never able to see my-token in the request headers on the proxy side. Dumping the request, all I got was:
GET /api/v1/namespaces/default/pods?limit=500 HTTP/1.1
Host: localhost:8080
Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json
Accept-Encoding: gzip
User-Agent: kubectl/v1.11.0 (darwin/amd64) kubernetes/91e7b4f
I am obviously missing something here: how can kubectl not pass the user information in its headers? Say I do not have a proxy; how does "kubectl -> kubernetes" token authentication work?
If someone has any experience adding this kind of authentication layer between Kubernetes and a client, I could use some help :)

Token credentials are only sent over TLS-secured connections. The server must be https://...
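For illustration, here is a minimal sketch of the kubeconfig from the question with the proxy exposed over TLS; the https scheme, the 8443 port, and insecure-skip-tls-verify (acceptable for local testing only, a certificate-authority entry is preferable) are assumptions, not part of the original setup. With an https server, kubectl sends the token as an Authorization: Bearer header:
clusters:
- cluster:
    # assumed TLS endpoint of the proxy; over plain http the token is never sent
    server: https://localhost:8443
    insecure-skip-tls-verify: true
  name: proxy
contexts:
- context:
    cluster: proxy
    user: robin
  name: proxy
current-context: proxy
users:
- name: robin
  user:
    token: my-token   # sent as "Authorization: Bearer my-token"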

Related

Connection to KV Vault only works through WireMock

When I try to connect to my Vault engine, I get an Error 503 Service Unavailable. If I send the call to a local WireMock that forwards it, with fewer headers, to the same address, it works. The Spring Cloud version is 3.1.1.
Cannot enhance VaultToken to a LoginToken: Token self-lookup failed: 503 <html><body><h1>503 Service Unavailable</h1>
The bootstrap config looks like this:
spring:
  cloud:
    vault:
      scheme: https
      host: <uri-to-the-vault>
      port: 443
      uri: <uri-to-the-vault>
      authentication: token
      token: "TOKEN"
      enabled: true
      kv:
        enabled: true
        backend: <backend-name>
        profiles: <profile-name>
        application-name: <application-name>
I tried to set up a connection through WireMock to check whether the call is incorrect, and to redirect the call: WireMock takes the call and sends it to the same base URL written above, but with only the token as a header, and it works. Postman takes the same call and it works as well.

How to Generate a Bearer Token from a Kubeconfig File Programmatically Outside of Kubernetes with Golang

I am trying to create a CLI tool for Kubernetes. I need to generate a bearer token for communicating with the Kubernetes API. How can I generate the token from the kubeconfig file? I do not want to use an external library or kubectl.
Here is an example kubeconfig file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01USXhNREU1TVRReU0xb1hEVE13TVRJd09ERTVNVFF5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUE4CmhEcDBvRVUzNTFFTEVPTzZxd3dUQkZ2U2ZxWWlGOE0yR0VZMXNLRFZ0MUNyL3czOS83QkhCYi9NOE5XaW9vdlQKZ2hsZlN2TXhsaTBRUVRSQmd5NHp2ZkNveXdBWkg0dWphYTBIcW43R2tkbUdVUC94RlZoWEIveGhmdHY5RUFBNwpMSW1CT3dZVHJ6ajRtS0JxZ3RTenhmVm5hN2J2U2oxV203bElYaTNaSkZzQmloSFlwaXIwdFZEelMzSGtEK2d0Cno1RkhOU0dnSS9MTlczOWloTU1RQ0g0ZFhtQVVueGFmdFdwUlRQOXFvSHJDWTZxVlNlbEVBYm40UWZVZ2ZUaDEKMUNhdW01bllOUjlDZ3lPOStNY0hXMTdYV0c4NGdGV3p6VUxPczVXbUo0VVY4RjdpdkVhMVJlM2Q3VkpKUEF6VwpCME4rWFFmcXg5UTArRWlXWklVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZBV0p0Y2RLYjRRQWU2ekw4NzdvN3FQNVVWNWZNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCYWt3bE1LL2VmWUpyNlVlWEpkenBURmRaS0lYbWFEaWxhZ3ZNOGNkci9nVjJlWVlEdgpRY3FvcUwvNS95U3Y1T2ZpR0MrU25nUXhZMHp0a0VVQm04N1NOR1dYLzd1VlUwbytVV2tzZERLR2JhRmxIVE9PCmFBR3dndEZ4T1YzeTF1WnZJVm8vbW12WTNIMTBSd29uUE8yMU5HMEtRWkRFSStjRXFFb1JoeDFtaERCeGVSMUgKZzdmblBJWTFUczhWM2w0SFpGZ015anpwVWtHeUNjMVYxTDk5Vk55UHJISEg0L1FibVM5UWdkNUNWZXNlRm9HaApOVkQ4ZHRjUmpWM2tGYVVJelJ6a3lRMG1FMXk1RXRXMWVZZnF4QnAxNUN3NnlSenNWMzcrdlNab0pSS1FoNGw4CjB1b084cFhCMGQ4V1hMNml0UWp2ZjJOQnBnOU1nY0Q2QzEvZgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.1.18:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin#kubernetes
current-context: kubernetes-admin#kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJYldUcHpDV25zTVl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURFeU1UQXhPVEUwTWpOYUZ3MHlNVEV5TVRBeE9URTBNalZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBGT09JcnZiTGd1SWJmVXUKd29BaG5SaktEQVFCdkp3TlliSWZkSlNGSFBhY1ljbmVUcUVSVXFZeEs4azFHRytoR0FDTlFPb2VNV3Q1anNjRwpuN0FFdHhscUJQUzNQMzBpMVhLSmZnY2Q1OXBxaG1kOVFIdFNOVTlUTVlaM2dtY0x4RGl1cXZFRGI0Q042UTl6CkI3Yk5iUDE4Y3pZdHVwbUJrY2plMFF1ZEd2dktHcWhaY1NkVFZMT3ErcTE0akM4TTM5UmgzdDk1ZEM2aWRYaUsKbWE3WGs5YnJtalJnWDZRVUJJc0xwTnkvc3dJaUFiUTlXSm1YL2VkdHhYTGpENllNK1JzQ0JkbGc5MEhhcURqdgpKSlcwQ2g4cDJkV1ZwalQrWjBMd2ZnUENBN1YzS1o4YWdldHhwQ0xQcmxlOTdnRStVM1BKbXJVY0lBaVJlbzFoCmsvOXVqUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVCWW0xeDBwdmhBQjdyTXZ6dnVqdW8vbFJYbDh3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFDeXVKazdjdVppdzhmQW5teUdSa0trdFAzUE5LUnBCdDVnUVdjUzJuRFUrTmpIMjh1MmpGUDQ5Cm1xbjY1SGpmQU9iOVREUUlRcUtZaWdjYTViOXFYRXlDWHZEN1k1SXJ4RmN3VnEvekdZenFYWjVkR0srUnlBUlQKdm0rQzNaTDV0N2hJc1RIYWJ0SkhTYzhBeFFPWEdTd1h0YkJvdHczd2ZuSXB0alY1SG1VYjNmeG9KQUU4S1hpTgpHcXZ5alhpZHUwc1RtckszOHM5ZjZzTFdyN1lOQTlKNEh4ditkNk15ZFpSWDhjS3VRaFQzNDFRcTVEVnRCT1BoCjBpb1Mwa0JEUDF1UWlIK0tuUE9MUmtnYXAyeDhjMkZzcFVEY1hJQlBHUDBPR1VGNWFMNnhIa2NsZ0Q5eHFkU0cKMVlGVjJUamtjNHN2U1hMSkt1cmU1S2IrODcyQlZWWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMEZPT0lydmJMZ3VJYmZVdXdvQWhuUmpLREFRQnZKd05ZYklmZEpTRkhQYWNZY25lClRxRVJVcVl4SzhrMUdHK2hHQUNOUU9vZU1XdDVqc2NHbjdBRXR4bHFCUFMzUDMwaTFYS0pmZ2NkNTlwcWhtZDkKUUh0U05VOVRNWVozZ21jTHhEaXVxdkVEYjRDTjZROXpCN2JOYlAxOGN6WXR1cG1Ca2NqZTBRdWRHdnZLR3FoWgpjU2RUVkxPcStxMTRqQzhNMzlSaDN0OTVkQzZpZFhpS21hN1hrOWJybWpSZ1g2UVVCSXNMcE55L3N3SWlBYlE5CldKbVgvZWR0eFhMakQ2WU0rUnNDQmRsZzkwSGFxRGp2SkpXMENoOHAyZFdWcGpUK1owTHdmZ1BDQTdWM0taOGEKZ2V0eHBDTFBybGU5N2dFK1UzUEptclVjSUFpUmVvMWhrLzl1alFJREFRQUJBb0lCQUEvclVxRTAyYnJiQnNIZwpTb0p5YUI4cEZjZDFSdXl5d0JNSEdZQS9HU3p0YTJYTmx6OUs3NWZ4T3pDdFgzRk9sbkRQR2Z3cjU4Sy9BN3IxCldudzVaeUxXdmxOQ24vNHFBYzl0d1RQd04walFWL09OVlBUb2Q0KzdVQkFveGxrZ3ByV0gzMUVRdWNKN2dGeWUKNFp0bFRLMVhjWHNjV01JNW1MMGJMR3V0QjRSWU5meHAwZ1AxekJ6Z2FLYjVGK2xVcFdHZ2w1dHNHay9ncm9uSwpUVkVCQmtBT0lyU0pFemc5YUJ2emJMS0h3TnZlL1QrVEdJTGVZalpRYVkxL1lLN2JpbFVkaFlQOGI2OWhxbFZnClVxc0hpRjVXNzYzenMrdXl5azNtUU1yblJKQ2ZUWDNTRWhOVm1BdTl0TXh2eE1BRk9QT1lLb3FPb25LNHdrZWwKU21HUHBnRUNnWUVBNjJhMjdWdlgrMVRlellIWmZWSW8rSi8welVmZERqZ0MvWG1zTlBWdkhXdXlkOUVRQ1JXKwpOS1FpOGdMWmNUSEpWU3RidkpRVENSQUdCL0wzM09SUTI5Tm1KNnVVUWNNR0pBSzhwckdLKytwTXF3NHRPdzMvCkhDblVQZGVaSGFVVVFnODVJeWMrbmg5QnFQWndXclk3REZEbENUOXI5cVZJN1RvS0ptd2RjdlVDZ1lFQTRvNVUKZDZXdFpjUk5vV041UUorZVJkSDRkb2daQnRjQ0ExTGNWRDdxUzYrd0s2eTdRU05zem9wWTc1MnlLWU91N2FCWQo2RlhDQVRHaG0ranN6ZE14allrV2ROdGxwbDZ4ejZRZmN6ZWgydjVUQVdpRkZyMTlqU1RkLzNrRlFDNytpeUQyCnZRSHpacXZZSUhtQ3VleldHRFJrVVB2dzk1dTFranphcEZCRHZqa0NnWUJXZUpLMXVra3FiOUN3V1FTVmZuckMKYWErNVFLNjVMR1ljeW5jeHRQNnVKZ09XODlzYUd6eVZoYjI0Zk1kM1J6eVg1cWQ2TEVLWno2TUhoSDc4UzNwUQpaZVZlcVM1NndiTWR3MHVkU0JhdjF5OTJubXlMQnVjeFowUXB1MnJwY3R4d0w3dGphR1VlSElrNEVkN1AwNlQ1Ckx6WVRJWkw5TlZZR25vMWY4OU1WaVFLQmdRQ2RKQjNnYzNGSEloYTZkM1cxNWtEd3FzZ001eTk4dUF0MFpMZmcKVTFkTnNnbWU4WXRjamdhOVorWnlKVTViVHpRNUxEd2V3c1R5OFFyb1NuSmQvVHZrc1E1N2RXWVhOSjFlcWJjSwp3cTZvYURrSXhBZDBFM0VQUW1BZEFFTXRGcXVGc3hLUlhOWUlBKysvN3FoRzc4ZzhON0xSSFQ4eGI3Wk1QWnRsCjF5cDF1UUtCZ0VGemtmR3VzeGxJU2xOY1VDUGFTUUx6bTZqYmdjdUlXcjZaN053R01pVHM3b2x5TnQrdnpiRnMKbnk5d1pnbHlsS0M2NjcreXpIa0tkbnZBdWRuS290bDhybzRCOVhjUHhGWDJ5NnpwZWIxWS91STZpVzl4Y2NSNQozbUlVS2QrOGdMczRrTUttL2dXYjZxTHdPZ3pjQWJIbTV6SVhBMXQ5TUJWYlE2ZHEvMlZDCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
"I need to generate Bearer Token for communicating with kubernetes API"
You cannot "generate" these tokens. They are issued by the control plane and signed with the private key that the control plane holds. It would be a security hole if you could generate these on the client side.
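For comparison, here is a sketch of what a token-based user entry looks like once the cluster has issued a token (for example a ServiceAccount token); the user name and the token value below are placeholders, not something that can be derived from the certificate data in the question:
users:
- name: cli-tool                          # hypothetical user entry
  user:
    # a token issued by the cluster, e.g. a ServiceAccount token; it cannot be computed client-side
    token: <token-issued-by-the-cluster>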

How to configure Kong API to communicate with another Spring microservice

I have just started with Kong API with One API.
I am able to run Kong API locally using its official Docker image.
On the other side, I have another Spring Boot microservice running locally inside the same Docker engine.
Problem: what configuration is needed in the Kong API YAML file so that I can connect to my Spring Boot microservice?
My Kong API YAML file:
services:
- name: control-service-integration
  url: http://localhost:8080/
  plugins:
  - name: oneapi
    config:
      edgemicro_proxy: edgemicro_demo_v0
      add_application_id_header: true
      authentication:
        apikey:
          header_name: "x-api-key"
      upstream_auth:
        basic_auth:
          username: username
          password: password
  routes:
  - name: control-service-route
    request_buffering: false
    response_buffering: false
    paths:
    - /edgemicro-demo-v0
From the Kong One API service I always get a 502 Bad Gateway error.
Let me know if any more information is required.
I found the solution for this. In the above YAML:
services:
- name: control-service-integration
  url: http://localhost:8080/
change the url value to http://host.docker.internal:8080/. After a lot of trial and error I am finally able to connect to my app, which is running on the host.
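In other words, the service entry ends up looking roughly like this (host.docker.internal resolves to the Docker host on Docker Desktop; on Linux an extra_hosts entry such as "host.docker.internal:host-gateway" may be needed):
services:
- name: control-service-integration
  # host.docker.internal points at the host machine instead of the Kong container itself
  url: http://host.docker.internal:8080/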

Keycloak: Invalid token issuer when running as a Docker service

I have a problem:
WWW-Authenticate Bearer realm="test", error="invalid_token", error_description="Invalid token issuer. Expected 'http://keycloak:8080/auth/realms/test', but was 'http://localhost:8080/auth/realms/test'"
My settings:
application.yml
keycloak:
  realm: test
  resource: api
  auth-server-url: http://keycloak:8080/auth
  ssl-required: external
  autodetect-bearer-only: true
  cors: true
  principal-attribute: preferred_username
  credentials:
    secret: 2b553733-8d5f-4276-8ace-17112ac7ac20
docker-compose.yml
keycloak:
  image: jboss/keycloak:10.0.0
  environment:
    - KEYCLOAK_USER=admin
    - KEYCLOAK_PASSWORD=admin
  ports:
    - "8080:8080"
  networks:
    - net
Auth url: http://localhost:8080/auth/realms/test/protocol/openid-connect/auth
Token url: http://localhost:8080/auth/realms/test/protocol/openid-connect/token
I understand why the problem exists, but I don't understand how to fix it.
Keycloak's Default Hostname Provider (https://www.keycloak.org/docs/latest/server_installation/#default-provider) has a property called frontendURL, which should be set to the public URL on which Keycloak is exposed.
Setting frontendURL ensures that all front-channel URLs, like the issuer and authorization_endpoint, use the configured value as the hostname, while back-channel URLs keep using the hostname from the request.
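A minimal sketch of how that could be wired into the docker-compose.yml from the question, assuming the jboss/keycloak image's KEYCLOAK_FRONTEND_URL variable is available in this image version and that http://keycloak:8080/auth is the issuer the tokens should carry (matching the adapter's auth-server-url):
keycloak:
  image: jboss/keycloak:10.0.0
  environment:
    - KEYCLOAK_USER=admin
    - KEYCLOAK_PASSWORD=admin
    # assumption: pins the issuer in tokens to this URL regardless of the hostname the request came in on
    - KEYCLOAK_FRONTEND_URL=http://keycloak:8080/auth
  ports:
    - "8080:8080"
  networks:
    - net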
I added 127.0.0.1 keycloak to the hosts file and used the http://keycloak:8080/auth/realms/*** URL to get the token. Now the JWT contained keycloak instead of localhost as the issuer. I verified the token using the jwt.io website. This resolved the token issuer mismatch.

Connect to a dynamically created new cluster on GKE

I am using the cloud.google.com/go SDK to programmatically provision the GKE clusters with the required configuration.
I set the ClientCertificateConfig.IssueClientCertificate = true (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#ClientCertificateConfig).
After the cluster is provisioned, I use the ca_certificate, client_key, and client_secret returned for that cluster (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#MasterAuth). Now that I have these three attributes, I try to generate the kubeconfig for this cluster (to be used later by Helm).
Roughly, my kubeconfig looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64_encoded_data>
    server: https://X.X.X.X
  name: gke_<project>_<location>_<name>
contexts:
- context:
    cluster: gke_<project>_<location>_<name>
    user: gke_<project>_<location>_<name>
  name: gke_<project>_<location>_<name>
current-context: gke_<project>_<location>_<name>
kind: Config
preferences: {}
users:
- name: gke_<project>_<location>_<name>
  user:
    client-certificate-data: <base64_encoded_data>
    client-key-data: <base64_encoded_data>
On running kubectl get nodes with the above config, I get the error:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list resource "serviceaccounts" in API group "" at the cluster scope
Interestingly, if I use the config generated by gcloud, the only difference is in the user section:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
This configuration seems to work just fine, but as soon as I add the client cert and client key data to it, it breaks:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
  client-certificate-data: <base64_encoded_data>
  client-key-data: <base64_encoded_data>
I believe I'm missing some details related to RBAC, but I'm not sure what. Will you be able to provide me with some info here?
Also, referring to this question, I've tried to rely only on a username-password combination first, using that to apply a new clusterrolebinding in the cluster. But I'm unable to use just the username-password approach. I get the following error:
error: You must be logged in to the server (Unauthorized)
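For reference, the kind of clusterrolebinding described above, binding the certificate's user (reported as "client" in the Forbidden error) to cluster-admin, would look roughly like this sketch; the binding name is hypothetical, and it has to be applied with credentials that already authenticate and hold RBAC permissions, such as the gcloud auth-provider config:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: client-cert-admin                  # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: client                             # the user name reported for the issued client certificate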