Spring config server unable to fetch properties from codecommit - spring-boot

I'm running into an issue using Kubernetes service accounts to grant access to a CodeCommit repository for a Spring config server.
When AWSCodeCommitReadOnly is granted to the EKS cluster.worker-node role, the config server is able to fetch the properties successfully; however, replicating this using service accounts causes the config server to throw the following error:
Cannot clone or checkout repository: https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/config-server-properties
A separate IAM role has been created with the CodeCommit policy, and this is being attached to a service account with the annotation:
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::accountnum:role/test-pod-iam-permissions
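For context, the full ServiceAccount manifest behind that annotation looks roughly like this (a sketch reusing the names already shown in this question):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-pod-iam-permissions   # assumed to match the subject in the role's trust condition below
  namespace: namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::accountnum:role/test-pod-iam-permissions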
The IAM role has a trusted entity for the EKS cluster and the following condition:
system:serviceaccount:namespace:test-pod-iam-permissions
We've also created a clusterrole which should have access to all verbs/resources:
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *          []                 []              [*]
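(The manifest behind that output would be roughly the following sketch; the name is taken from the binding shown next, and covering every API group is an assumption:)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: iam-permissions
rules:
  - apiGroups: ["*"]   # assumption: every API group was intended
    resources: ["*"]   # all resources
    verbs: ["*"]       # all verbs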
and an associated binding:
Name:         iam-permissions-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  iam-permissions
Subjects:
  Kind            Name                      Namespace
  ----            ----                      ---------
  ServiceAccount  test-pod-iam-permissions  namespace
Following this documentation https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html we seem to have ticked all the boxes, so we're not sure what could be missing.
The deployment has the service account added, and when we exec into the pod it shows the IAM role ARN:
$ kubectl exec -n namespace config-service-pod env | grep AWS
AWS_DEFAULT_REGION=eu-west-1
AWS_ROLE_ARN=arn:aws:iam::accountnum:role/test-pod-iam-permissions
AWS_REGION=eu-west-1
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
Could a separate serviceaccount be overriding the permissions we're trying to grant here?
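One way to rule that out is to check which service account the pod is actually running under:
$ kubectl get pod config-service-pod -n namespace -o jsonpath='{.spec.serviceAccountName}'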
We've updated the config server's pom to use 1.11.623 for aws-java-sdk-core and added a dependency on aws-java-sdk-sts.
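For reference, the added dependency looks roughly like this in the pom (a sketch; only the aws-java-sdk-core version is stated above, so using the same version for aws-java-sdk-sts is an assumption):
<!-- sketch: version assumed to match aws-java-sdk-core -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sts</artifactId>
    <version>1.11.623</version>
</dependency>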

Related

client-go get configMap issue

I am trying to create a simple Deployment on Kubernetes using client-go. Following the example, I am creating the inClusterConfig for the client. I have also created a Role and a RoleBinding for the deployment's ServiceAccount to get, list, and watch ConfigMap objects.
When using Client.Get(), I get:
Get "https://10.96.0.1:443/api/v1/namespaces/default/configmaps": Access Denied
I tried to exec into the pod; using curl against https://10.96.0.1:443/api/v1/namespaces/default/configmaps with the token mounted at /var/run/secrets/..../token, I was able to get the ConfigMaps.
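Roughly, that manual check looked like this (a sketch; the standard token mount path is assumed):
# sketch: assumes the default in-cluster API address and the standard token mount path
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://10.96.0.1:443/api/v1/namespaces/default/configmaps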
Any ideas?
I have also created the corresponding Role and RoleBinding:
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: got-dynamic-cm-reader-Role
subjects:
- kind: ServiceAccount
  name: got
  namespace: default
In the example you mentioned, the service account that was created has only a cluster viewer role, which is insufficient if you are trying to create resources.
Try binding an admin role to the service account. For production purposes, though, you should take a more granular approach when adding permissions to the service account.
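For example, something along these lines (the service account name and namespace are taken from the RoleBinding above; the binding name is just an illustration):
# grants the built-in "admin" ClusterRole to the "got" service account in the "default" namespace
kubectl create rolebinding got-admin-binding \
  --clusterrole=admin \
  --serviceaccount=default:got \
  --namespace=default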

Connect to dynamically created cluster on GKE

I am using the cloud.google.com/go SDK to programmatically provision the GKE clusters with the required configuration.
I set the ClientCertificateConfig.IssueClientCertificate = true (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#ClientCertificateConfig).
After the cluster is provisioned, I use the ca_certificate, client_key, client_secret returned for the same cluster (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#MasterAuth). Now that I have the above 3 attributes, I try to generate the kubeconfig for this cluster (to be later used by helm)
Roughly, my kubeconfig looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64_encoded_data>
    server: https://X.X.X.X
  name: gke_<project>_<location>_<name>
contexts:
- context:
    cluster: gke_<project>_<location>_<name>
    user: gke_<project>_<location>_<name>
  name: gke_<project>_<location>_<name>
current-context: gke_<project>_<location>_<name>
kind: Config
preferences: {}
users:
- name: gke_<project>_<location>_<name>
  user:
    client-certificate-data: <base64_encoded_data>
    client-key-data: <base64_encoded_data>
On running kubectl get nodes with the above config, I get the error:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list resource "serviceaccounts" in API group "" at the cluster scope
Interestingly if I use the config generated by gcloud, the only change is in the user section:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
This configuration seems to work just fine. But as soon as I add client cert and client key data to it, it breaks:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
  client-certificate-data: <base64_encoded_data>
  client-key-data: <base64_encoded_data>
I believe I'm missing some details related to RBAC, but I'm not sure what. Could you provide some info here?
Also, referring to this question, I first tried to rely only on a username-password combination, using that to apply a new ClusterRoleBinding in the cluster. But I'm unable to use just the username-password approach; I get the following error:
error: You must be logged in to the server (Unauthorized)
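For reference, the kind of binding I was trying to apply looks roughly like this (a sketch; treating the user name client from the Forbidden error above as the certificate's CN is an assumption):
# sketch: binds cluster-admin to the certificate user; "client" is assumed from the error above
kubectl create clusterrolebinding client-cert-admin \
  --clusterrole=cluster-admin \
  --user=client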

Kubernetes Endpoint created for Kafka but not reflecting in POD

In my Kubernetes cluster I have created an Endpoint pointing to my Kafka cluster. The Endpoint was created successfully.
Name - kafka
Endpoint - X.X.X.X:9092
In my Spring Boot application's deployment YAML I have an environment variable BROKER_IP. For this environment variable I have set:
env:
  - name: BROKER_IP
    value: kafka
The Pod is in an Error state. In my bootstrap-server I am getting kafka and not the actual Endpoint that was created. Any thoughts?
UPDATE - I just tried kafka:9092 and it worked. So I'm wondering: does the Endpoint map to the IP only and not the port? Is my understanding correct?
Is it possible that you forgot to create the Service object matching the Endpoints? Because you are providing the IP-port pairs yourself, the Service would need to be selectorless.
This works for me:
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
  - addresses: [{ip: "1.2.3.4"}]
    ports: [{port: 9092}]
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  ports: [{port: 9092}]
Testing it:
$ kubectl run kafka-dns-test --image=busybox --attach --rm --restart=Never -- nslookup kafka
If you don't see a command prompt, try pressing enter.
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: kafka.default.svc.cluster.local
Address: 10.96.220.40
Successful lookup, ignore extra *** Can't find xxx: No answer messages
Also, because there is a Service object you get some environment variables in your Pods (without having to declare them):
KAFKA_PORT='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP_ADDR='10.96.220.40'
KAFKA_PORT_9092_TCP_PORT='9092'
KAFKA_PORT_9092_TCP_PROTO='tcp'
KAFKA_SERVICE_HOST='10.96.220.40'
KAFKA_SERVICE_PORT='9092'
But the most flexible way to use a Service is still to use the DNS name (kafka in this case).
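For the deployment in the question, that could look something like this (a sketch; the default namespace is an assumption):
env:
  - name: BROKER_IP
    # sketch: service DNS name plus port; the "default" namespace is an assumption
    value: "kafka.default.svc.cluster.local:9092"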

Connecting jaeger with elasticsearch backend storage on kubernetes cluster

I have a Kubernetes cluster on Google Cloud Platform, and on it I have a Jaeger deployment created via the development setup of the jaeger-kubernetes templates.
Because my goal is to set up Elasticsearch as the backend storage, I followed the jaeger-kubernetes GitHub documentation with the following actions.
I've created the services via the production setup options.
This configures the URLs, username, password, and ports used to access the Elasticsearch server:
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/configmap.yml
And this configures the Docker images for the Elasticsearch service and their volume mounts:
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/elasticsearch.yml
At this point we have an Elasticsearch service running on ports 9200 and 9300:
$ kubectl get service elasticsearch
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   1h
I then created the Jaeger components using the jaeger-kubernetes production templates, like this:
$ kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/jaeger-production-template.yml
deployment.extensions/jaeger-collector created
service/jaeger-collector created
service/zipkin created
deployment.extensions/jaeger-query created
service/jaeger-query created
daemonset.extensions/jaeger-agent created
According to the Jaeger architecture, the jaeger-collector and jaeger-query services require access to backend storage.
These are the services running on my Kubernetes cluster:
$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                        AGE
elasticsearch      ClusterIP      None            <none>           9200/TCP,9300/TCP              3h
jaeger-collector   ClusterIP      10.55.253.240   <none>           14267/TCP,14268/TCP,9411/TCP   3h
jaeger-query       LoadBalancer   10.55.248.243   35.228.179.167   80:30398/TCP                   3h
kubernetes         ClusterIP      10.55.240.1     <none>           443/TCP                        3h
zipkin             ClusterIP      10.55.240.60    <none>           9411/TCP                       3h
I then went to edit the jaeger-configuration ConfigMap via the kubectl edit configmap jaeger-configuration command, in order to adjust it for the Elasticsearch URL endpoints (maybe? At this moment I am supposing that this is the next step...).
I executed:
$ kubectl edit configmap jaeger-configuration
And I got the following entry to edit:
apiVersion: v1
data:
  agent: |
    collector:
      host-port: "jaeger-collector:14267"
  collector: |
    es:
      server-urls: http://elasticsearch:9200
      username: elastic
      password: changeme
    collector:
      zipkin:
        http-port: 9411
  query: |
    es:
      server-urls: http://elasticsearch:9200
      username: elastic
      password: changeme
  span-storage-type: elasticsearch
kind: ConfigMap
metadata:
  creationTimestamp: "2018-12-27T13:24:11Z"
  labels:
    app: jaeger
    jaeger-infra: configuration
  name: jaeger-configuration
  namespace: default
  resourceVersion: "1387"
  selfLink: /api/v1/namespaces/default/configmaps/jaeger-configuration
  uid: b28eb5f4-09da-11e9-9f1e-42010aa60002
Here ... do I need to set up our own URLs for the collector and query services, which will connect with the Elasticsearch backend service?
How can I set up the Elasticsearch IP address or URLs here?
Among the Jaeger components, the query and collector need access to storage, but I don't know what the Elasticsearch endpoint is ...
Is server-urls: http://elasticsearch:9200 a correct endpoint?
I am just starting in the Kubernetes and DevOps world, and I would appreciate it if someone could help me with the concepts and point me in the right direction to set up Jaeger with Elasticsearch as the backend storage.
When you are accessing the service from a pod in the same namespace, you can use just the service name.
Example:
http://elasticsearch:9200
If you are accessing the service from a pod in a different namespace, you should also specify the namespace.
Example:
http://elasticsearch.mynamespace:9200
http://elasticsearch.mynamespace.svc.cluster.local:9200
To check in what namespace the service is located, use the following command:
kubectl get svc --all-namespaces -o wide
Note: Changing a ConfigMap does not apply it to a Deployment instantly. Usually, you need to restart all pods in the Deployment to pick up the new ConfigMap values. There is no rolling-restart functionality at the moment, but you can use the following command as a workaround:
(replace the deployment name and the container name, my-pod-name in the example, with the real ones)
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-pod-name","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}'
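For example, if elasticsearch lived in a different namespace than Jaeger (say logging, which is only an illustration), the collector section of the ConfigMap would point at the namespace-qualified name:
collector: |
  es:
    # sketch: the "logging" namespace is an assumption for illustration
    server-urls: http://elasticsearch.logging.svc.cluster.local:9200
    username: elastic
    password: changeme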

Fail to connect to kubectl from client-go - /serviceaccount/token: no such file

I am using the Golang library client-go to connect to a locally running Kubernetes. To start with, I took code from the example: out-of-cluster-client-configuration.
Running the code like this:
$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go
results in the following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
I am not quite sure which part of the configuration I am missing. I've researched the following links:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
But with no luck.
I guess I need to either let client-go know which token/ServiceAccount to use, or configure kubectl in a way that everyone can connect to its API.
Here's the status of my kubectl setup, shown through some command results:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get serviceAccounts
NAME        SECRETS   AGE
default     1         3d
test-user   1         1d
$ kubectl describe serviceaccount test-user
Name:                test-user
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   test-user-token-hxcsk
Tokens:              test-user-token-hxcsk
Events:              <none>
$ kubectl get secret test-user-token-hxcsk -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: test-user
    kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
  creationTimestamp: 2018-06-09T10:55:17Z
  name: test-user-token-hxcsk
  namespace: default
  resourceVersion: "110618"
  selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
  uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
This answer could be a little outdated but I will try to give more perspective/baseline for future readers that encounter the same/similar problem.
TL;DR
The following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
is most likely connected with the lack of a token at the /var/run/secrets/kubernetes.io/serviceaccount location when using in-cluster-client-configuration. It could also be related to using in-cluster-client-configuration code outside of the cluster (for example, running this code directly on a laptop or in a plain Docker container).
You can use the following commands to troubleshoot your issue further (assuming this code is running inside a Pod):
$ kubectl get serviceaccount X -o yaml
  look for: automountServiceAccountToken: false
$ kubectl describe pod XYZ
  look for: containers.mounts and volumeMounts where the Secret is mounted
Citing the official documentation:
Authenticating inside the cluster
This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running inside the Kubernetes cluster.
client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.
-- Github.com: Kubernetes: client-go: Examples: in cluster client configuration
If you are authenticating to the Kubernetes API with ~/.kube/config you should be using the out-of-cluster-client-configuration.
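For example, the out-of-cluster example exposes a -kubeconfig flag, so running it against a local cluster would look roughly like this (a sketch, assuming the example's default flag name):
# sketch: the -kubeconfig flag name is taken from the upstream client-go example
$ go run ./main.go -kubeconfig=$HOME/.kube/config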
Additional information:
I've added some additional information as a reference for further troubleshooting when the code is run inside a Pod.
automountServiceAccountToken: false
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: go-serviceaccount
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: sdk
spec:
  serviceAccountName: go-serviceaccount
  automountServiceAccountToken: false
-- Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account
$ kubectl describe pod XYZ:
When the ServiceAccount token is mounted, the Pod definition should look like this:
<-- OMITTED -->
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
  go-serviceaccount-token-4rst8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  go-serviceaccount-token-4rst8
    Optional:    false
If it's not:
<-- OMITTED -->
Mounts: <none>
<-- OMITTED -->
Volumes: <none>
Additional resources:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication
Just to make it clear, in case it helps you further debug it: the problem has nothing to do with Go or your code, and everything to do with the Kubernetes node not being able to get a token from the Kubernetes master.
In kubectl config view, clusters.cluster.server should probably point at an IP address that the node can reach.
It needs to access the CA, i.e. the master, in order to provide that token, and I'm guessing it fails to do so for that reason.
kubectl describe pod <your_pod_name> would probably tell you what the problem was in acquiring the token.
Since you assumed the problem was Go/your code and focused on that, you neglected to provide more information about your Kubernetes setup, which makes it more difficult for me to give you a better answer than my guess above ;-)
But I hope it helps!
