I am trying to set up Kubernetes on AWS EC2. There is a wonderful write-up on this site:
https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2
I used the following config file for kubeadm init:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: "aws"
controllerManager:
  extraArgs:
    cloud-provider: "aws"
I got an error message saying that "aws" has been deprecated.
I used the same file with the cloud-provider changed to "openstack".
I am still getting errors; it looks like newer versions of Kubernetes need another parameter, cloud-config, which points to the cloud provider configuration.
Can anyone help me with how this needs to be done, and how I can successfully configure a Kubernetes cluster on EC2?
Referring to the guide linked below, you can use the following config as a template. (The example targets OpenStack; for AWS the structure is the same, with the provider name and the contents of cloud.conf swapped.)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
  extraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
Then you can run kubeadm init --config=kubeadm-config.yml
https://kubernetes.io/blog/2020/02/07/deploying-external-openstack-cloud-provider-with-kubeadm/
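For comparison, here is a minimal sketch of the same structure adapted back to the legacy in-tree AWS provider the question started with (a sketch only; with the in-tree "aws" provider a separate cloud.conf is usually not required):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"        # the kubelet needs the provider flag as well
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: "aws"
controllerManager:
  extraArgs:
    cloud-provider: "aws"

The in-tree provider additionally expects the EC2 instances to carry an appropriate IAM instance profile and the kubernetes.io/cluster/<cluster-name> resource tags.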
You could avoid all of this if you just used kops to install Kubernetes on AWS.
Based on the suggestion provided, I installed the Kubernetes cluster using kops. The full instructions can be found here.
Related
I have a Spring Boot application with embedded Hazelcast that I am trying to deploy on a shared Kubernetes platform. I want to use the Kubernetes API strategy for auto-discovery. Can I do this without creating ClusterRoles and ClusterRoleBindings, and instead have just a Role and RoleBinding created under my namespace? If yes, what would the rbac.yaml look like?
I tried creating the following Role and RoleBinding, but no auto-discovery so far.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hazelcast-role
  namespace: dev
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - pods
  - nodes
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hazelcast-role-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hazelcast-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: dev
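For context, the Kubernetes API discovery mode is typically enabled on the Hazelcast side with a join configuration along these lines (a sketch only; hazelcast-service is a placeholder for the Service fronting the Hazelcast pods, and an embedded Spring Boot setup may configure the same thing programmatically instead):

hazelcast:
  network:
    join:
      multicast:
        enabled: false                    # disable multicast so the Kubernetes strategy is used
      kubernetes:
        enabled: true
        namespace: dev
        service-name: hazelcast-service   # placeholder service name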
I'm trying to externalize my Spring Boot configuration using ConfigMaps in Kubernetes. I've read the docs and added the dependency to my pom.xml:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-fabric8-config</artifactId>
    <version>2.1.3</version>
</dependency>
I set my spring.application.name to webapp and created a ConfigMap from a YAML file:
spring:
  web:
    locale: en_US
    locale-resolver: fixed
Using this command:
kubectl create configmap webapp \
  --namespace webapp-production \
  --from-file=config.yaml
But when my application starts I get the following error:
Can't read configMap with name: [webapp] in namespace: [webapp-production]. Ignoring.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://IP/api/v1/namespaces/webapp-production/configmaps/webapp. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. configmaps "webapp" is forbidden: User "system:serviceaccount:webapp-production:default" cannot get resource "configmaps" in API group "" in the namespace "webapp-production".
I couldn't find any more info in the docs on how to configure access other than this:
You should check the security configuration section. To access config maps from inside a pod you need to have the correct Kubernetes service accounts, roles and role bindings.
How can I grant the required permissions?
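One way to confirm exactly which permission is missing is to impersonate the service account named in the error message (a quick check, not a fix):

kubectl auth can-i get configmaps \
  --namespace webapp-production \
  --as system:serviceaccount:webapp-production:default

This prints "no" until a suitable Role/RoleBinding or ClusterRole/ClusterRoleBinding is in place.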
I finally solved it by creating a dedicated ServiceAccount and setting spec.serviceAccountName in the Deployment template:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-service-account
  namespace: webapp-production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: webapp-cluster-role
  namespace: webapp-production
# Grant access to configmaps for external configuration
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: webapp-cluster-role-binding
roleRef:
  kind: ClusterRole
  name: webapp-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: webapp-service-account
  namespace: webapp-production
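For completeness, the Deployment template then has to reference that ServiceAccount; a sketch (the image and labels are illustrative, only serviceAccountName matters here):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: webapp-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      serviceAccountName: webapp-service-account   # the ServiceAccount created above
      containers:
      - name: webapp
        image: webapp:latest                       # illustrative image
        ports:
        - containerPort: 8080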
Problem Statement:
I have deployed a Spring Boot app in GKE under a namespace.
When the app starts, it uses the default GCE service account credentials to authenticate.
What I did was create a GKE (Kubernetes) service account, bind it to a Google service account with an IAM policy binding, and add the Workload Identity User role.
Then I annotated the GKE service account by executing the two commands below.
The issue is that my Spring Boot app still uses the default GCE service account credentials.
Can someone please help me resolve this?
I can see that serviceAccountName is changed to the new GKE Kubernetes service account and the secret is also created and mounted, but the deployed app is not using this GKE service account.
Note: I am using a Helm chart for deployment.
gcloud iam service-accounts add-iam-policy-binding \
  --member serviceAccount:{projectID}.svc.id.goog[default/{k8sServiceAccount}] \
  --role roles/iam.workloadIdentityUser \
  {googleServiceAccount}

kubectl annotate serviceaccount \
  --namespace default \
  {k8sServiceAccount} \
  iam.gke.io/gcp-service-account={googleServiceAccount}
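Since the deployment goes through a Helm chart, the same annotation can also be declared on the Kubernetes ServiceAccount manifest itself (a sketch; the placeholders mirror the commands above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: {k8sServiceAccount}
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: {googleServiceAccount}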
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: helloworld
    appVersion: {{ .Values.appVersion }}
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
        environment: {{ .Values.environment }}
    spec:
      serviceAccountName: {{ .Values.serviceAccountName }}
      containers:
      - name: helloworld
        image: {{ .Values.imageSha }}
        imagePullPolicy: Always
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 1000
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_CONFIG_LOCATION
          value: "/app/deployments/config/"
        volumeMounts:
        - name: application-config
          mountPath: "/app/deployments/config"
          readOnly: true
      volumes:
      - name: application-config
        configMap:
          name: {{ .Values.configMapName }}
          items:
          - key: application.properties
            path: application.properties
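A quick way to verify which Google identity the pod actually resolves to is to query the metadata server from inside the pod (a sketch; replace the pod name, and it assumes curl is available in the image):

kubectl exec -it <helloworld-pod-name> -- curl -s \
  -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email

If this still prints the Compute Engine default service account rather than {googleServiceAccount}, Workload Identity is not taking effect for that pod (for example, the annotated service account may live in a different namespace than the Deployment, or the node pool may not have the GKE metadata server enabled).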
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been automatically set.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster [1]. The API permissions of the service account depend on the authorization plugin and policy [2] in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...
The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
[1] https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
[2] https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules
I set up a K8s cluster on EC2 and launched the Kubernetes dashboard by following these links:
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/wiki/Access-control
Here are commands I ran:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Since I only allow a few IPs in the security group for the EC2 instances, I assume only those IPs can access the dashboard, so security is not a concern here.
When I try to access the dashboard using:
http://<My_IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Now what's the easiest way to access the dashboard?
I noticed there are several related questions, but it seems no one has really answered them.
Thanks a lot.
P.S. I just noticed some errors in the Dashboard log. Is something wrong with the running dashboard?
You can use a Service of type LoadBalancer and set loadBalancerSourceRanges to limit access to your dashboard.
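A minimal sketch, assuming the label and HTTPS port used by the recommended v1.10 dashboard deployment (the CIDR is a placeholder for your allowed range):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-lb
  namespace: kube-system
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24              # placeholder: CIDRs allowed to reach the dashboard
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443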
Have you created the ClusterRoleBinding for the kubernetes-dashboard service account? If not, apply the YAML below so that the service account gets the cluster-admin role and can access all Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
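Once the binding is in place, the bearer token for the dashboard login screen can be read from the service account's generated secret (a sketch; the exact secret name suffix is generated by the cluster):

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}')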
It depends on which ServiceAccount or User you are using to connect to the kube-apiserver. If you want access without worrying about policy details, i.e. literally granting access to everything, your RBAC file can look similar to this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: my-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <your-user-from-your-~/.kube/config>
Then apply it with:
kubectl apply -f <filename>
Second approach:
kubectl create clusterrolebinding my-cluster-admin --clusterrole=cluster-admin --user=<your-user-from-your-~/.kube/config>
You can also use a Group or a ServiceAccount in the subject field. See the official documentation about RBAC Authorization here.
There is also a great tutorial I found if you want to take it step by step.
I am using the Go client library client-go to connect to a locally running Kubernetes cluster. To start with, I took the code from the example out-of-cluster-client-configuration.
Running the code like this:
$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go
results in the following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
I am not quite sure which part of the configuration I am missing. I've researched the following links:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
But with no luck.
I guess I need to either let client-go know which token/ServiceAccount to use, or configure kubectl so that everyone can connect to its API.
Here is the state of my kubectl setup, shown by a few command outputs:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get serviceAccounts
NAME        SECRETS   AGE
default     1         3d
test-user   1         1d
$ kubectl describe serviceaccount test-user
Name:                test-user
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   test-user-token-hxcsk
Tokens:              test-user-token-hxcsk
Events:              <none>
$ kubectl get secret test-user-token-hxcsk -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: test-user
    kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
  creationTimestamp: 2018-06-09T10:55:17Z
  name: test-user-token-hxcsk
  namespace: default
  resourceVersion: "110618"
  selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
  uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
This answer may be a little outdated, but I will try to give more perspective and a baseline for future readers who encounter the same or a similar problem.
TL;DR
The following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
is most likely caused by a missing token at the /var/run/secrets/kubernetes.io/serviceaccount location when using in-cluster-client-configuration. It can also happen when in-cluster-client-configuration code is run outside of a cluster (for example, directly on a laptop or in a plain Docker container).
You can check following commands to troubleshoot your issue further (assuming this code is running inside a Pod):
$ kubectl get serviceaccount X -o yaml:
look for: automountServiceAccountToken: false
$ kubectl describe pod XYZ
look for: containers.mounts and volumeMounts where Secret is mounted
Citing the official documentation:
Authenticating inside the cluster
This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running inside the Kubernetes cluster.
client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.
-- Github.com: Kubernetes: client-go: Examples: in cluster client configuration
If you are authenticating to the Kubernetes API with ~/.kube/config you should be using the out-of-cluster-client-configuration.
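For the out-of-cluster case, the client-go example defines a -kubeconfig flag that defaults to the kubeconfig in your home directory, so it can be run against the docker-for-desktop cluster like this (a sketch, without the KUBERNETES_SERVICE_* environment variables):

go run ./main.go -kubeconfig=$HOME/.kube/config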
Additional information:
I've added additional information for more reference on further troubleshooting when the code is run inside of a Pod.
automountServiceAccountToken: false
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: go-serviceaccount
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: sdk
spec:
  serviceAccountName: go-serviceaccount
  automountServiceAccountToken: false
-- Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account
$ kubectl describe pod XYZ:
When the ServiceAccount token is mounted, the Pod's describe output should look like this:
<-- OMITTED -->
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
  go-serviceaccount-token-4rst8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  go-serviceaccount-token-4rst8
    Optional:    false
If it's not:
<-- OMITTED -->
Mounts: <none>
<-- OMITTED -->
Volumes: <none>
Additional resources:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication
Just to make it clear, in case it helps you debug it further: the problem has nothing to do with Go or your code, and everything to do with the Kubernetes node not being able to get a token from the Kubernetes master.
In kubectl config view, clusters.cluster.server should probably point at an IP address that the node can reach.
It needs to access the CA, i.e., the master, in order to provide that token, and I'm guessing it fails to for that reason.
kubectl describe pod <your_pod_name> would probably tell you what the problem was in acquiring the token.
Since you assumed the problem was Go/your code and focused on that, you neglected to provide more information about your Kubernetes setup, which makes it more difficult for me to give you a better answer than my guess above ;-)
But I hope it helps!