Unable to access Kubernetes dashboard via token
I have set up a Kubernetes cluster using kubeadm v1.8.5.
I set up the dashboard using:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.0/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard-admin.rbac.yaml
Then I started kubectl proxy and used http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ as recommended.
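For reference, the proxy itself was started with the default command (the second line is the output it typically prints):
kubectl proxy
Starting to serve on 127.0.0.1:8001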
I then tried to log in using the kubernetes-dashboard-admin token, which I retrieved with the command:
kubectl -n kube-system get secret | grep -i dashboard-admin | awk '{print $1}' | xargs -I {} kubectl -n kube-system describe secret {}
Here is my problem: I CANNOT access the dashboard via the token. When I paste the token and click the "Sign in" button, nothing happens, and I see nothing in my logs (using tail -f /var/log/messages and journalctl -xeu kubelet). I am new to k8s; maybe someone could tell me where the relevant log is?
Here are my k8s cluster-info:
[root@k8s-1 pki]# kubectl cluster-info
Kubernetes master is running at https://172.16.1.15:6443
KubeDNS is running at https://172.16.1.15:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://172.16.1.15:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-1 pki]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 4d v1.8.5
k8s-2 Ready <none> 4d v1.8.5
k8s-3 Ready <none> 4d v1.8.5
[root@k8s-1 pki]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-1 1/1 Running 2 4d
kube-system kube-apiserver-k8s-1 1/1 Running 2 4d
kube-system kube-controller-manager-k8s-1 1/1 Running 1 4d
kube-system kube-dns-545bc4bfd4-94vxx 3/3 Running 3 4d
kube-system kube-flannel-ds-97frd 1/1 Running 2 4d
kube-system kube-flannel-ds-bl9tp 1/1 Running 2 4d
kube-system kube-flannel-ds-bn9hp 1/1 Running 1 4d
kube-system kube-proxy-9ncdm 1/1 Running 0 4d
kube-system kube-proxy-qjm9k 1/1 Running 1 4d
kube-system kube-proxy-rknz4 1/1 Running 0 4d
kube-system kube-scheduler-k8s-1 1/1 Running 2 4d
kube-system kubernetes-dashboard-7486b894c6-tszq9 1/1 Running 0 2h
The kubernetes-dashboard-admin.rbac.yaml is:
[root@k8s-1 dashboards]# cat kubernetes-dashboard-admin.rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
Any suggestions? Thank you!!!
Try connecting over HTTPS; I had the same problem and this worked for me.
From the Kubernetes Dashboard documentation:
NOTE: Dashboard should not be exposed publicly using kubectl proxy
command as it only allows HTTP connection. For domains other than
localhost and 127.0.0.1 it will not be possible to sign in. Nothing
will happen after clicking Sign in button on login page. Logging in is
only available when accessing Dashboard over HTTPS or when domain is
either localhost or 127.0.0.1. It's done this way for security
reasons. Closing as this works as intended.
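One way to reach the dashboard over HTTPS (a sketch, not from the original answer; it assumes the kube-system service name from the v1.8 manifest above, and forwarding to a Service needs a reasonably recent kubectl, otherwise forward to the dashboard pod instead):
kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443
Then open https://localhost:8443/ in the browser and sign in with the token.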
Try the token from this command's output:
kubectl -n kube-system get secret |grep kubernetes-dashboard-token |cut -f1 -d ' ' | xargs kubectl -n kube-system describe secret
If that doesn't work, try logging in with the token from this command's output:
kubectl -n kube-system get secret |grep namespace-controller-token |cut -f1 -d ' ' | xargs kubectl -n kube-system describe secret
Good luck..
You should create an admin user first and add the cluster-admin clusterrolebinding to it:
Use these files admin-user.yaml and admin-user-clusterrolebinding.yaml to create the admin user with the cluster-admin clusterrolebinding:
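The contents of those two files are not shown in the answer; a minimal sketch of what they would typically contain (names assumed to match the commands below) is:
# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
and:
# admin-user-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system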
[root@k8s-1 kubernetes-via-kubeadm]# kubectl create -f admin-user.yaml
serviceaccount "admin-user" created
[root@k8s-1 kubernetes-via-kubeadm]# kubectl create -f admin-user-clusterrolebinding.yaml
clusterrolebinding "admin-user" created
To get the token for this admin-user:
[root@k8s-1 kubernetes-via-kubeadm]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep ^token: | sed 's/token:[ ]*/Token:\n/'
Token:
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLW1oNzIyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwNWM0ZDZmZC0yZjYyLTExZTgtYTMxNi1jMDNmZDU2MmJiNzciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.butKxegADx3JQvKpn9Prf7RL_SoxaEyi_scYOvXurm4BAwEj8zfC9a7djqQ9mBtd5cQHlljvMb-3qFc6UPOzAwR8fc5khk-nAkH-5XeahpT8WsyxMcKxqLuyAg8gh4ZtMKvBPk9kOWDtyRBzAeGkisbLxr43ecKO71F5G8D7HR2UGSm-x4Pvhq0uqj8GyIcHw902Ti92BPuBRf-SyTl8uDCQJSDkS5Tru5w0p82borNuVXd1mmDwuI87ApQrqXTY9rbJ61m8iTr0kKJBqw5bHAUAhxwAVtVEKQNNKT6cxWp1FlhHbNkM9bhcj1qj8bN1QCMjPWlWKj7NkPbbBAJthQ
You can use the token to login to your kubernetes-dashboard.
I faced this problem recently after upgrading k8s to 1.16. Previously I could access the dashboard locally without any login, but after that upgrade it started to show the login page first, and even though I used a valid token it didn't let me into the dashboard (there was no response; the page was just stuck).
To resolve the issue, I first removed the dashboard-related resources:
kubectl delete clusterrolebinding kubernetes-dashboard
Then I deployed the newest dashboard version with the following command.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
After the above steps, I ran the proxy command again, and this time entering the token brought up the dashboard page.
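Note that the v2.0.0 manifest deploys the dashboard into its own kubernetes-dashboard namespace, so (assuming that default layout) the token lookup and the proxy URL change accordingly:
kubectl -n kubernetes-dashboard get secret | grep kubernetes-dashboard-token | awk '{print $1}' | xargs kubectl -n kubernetes-dashboard describe secret
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/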
Edit: If you get a cluster-role-related error and can't open any actual content, you may need to run the following commands:
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard --user=clusterUser
Run the two commands below:
sym@symserver:~/Downloads$ token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
sym@symserver:~/Downloads$ microk8s.kubectl -n kube-system describe secret $token
The commands above will print a token for dashboard access.
You should be able to access and log in to the dashboard at its assigned cluster IP address. To get the cluster IP, just execute
kubectl get svc -n kube-system kubernetes-dashboard
and point your browser to this address (https).
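Illustrative output (the cluster IP shown here is an assumption; yours will differ):
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.101.32.218   <none>        443/TCP   4d
With that output you would browse to https://10.101.32.218/.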
From my other answer:
Get the service token
$ kubectl -o json get secret k8s-dashboard-sa-token-XXXXX | jq -r '.data.token' | base64 -d
eyJhbGci ... sjcuNA8w
Related
How to use a cURL response in Kubernetes CronJobs
We have a set of Spring Boot applications deployed in the Kubernetes cluster. For a few of them we have designed cron jobs, triggered at the required frequency, which hit a specific internal API we have developed; this works fine. The requirement is to generate a token using the API and pass the generated token as authentication. To do this I am thinking of hitting the API using curl, but I am not sure how I can take the token from the curl response and pass it as an Authorization header in the subsequent curl request. Along with the token, I also need to send API keys that we have, and I am looking for a way to store them in an encrypted way and decode them in the cron job before making the API call.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: api-cron
spec:
  # At 08:00 on Tuesday
  schedule: "0 08 * * 2"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - command: ["/bin/sh", "-c"]
            env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            args:
            - |
              curl -s -i \
                -X POST "http://application.$POD_NAMESPACE.svc.cluster.local/endpoint" \
                -d ""
            name: curl
            image: curlimages/curl:7.80.0
Write a shell script and run it as the container CMD or entrypoint, and include the logic in the shell script.
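A minimal sketch of such a script (the /token path, the X-Api-Key header, and the access_token field in the JSON response are assumptions for illustration; the API key itself can be injected from a Kubernetes Secret as an environment variable):
#!/bin/sh
# Fetch a token from the auth endpoint, then call the API with it.
# POD_NAMESPACE comes from the downward API env var in the CronJob spec;
# API_KEY is assumed to be injected from a Secret via secretKeyRef.
set -e
TOKEN=$(curl -s -X POST "http://application.$POD_NAMESPACE.svc.cluster.local/token" \
  -H "X-Api-Key: $API_KEY" | sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p')
curl -s -i -X POST "http://application.$POD_NAMESPACE.svc.cluster.local/endpoint" \
  -H "Authorization: Bearer $TOKEN" \
  -d ""
The script can be baked into the image or mounted from a ConfigMap and set as the container command.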
Accessing environment variables from a pod
I wrote a golang program which fetches values from environment variables set in my system using export VAR_NAME=somevalue:
cloudType = os.Getenv("CLOUD_TYPE")
clusterRegion = os.Getenv("CLUSTER_REGION")
clusterType = os.Getenv("CLUSTER_TYPE")
clusterName = os.Getenv("CLUSTER_NAME")
clusterID = os.Getenv("CLUSTER_ID")
As shown above, my program tries to fetch values from env vars set in the system using os.Getenv. The program works fine when I run it directly and it fetches the values from the environment variables. But when I built an image and ran it inside a pod, it was not able to fetch the values; it returns empty values. Is there any way to access the local env vars from the pod?
Make a yaml file like this to define a ConfigMap:
apiVersion: v1
data:
  CLOUD_TYPE: "$CLOUD_TYPE"
  CLUSTER_REGION: "$CLUSTER_REGION"
  CLUSTER_TYPE: "$CLUSTER_TYPE"
  CLUSTER_NAME: "$CLUSTER_NAME"
  CLUSTER_ID: "$CLUSTER_ID"
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: foo
Ensure your config vars are set, then apply it to your cluster, with env substitution first:
envsubst < foo.yaml | kubectl apply -f -
Then in the pod definition use the config map:
spec:
  containers:
  - name: mypod
    envFrom:
    - configMapRef:
        name: foo
It seems you did not set the env vars in the image. First, you need to ensure that you set up the env vars in your image or pod: in an image, use ENV in your Dockerfile; in a Kubernetes pod, define them in the pod spec. Second, since you want to inspect the runtime env vars of your pod, you can run the command below:
kubectl exec -it ${POD_NAME} -- printenv
...haven't set the env var in the pod. I set it locally in my system
Environment variables set on your host are not automatically passed on to the Pod. You can set the env in your spec and access it from your container. A common approach to substitute environment variables in the spec with variables on the host is envsubst < draft-spec.yaml > final-spec.yaml. For example, if you have this spec:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["ash","-c","echo ${CONTAINER_MESSAGE}"]
    env:
    - name: CONTAINER_MESSAGE
      value: $HOST_MESSAGE
You can run HOST_MESSAGE='hello, world!' envsubst '${HOST_MESSAGE}' < busybox.yaml | kubectl apply -f -. This will substitute $HOST_MESSAGE with "hello, world!" but will not touch ${CONTAINER_MESSAGE}. This approach does not depend on a ConfigMap and it allows you to use kubectl set env to update the variable after deployment.
How to use nslookup, passing the DNS name through a variable? [duplicate]
(This question already has answers at "Difference between ${} and $() in Bash".) Friends, I am trying to implement an init container which will check whether MySQL is ready for connections, and I am trying to use nslookup for this. The point is: how do I pass the DNS name through a variable? It worked like this:
command: ['sh', '-c', 'until nslookup mysql-primary.default.svc.cluster.local; do echo waiting for mysql; sleep 2; done;']
But not like this:
command: ['sh', '-c', 'until nslookup $(MYSQL_HOST); do echo waiting for mysql; sleep 2; done;']
Any idea how I could get the second option working?
MYSQL_HOST seems to be an environment variable and not a command. $(MYSQL_HOST) will execute MYSQL_HOST as a command in a subshell (and that will not work in this case). You probably want to use "${MYSQL_HOST}" instead.
The problem is that $() executes a subshell and tries to evaluate a command in there. What you actually want is variable expansion via ${}. Here is a working example: a pod with an init container and a MYSQL_HOST environment variable:
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
spec:
  containers:
  - name: busybox-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: mysql-check
    image: busybox
    command: ['sh', '-c', 'until nslookup ${MYSQL_HOST}; do echo waiting for mysql; sleep 2; done;']
    env:
    - name: MYSQL_HOST
      value: "mysql-primary.default.svc.cluster.local"
The pod starts after you create a corresponding service:
kubectl create svc clusterip mysql-primary --tcp=3306
For the sake of completeness, the YAML of the service (not necessarily relevant in this case):
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: mysql-primary
  name: mysql-primary
spec:
  ports:
  - name: "3306"
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql-primary
  type: ClusterIP
status:
  loadBalancer: {}
Get device mount info on a Kubernetes node using a pod
Team, I have the pod.yaml below that outputs the pod's mount info, but now I want it to show me the node's mount info instead (or in addition). Any hint how I can give the pod enough privilege so that it runs the same command on the k8s host on which the pod is running and lists that in the pod's log output?
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["/bin/bash", "-c"]
    args:
    - |
      echo $HOSTNAME && mount | grep -Ec '/dev/sd.*\<csi' | awk '$0 <= 64 { print "Mounts are less than 64, that is found", $0 ;} $0 > 64 { print "Mounts are more than 64", $0 ;}'
  restartPolicy: OnFailure
kubectl logs pod/command-demo
command-demo
Mounts are less than 64, that is found 0
Expected output:
k8s_node1   << this is the hostname of the k8s node on which the pod is running
Mounts are more than 64, that is found 65
What change do I need to make in my pod.yaml so that it runs the shell command on the node and not in the pod?
You cannot access the host filesystem inside the docker container unless you mount parts of the host filesystem as a volume. You can try mounting the whole host filesystem into the pod as follows. You might need a privileged securityContext for the pod depending on what you are trying to do.
apiVersion: v1
kind: Pod
metadata:
  name: dummy
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "sleep 3600"]
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /
      type: Directory
An alternative and probably better way is to SSH into the host machine from the pod and run the command there. You can get the host IP using the downward API: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
How to set multiple commands in one yaml file with Kubernetes?
This official document shows how to run a command in a yaml config file: https://kubernetes.io/docs/tasks/configure-pod-container/
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/sh","-c"]
    args: ["/bin/echo \"${MESSAGE}\""]
If I want to run more than one command, how do I do that?
command: ["/bin/sh","-c"] args: ["command one; command two && command three"] Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting a semicolon separates commands, and && conditionally runs the following command if the first succeed. In the above example, it always runs command one followed by command two, and only runs command three if command two succeeded. Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.
My preference is to multiline the args; this is the simplest and easiest to read. Also, the script can be changed without affecting the image; you just need to restart the pod. For example, for a mysql dump, the container spec could be something like this:
containers:
- name: mysqldump
  image: mysql
  command: ["/bin/sh", "-c"]
  args:
  - echo starting;
    ls -la /backups;
    mysqldump --host=... -r /backups/file.sql db_name;
    ls -la /backups;
    echo done;
  volumeMounts:
  - ...
The reason this works is that yaml actually concatenates all the lines after the "-" into one, and sh runs one long string "echo starting; ls ... ; echo done;".
If you're willing to use a Volume and a ConfigMap, you can mount ConfigMap data as a script, and then run that script:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  entrypoint.sh: |-
    #!/bin/bash
    echo "Do this"
    echo "Do that"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: "ubuntu:14.04"
    command:
    - /bin/entrypoint.sh
    volumeMounts:
    - name: configmap-volume
      mountPath: /bin/entrypoint.sh
      readOnly: true
      subPath: entrypoint.sh
  volumes:
  - name: configmap-volume
    configMap:
      defaultMode: 0700
      name: my-configmap
This cleans up your pod spec a little and allows for more complex scripting.
$ kubectl logs my-pod
Do this
Do that
If you want to avoid concatenating all commands into a single command with ; or && you can also get true multi-line scripts using a heredoc:
command:
- sh
- "-c"
- |
  /bin/bash <<'EOF'
  # Normal script content possible here
  echo "Hello world"
  ls -l
  exit 123
  EOF
This is handy for running existing bash scripts, but has the downside of requiring both an inner and an outer shell instance for setting up the heredoc.
I am not sure if the question is still active but, since I did not find the solution in the above answers, I decided to write it down. I use the following approach:
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      command1
      command2 && command3
I know my example relates to readinessProbe, livenessProbe, etc., but I suspect the same applies to container commands. This provides flexibility as it mirrors standard script writing in Bash.
IMHO the best option is to use YAML's native block scalars, specifically in this case the folded style block. By invoking sh -c you can pass arguments to your container as commands, but if you want to elegantly separate them with newlines, you'd want to use the folded style block, so that YAML will know to convert newlines to whitespace, effectively concatenating the commands. A full working example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: busy
    image: busybox:1.28
    command: ["/bin/sh", "-c"]
    args:
    - >
      command_1 &&
      command_2 &&
      ...
      command_n
Here is my successful run:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      echo "running below scripts"
      i=0;
      while true;
      do
        echo "$i: $(date)";
        i=$((i+1));
        sleep 1;
      done
    name: busybox
    image: busybox
Here is one more way to do it, with output logging:
apiVersion: v1
kind: Pod
metadata:
  labels:
    type: test
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: log-vol
      mountPath: /var/mylog
    command:
    - /bin/sh
    - -c
    - >
      i=0;
      while [ $i -lt 100 ];
      do
        echo "hello $i";
        echo "$i : $(date)" >> /var/mylog/1.log;
        echo "$(date)" >> /var/mylog/2.log;
        i=$((i+1));
        sleep 1;
      done
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: log-vol
    emptyDir: {}
Here is another way to run multi-line commands:
apiVersion: batch/v1
kind: Job
metadata:
  name: multiline
spec:
  template:
    spec:
      containers:
      - command:
        - /bin/bash
        - -exc
        - |
          set +x
          echo "running below scripts"
          if [[ -f "if-condition.sh" ]]; then
            echo "Running if success"
          else
            echo "Running if failed"
          fi
        name: ubuntu
        image: ubuntu
      restartPolicy: Never
  backoffLimit: 1
Just to bring up another possible option: secrets can be used, since they are presented to the pod as volumes. Secret example:
apiVersion: v1
kind: Secret
metadata:
  name: secret-script
type: Opaque
data:
  script_text: <<your script in b64>>
Yaml extract:
....
  containers:
  - name: container-name
    image: image-name
    command: ["/bin/bash", "/your_script.sh"]
    volumeMounts:
    - name: vsecret-script
      mountPath: /your_script.sh
      subPath: script_text
....
  volumes:
  - name: vsecret-script
    secret:
      secretName: secret-script
I know many will argue this is not what secrets should be used for, but it is an option.