I installed minikube as instructed here https://github.com/kubernetes/minikube/releases
and started it with a simple minikube start command.
But the next step, which is as simple as kubectl get pods --all-namespaces, fails with
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
What did I miss?
I ran into the same issue on my Mac. Basically, I uninstalled both minikube and kubectl and reinstalled them as follows:
Installed Minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.8.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Installed kubectl:
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
Start a cluster by running:
minikube start
Minikube will also create a "minikube" context, and set it to default in kubectl. To switch back to this context later, run this command:
kubectl config use-context minikube
Now, to get the list of all pods, run:
kubectl get pods --all-namespaces
You should now be able to get the list of pods. Also, make sure you don't have a firewall within your network blocking the connection.
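If you suspect a firewall, a quick check (reusing the IP and port from the error above) is to probe the apiserver port directly:
$ nc -vz 192.168.99.100 8443
If this also times out, something on the network path to the VM is dropping the traffic.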
I faced a similar issue on Windows 7 when I changed work environments: as you said, it works fine at home but not at the office. There is a high chance it is caused by a firewall policy, so TLS verification cannot pass.
Instead of wasting time on troubleshooting (sometimes there is nothing you can do if you cannot turn off the firewall), if you just want to test a local minikube cluster, I would suggest disabling TLS verification.
This is what I have done:
# How to disable minikube TLS verification
## disable TLS verification
$ VBoxManage controlvm minikube natpf1 k8s-apiserver,tcp,127.0.0.1,8443,,8443
$ VBoxManage controlvm minikube natpf1 k8s-dashboard,tcp,127.0.0.1,30000,,30000
$ kubectl config set-cluster minikube-vpn --server=https://127.0.0.1:8443 --insecure-skip-tls-verify
$ kubectl config set-context minikube-vpn --cluster=minikube-vpn --user=minikube
$ kubectl config use-context minikube-vpn
## test kubectl
$ kubectl get pods
## enable local docker client
$ VBoxManage controlvm minikube natpf1 k8s-docker,tcp,127.0.0.1,2374,,2376
$ eval $(minikube docker-env)
$ unset DOCKER_TLS_VERIFY
$ export DOCKER_HOST="tcp://127.0.0.1:2374"
$ alias docker='docker --tls'
## test local docker client
$ docker ps
## test minikube dashboard
$ curl http://127.0.0.1:30000
I also made a small script for this, for your reference.
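A minimal sketch of what that script could look like, simply bundling the commands from above (nothing new beyond them):
#!/usr/bin/env bash
# Forward the apiserver and dashboard ports from the minikube VM to localhost
VBoxManage controlvm minikube natpf1 k8s-apiserver,tcp,127.0.0.1,8443,,8443
VBoxManage controlvm minikube natpf1 k8s-dashboard,tcp,127.0.0.1,30000,,30000
# Point kubectl at the forwarded port, skipping TLS verification
kubectl config set-cluster minikube-vpn --server=https://127.0.0.1:8443 --insecure-skip-tls-verify
kubectl config set-context minikube-vpn --cluster=minikube-vpn --user=minikube
kubectl config use-context minikube-vpn
# Smoke test
kubectl get pods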
Hope it helps.
You just need to restart minikube. Sometimes I have this problem when my computer has been off for a while. I don't think you need to reinstall anything.
First, verify you are in the correct context:
$ kubectl config current-context
minikube
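If it prints a different context, switch back to minikube first:
$ kubectl config use-context minikube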
Check minikube status (status should show "Running"; mine below showed "Saved"):
$ minikube status
minikube: Saved
cluster:
kubectl:
Restart minikube:
$ minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Verify it is running (this is what you should see):
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
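At this point the original command should work again:
$ kubectl get pods --all-namespaces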
I had this issue when connected to Cisco AnyConnect VPN. Once I disconnected, minikube ran fine. Discussion on github here: https://github.com/kubernetes/minikube/issues/4540
I want to have a container that can access and run kubectl commands on my host machine. Here is what I have:
I have installed Kubernetes and Minikube on my host machine.
I used this docker container: dtzar/helm-kubectl.
This is the command I use to run my docker container:
docker run -it -v ~/.kube:/root/.kube -v ~/.minikube:/Users/xxxx/.minikube dtzar/helm-kubectl
Inside the container, when I check the contexts, I can see that my minikube context has loaded. However, I can't run any other kubectl command; I get "The connection to the server 127.0.0.1:32768 was refused - did you specify the right host or port?".
bash-5.0# kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
docker-desktop docker-desktop docker-desktop
docker-for-desktop docker-desktop docker-desktop
* minikube minikube minikube
bash-5.0# kubectl get all
The connection to the server 127.0.0.1:32768 was refused - did you specify the right host or port?
I have checked my Kubernetes config at ~/.kube and the port is 32768.
- cluster:
certificate-authority: /Users/xxx/.minikube/ca.crt
server: https://127.0.0.1:32768
name: minikube
I have tried -p 32768 and --expose 32768 but no luck. Can anyone help with this?
Thanks zerkms! It works with --network host
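For reference, the full working command (the same mounts as above, plus the host network):
$ docker run -it --network host -v ~/.kube:/root/.kube -v ~/.minikube:/Users/xxxx/.minikube dtzar/helm-kubectl
With --network host the container shares the host's network namespace, so 127.0.0.1:32768 inside the container reaches the same apiserver that kubectl on the host talks to.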
I'm trying to connect to Hyperkit to check containers running on this VM.
All I'm getting now is [screen is terminating]
Here is what I do:
MacBook-Pro-Karol: ~
➜ minikube start --driver=hyperkit
😄 minikube v1.12.3 on Darwin 10.15.6
✨ Using the hyperkit driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.12...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Ready! kubectl is configured to be used with "minikube".
MacBook-Pro-Karol: ~
➜ sudo screen /Users/karol/.minikube/machines/minikube/tty
Password:
[screen is terminating]
MacBook-Pro-Karol: ~
➜ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
[screen is terminating]
Cannot exec '/Users/karol/Library/Containers/com.docker.docker/Data/vms/0/tty': Permission denied
➜ sudo screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
Password:
[screen is terminating]
Cannot exec '/Users/karol/Library/Containers/com.docker.docker/Data/vms/0/tty': Operation not permitted
Any help would be appreciated.
You can use minikube ssh to log in to the VM that minikube runs in:
Log into or run a command on a machine with SSH; similar to
"docker-machine ssh".
minikube ssh [flags]
and then use docker ps to check the running containers inside this VM:
$ docker ps | grep kube-api
f53aebd26287 7e28efa976bd "kube-apiserver --ad…" 16 minutes ago k8s_kube-apiserver_kube-apiserver-minikube_kube-system_8009646ba816631d0677c2668886baad_1
12188a523d12 k8s.gcr.io/pause:3.2 "/pause" 16 minutes ago k8s_POD_kube-apiserver-minikube_kube-system_8009646ba816631d0677c2668886baad_1
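You can also run the check in one shot, without opening an interactive shell, since minikube ssh passes a quoted command through to the VM:
$ minikube ssh 'docker ps | grep kube-api'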
I am trying to start a minikube cluster on my macOS machine, but I always get "Permission denied":
(base) MacBook-Pro-de-..:desktop ..$ minikube start
-bash: /usr/local/bin/minikube: Permission denied
What should I do?
Execute the following commands to fix the file permissions:
$ chmod ugo+rwx ~/.kube/config
$ sudo chown -R $USER ~/.kube
$ chmod +x <path-to-your-minikube-binary>
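In this case the error message points at /usr/local/bin/minikube, so the last command becomes:
$ sudo chmod +x /usr/local/bin/minikube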
Configure proxy:
export no_proxy=$no_proxy,$(minikube ip)
export NO_PROXY=$no_proxy,$(minikube ip)
Then run the minikube command taking the proxy into consideration (the IPs set below are just examples):
$ minikube start --alsologtostderr --kubernetes-version v1.13.1 --docker-env HTTP_PROXY=http://10.0.2.2:1087 --docker-env HTTPS_PROXY=http://10.0.2.2:1087 --docker-env NO_PROXY=10.0.2.2,192.168.99.100
$ minikube start --alsologtostderr --kubernetes-version v1.13.2 --docker-env HTTP_PROXY=http://10.0.2.2:3128 --docker-env HTTPS_PROXY=http://10.0.2.2:3128 --docker-env NO_PROXY=10.0.2.2,192.168.99.100
In this case the proxy configuration was:
HTTP_PROXY=http://127.0.0.1:3128
Remember to add your minikube IP to NO_PROXY.
You can find similar problems described here: file-permission, kubeconfig.
Now I get this error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
E0301 15:19:14.198136 48335 start.go:268] Error setting up kubeconfig: Error reading file "/../.kube/config": open /../.kube/config: not a directory
E0301 15:19:15.128758 48335 util.go:151] Error uploading error message: Error sending error report to https://clouderrorreporting.googleapis.com/v1beta1/projects/k8s-minikube/events:report?key=AIzaSyACUwzG0dEPcl-eOgpDKnyKoUFgHdfoFuA, got response code 400
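The path /../.kube/config in that message suggests minikube could not resolve your home directory. As a guess, check that HOME is set in this shell and point KUBECONFIG at the real file before retrying:
$ echo $HOME
$ export KUBECONFIG=$HOME/.kube/config
$ minikube start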
I've used the helm chart to deploy Spark to Kubernetes in GCE. According to the default configuration in values.yaml, Spark is deployed to the path /opt/spark. I've checked that Spark deployed successfully by running kubectl --namespace=my-namespace get pods -l "release=spark". There is 1 master and 3 workers running.
However, when I tried to check the Spark version by executing spark-submit --version from the Google Cloud console, it returned -bash: spark-submit: command not found.
I've navigated to the /opt directory and the /spark folder is missing. What should I do to be able to open Spark shell Terminal and to execute Spark commands?
You can verify by checking the services:
kubectl get services -n <namespace>
You can port-forward a particular service and try connecting locally to check:
kubectl port-forward svc/<service name> <external port>:<internal port or spark running port>
Locally, you can then try running the Spark shell; it will be connected to the Spark instance running on GCE.
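For example, assuming the chart created a master service named spark-master in my-namespace listening on Spark's default port 7077 (these names are illustrative; check your kubectl get services output):
$ kubectl port-forward -n my-namespace svc/spark-master 7077:7077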
If you check the helm chart documentation, there are also options for the UI; you can do the same to access the UI via port-forward.
Access via shell inside the pod
kubectl exec -it <spark pod name> -- /bin/bash
Here you can directly run Spark commands, e.g. spark-submit --version
Access UI
Access the UI via port-forwarding if you have enabled the UI in the helm chart.
kubectl port-forward svc/<spark service name> <external port>:<internal port or spark running port>
External Load balancer
This particular helm chart also creates an external load balancer; you can get the external IP using:
kubectl get svc -n <namespace>
Access Shell
If you want to connect via the LB IP & port:
./bin/spark-shell --conf spark.cassandra.connection.host=<Load balancer IP> --conf spark.cassandra.connection.native.port=<Port>
Creating a connection using port-forward:
kubectl port-forward svc/<spark service name> <external(local) port>:<internal port or spark running port>
./bin/spark-shell --conf spark.cassandra.connection.host=localhost --conf spark.cassandra.connection.native.port=<local Port>
One way would be to log in to the pod and then run Spark commands.
List the pods:
kubectl --namespace=my-namespace get pods -l "release=spark"
Now, log in to the pod using the following command:
kubectl exec -it <pod-id> -- /bin/bash
Now you should be inside the pod and can run Spark commands:
spark-submit --version
Ref: https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#getting-a-shell-to-a-container
Hope this helps.
This worked for me.
spark-shell --master k8s://localhost:32217
My spark master is a LoadBalancer exposed at localhost:32217
I have followed the steps at https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html to launch a multi-node Kubernetes cluster using Vagrant and CoreOS.
But, I could not find a way to set an insecure docker registry for that environment.
To be more specific, when I run
kubectl run api4docker --image=myhost:5000/api4docker:latest --replicas=2 --port=8080
on this setup, it tries to pull the image as if it were from a secure registry. But it is an insecure one.
I appreciate any suggestions.
This is how I solved the issue for now. I will add more later if I can automate it in the Vagrantfile (a sketch of a provisioning script is at the end of this answer).
cd ./coreos-kubernetes/multi-node/vagrant
vagrant ssh w1 (and repeat these steps for w2, w3, etc.)
cd /etc/systemd/system/docker.service.d
sudo vi 50-insecure-registry.conf
Add the following lines to this file:
[Service]
Environment=DOCKER_OPTS='--insecure-registry="<your-registry-host>/24"'
After adding this file, we need to restart the docker service on this worker:
sudo systemctl stop docker
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl status docker
Now, docker pull should work on this worker:
docker pull <your-registry-host>:5000/api4docker
Let's try to deploy our application on the Kubernetes cluster one more time.
Log out from the workers and come back to your host.
$ kubectl run api4docker --image=<your-registry-host>:5000/api4docker:latest --replicas=2 --port=8080 --env="SPRING_PROFILES_ACTIVE=production"
When you get the pods, you should see the status "Running":
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api4docker-2839975483-9muv5 1/1 Running 0 8s
api4docker-2839975483-lbiny 1/1 Running 0 8s
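As for automating this in the Vagrantfile: here is a sketch of a shell provisioning script (untested; <your-registry-host> is a placeholder as above) that performs the same steps on each worker:
#!/usr/bin/env bash
# Write the docker drop-in with the insecure-registry flag
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/50-insecure-registry.conf > /dev/null <<'EOF'
[Service]
Environment=DOCKER_OPTS='--insecure-registry="<your-registry-host>/24"'
EOF
# Reload units and restart docker so the flag takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker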