Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it. - MicroK8s - Windows

When I run kubectl get pods --all-namespaces I get: Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
All of my pods are Running and Ready 1/1, but when I run microk8s kubectl get service -n kube-system I get:
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes-dashboard        ClusterIP   10.152.183.132   <none>        443/TCP    6h13m
dashboard-metrics-scraper   ClusterIP   10.152.183.10    <none>        8000/TCP   6h13m
kube-dns is missing even though DNS is enabled. Also, when I start a proxy on all IP addresses with microk8s kubectl proxy --accept-hosts=.* --address=0.0.0.0 & I only get Starting to serve on [::]:8001, and the background-job line (e.g. [1] 84623) never appears.
I am using MicroK8s and Multipass with Hyper-V Manager on Windows, and I can't reach the dashboard in a browser. I am a beginner; this is for my college paper. I saw a similar question online, but it was about Azure.

Posting the answer from the comments for better visibility:
The problem was solved by reinstalling Multipass and MicroK8s. Now it works.
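For reference, a reinstall on Windows looks roughly like this (a sketch only; it assumes the default Multipass VM name microk8s-vm created by the MicroK8s Windows installer, and note that deleting the VM wipes all cluster state):
multipass stop microk8s-vm
multipass delete microk8s-vm
multipass purge
Then uninstall MicroK8s from "Apps & features" and run the MicroK8s installer again.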

Related

Seldon-core deployment in GKE private cluster with Anthos Service Mesh

I'm trying to use a GKE private cluster with the standard config and the Anthos Service Mesh managed profile. However, when I try to deploy the "Iris" model as a test, the deployment gets stuck calling "storage.googleapis.com":
$ kubectl get all -n test
NAME                                                  READY   STATUS     RESTARTS   AGE
pod/iris-model-default-0-classifier-dfb586df4-ltt29   0/3     Init:1/2   0          30s

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/iris-model-default              ClusterIP   xxx.xxx.65.194   <none>        8000/TCP,5001/TCP   30s
service/iris-model-default-classifier   ClusterIP   xxx.xxx.79.206   <none>        9000/TCP,9500/TCP   30s

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/iris-model-default-0-classifier   0/1     1            0           31s

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/iris-model-default-0-classifier-dfb586df4   1         1         0       31s
$ kubectl logs -f -n test pod/iris-model-default-0-classifier-dfb586df4-ltt29 -c classifier-model-initializer
2022/11/19 20:59:34 NOTICE: Config file "/.rclone.conf" not found - using defaults
2022/11/19 20:59:57 ERROR : GCS bucket seldon-models path v1.15.0-dev/sklearn/iris: error reading source root directory: Get "https://storage.googleapis.com/storage/v1/b/seldon-models/o?alt=json&delimiter=%2F&maxResults=1000&prefix=v1.15.0-dev%2Fsklearn%2Firis%2F&prettyPrint=false": dial tcp 199.36.153.8:443: connect: connection refused
2022/11/19 20:59:57 ERROR : Attempt 1/3 failed with 1 errors and: Get "https://storage.googleapis.com/storage/v1/b/seldon-models/o?alt=json&delimiter=%2F&maxResults=1000&prefix=v1.15.0-dev%2Fsklearn%2Firis%2F&prettyPrint=false": dial tcp 199.36.153.8:443: connect: connection refused
2022/11/19 21:00:17 ERROR : GCS bucket seldon-models path v1.15.0-dev/sklearn/iris: error reading source root directory: Get "https://storage.googleapis.com/storage/v1/b/seldon-models/o?alt=json&delimiter=%2F&maxResults=1000&prefix=v1.15.0-dev%2Fsklearn%2Firis%2F&prettyPrint=false": dial tcp 199.36.153.8:443: connect: connection refused
2022/11/19 21:00:17 ERROR : Attempt 2/3 failed with 1 errors and: Get "https://storage.googleapis.com/storage/v1/b/seldon-models/o?alt=json&delimiter=%2F&maxResults=1000&prefix=v1.15.0-dev%2Fsklearn%2Firis%2F&prettyPrint=false": dial tcp 199.36.153.8:443: connect: connection refused
I used "sidecar injection" with the namespace labeling:
kubectl create namespace test
kubectl label namespace test istio-injection- istio.io/rev=asm-managed --overwrite
kubectl annotate --overwrite namespace test mesh.cloud.google.com/proxy='{"managed":"true"}'
When I don't use sidecar injection, the deployment succeeds. But in that case I need to inject the proxy manually to get access to the model API. I wonder whether this is the intended behavior or not.
Istio sidecars block connectivity for other init containers; this is unfortunately a known issue with Istio sidecars. A potential workaround is to ask Istio not to "filter" traffic going to storage.googleapis.com (i.e. not to route that traffic through Istio's egress), which can be done through Istio's excludeIPRanges setting.
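For example, this can be set per workload with an annotation on the pod template (a sketch only; it assumes storage.googleapis.com resolves into the private.googleapis.com range 199.36.153.8/30, which matches the IP in the logs above):
metadata:
  annotations:
    # bypass the sidecar for outbound traffic to this range (range assumed from the logs)
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "199.36.153.8/30"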
In the longer term, due to these shortcomings, Istio seems to be moving away from sidecars toward its new "ambient mesh".

How to use microk8s kubectl after host reboot (Hyper-V)

I have a fully functional MicroK8s instance running in Hyper-V. After my host rebooted, I can't use microk8s kubectl anymore. I always get the following error:
microk8s kubectl get node -o wide
Unable to connect to the server: dial tcp 172.31.119.125:16443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
As far as I can tell, the master node's IP changed. If I update the KUBECONFIG locally, I can connect to the cluster without problems.
microk8s config > ~/.kube/config
But when I run microk8s kubectl get node -o wide to get node status, it never works; I'm still unable to connect to the server.
I tried clearing every cache I could think of by removing all the .kube/cache folders, but that didn't help either.
sudo rm -rf /.kube/cache /root/.kube/cache /home/ubuntu/.kube/cache /var/snap/microk8s/3582/.kube/cache
I also stopped and started MicroK8s again; still unable to connect to the server.
microk8s stop
microk8s start
After MicroK8s restarted, I also searched for every file containing the 172.31.119.125 IP address.
grep -r '172.31.119.125' /
Nothing useful turned up; only /var contains some logs mentioning 172.31.119.125, which is odd. Is there anything else I can try? How can I connect to MicroK8s using microk8s kubectl?
After an hour-long deep dive, I finally realized that MicroK8s uses a $env:LOCALAPPDATA\MicroK8s\config file that the docs never mention.
All you need to do is update that config file in one of the following ways:
PowerShell
microk8s config > $env:LOCALAPPDATA\MicroK8s\config
Command Prompt
microk8s config > %LOCALAPPDATA%\MicroK8s\config
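After updating the file, the original command should pick up the new master IP, for example:
microk8s kubectl get node -o wide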

kubectl port-forward does not return when connection lost

The help for kubectl port-forward says: The forwarding session ends when the selected pod terminates, and a rerun of the command is needed to resume forwarding.
But although it does not auto-reconnect when the pod terminates, the command does not return either; it just hangs with errors:
E0929 11:57:50.187945 62466 portforward.go:400] an error occurred forwarding 8000 -> 8080: error forwarding port 8080 to pod a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249, uid : network namespace for sandbox "a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249" is closed
Handling connection for 8000
E0929 12:02:44.505938 62466 portforward.go:400] an error occurred forwarding 8000 -> 8080: error forwarding port 8080 to pod a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249, uid : failed to find sandbox "a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249" in store: not found
I would like it to return so that I can handle the error and write a script that reruns the command.
Is there any way or workaround to do that?
Based on the information described on the Kubernetes issues page on GitHub, I suppose this is normal behavior in your case: the port-forward connection cannot be canceled on pod deletion, since there is no connection management inside the REST connectors on the server side.
A connection being maintained from kubectl all the way through to the kubelet hanging open even if the pod doesn't exist.
We'll proxy a websocket connection kubectl->kubeapiserver->kubelet on port-forward.
A recursive function can rerun it:
# retries whenever port-forward exits non-zero; $type, $object, $LOCAL, $REMOTE, $ns must be set by the caller
kpf(){ kubectl port-forward "$type/$object" "$LOCAL:$REMOTE" $ns || kpf; }
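A plain loop avoids unbounded recursion and makes it easy to add a delay; a minimal sketch (deploy/myapp, the 8000:8080 ports, and mynamespace are placeholders):
while true; do
  kubectl port-forward deploy/myapp 8000:8080 -n mynamespace
  echo "port-forward exited, retrying in 1s..." >&2
  sleep 1
done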

How to resolve microk8s port forwarding error on Vagrant VMs?

I have a 2-node MicroK8s cluster running on 2 Vagrant VMs (Ubuntu 20.04). I am trying to forward port 443 so I can connect to the dashboard from the host PC over the private VM network.
sudo microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
I receive the following error:
error: error upgrading connection: error dialing backend: dial tcp: lookup node-1: Temporary failure in name resolution
I also noticed that the internal IPs for the nodes are not correct: the master node is provisioned with an IP of 10.0.1.5 and the worker node with 10.0.1.10, but in the kubectl listing both nodes show the same IP of 10.0.2.15.
I am not sure how to resolve this issue.
Note: I am able to reach the dashboard login screen over HTTP on port 8001 at 10.0.1.5, but submitting the token does nothing, as per the K8s security design:
Logging in is only available when accessing Dashboard over HTTPS or when domain is either localhost
or 127.0.0.1. It's done this way for security reasons.
I was able to get past this issue by adding the nodes to the /etc/hosts file on each node:
10.0.1.10 node-1
10.0.1.5 k8s-master
Then I was able to restart and issue the port-forward command:
sudo microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443 --address 0.0.0.0
Forwarding from 0.0.0.0:10443 -> 8443
Then I was able to access the K8s dashboard via the token auth method.
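With the forward bound to 0.0.0.0, the dashboard should be reachable from the host over the private network, e.g. at https://10.0.1.5:10443 (assuming 10.0.1.5 is still the master's private-network IP).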

Kubernetes - Windows 10 - connectex: No connection could be made because the target machine actively refused it

I've installed Kubernetes on Windows 10 Pro. I ran into a problem where the UI wasn't accepting the access token I had generated, for some reason.
So I went into Docker and reset the cluster so I could start over.
But now when I try to apply my configuration again I get an error:
kubectl apply -f .\recommended.yaml
Unable to connect to the server: dial tcp 127.0.0.1:61634: connectex: No connection could be made because the target machine actively refused it.
I have my KUBECONFIG variable set:
$env:KUBECONFIG
C:\Users\bluet\.kube\config
And I have let Kubernetes know about the config with this command:
[Environment]::SetEnvironmentVariable("KUBECONFIG", $HOME + "\.kube\config", [EnvironmentVariableTarget]::Machine)
Yet, the issue remains! How can I resolve this? Docker seems fine.
This Stack Overflow answer solved my problem. This is what it says:
If you have kubectl already installed and pointing to some other environment, such as minikube or a GKE cluster, be sure to change context so that kubectl is pointing to docker-desktop:
kubectl config get-contexts
kubectl config use-context docker-desktop
Apparently I had installed minikube, which is what messed things up. Switching back to the docker-desktop context saved the day.
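To confirm which context is active before applying manifests, you can run:
kubectl config current-context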
