Connect to remote Kubernetes cluster in private LAN from Windows 10

EDIT: I'm going to leave this up, but I moved away from Canonical Kubernetes to a MicroK8s install and everything "just worked." I would not recommend Canonical Kubernetes at this time (early 2019).
Goal:
I want to connect from my Windows machine (192.168.2.40) to the Canonical Kubernetes cluster running on an Ubuntu 18.04 box (192.168.2.148). I installed the cluster via conjure-up.
Problem:
Running kubectl cluster-info on the Windows machine gives me:
Unable to connect to the server: dial tcp 10.91.211.64:443: connectex: A connection attempt
failed because the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to respond.
I have SSH'd into the Ubuntu box and copied the ~/.kube/config file to Windows.
~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <BIG LONG STRING O STUFF>
    server: https://10.91.211.64:443
  name: conjure-canonical-kubern-931
contexts:
- context:
    cluster: conjure-canonical-kubern-931
    user: conjure-canonical-kubern-931
  name: conjure-canonical-kubern-931
current-context: conjure-canonical-kubern-931
kind: Config
preferences: {}
users:
- name: conjure-canonical-kubern-931
  user:
    password: <Smaller String>
    username: admin
Background:
I have a spare Ubuntu 18.04 LTS server (192.168.2.148) on my home LAN, on which I've used conjure-up to install Canonical Kubernetes.
I've successfully installed the cluster and it seems to be working. I can SSH in, run kubectl cluster-info, and see the Master, Heapster, KubeDNS, Metrics-server, Grafana, and InfluxDB all running:
Kubernetes master is running at https://10.91.211.64:443
Heapster is running at https://10.91.211.64:443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.91.211.64:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://10.91.211.64:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
Grafana is running at https://10.91.211.64:443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at https://10.91.211.64:443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy
along with juju status looking like everything is up and running:
Model                         Controller                Cloud/Region         Version  SLA          Timestamp
conjure-canonical-kubern-931  conjure-up-localhost-673  localhost/localhost  2.4.3    unsupported  02:01:00Z
App Version Status Scale Charm Store Rev OS Notes
easyrsa 3.0.1 active 1 easyrsa jujucharms 195 ubuntu
etcd 3.2.10 active 3 etcd jujucharms 378 ubuntu
flannel 0.10.0 active 5 flannel jujucharms 351 ubuntu
kubeapi-load-balancer 1.14.0 active 1 kubeapi-load-balancer jujucharms 525 ubuntu exposed
kubernetes-master 1.13.2 active 2 kubernetes-master jujucharms 542 ubuntu
kubernetes-worker 1.13.2 active 3 kubernetes-worker jujucharms 398 ubuntu exposed
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 10.91.211.138 Certificate Authority connected.
etcd/0 active idle 1 10.91.211.120 2379/tcp Healthy with 3 known peers
etcd/1* active idle 2 10.91.211.205 2379/tcp Healthy with 3 known peers
etcd/2 active idle 3 10.91.211.41 2379/tcp Healthy with 3 known peers
kubeapi-load-balancer/0* active idle 4 10.91.211.64 443/tcp Loadbalancer ready.
kubernetes-master/0 active idle 5 10.91.211.181 6443/tcp Kubernetes master running.
flannel/0* active idle 10.91.211.181 Flannel subnet 10.1.50.1/24
kubernetes-master/1* active idle 6 10.91.211.218 6443/tcp Kubernetes master running.
flannel/1 active idle 10.91.211.218 Flannel subnet 10.1.85.1/24
kubernetes-worker/0* active idle 7 10.91.211.29 80/tcp,443/tcp Kubernetes worker running.
flannel/4 active idle 10.91.211.29 Flannel subnet 10.1.94.1/24
kubernetes-worker/1 active idle 8 10.91.211.70 80/tcp,443/tcp Kubernetes worker running.
flannel/3 active idle 10.91.211.70 Flannel subnet 10.1.46.1/24
kubernetes-worker/2 active idle 9 10.91.211.167 80/tcp,443/tcp Kubernetes worker running.
flannel/2 active idle 10.91.211.167 Flannel subnet 10.1.30.1/24
Entity Meter status Message
model amber user verification pending
Machine State DNS Inst id Series AZ Message
0 started 10.91.211.138 juju-86bdea-0 bionic Running
1 started 10.91.211.120 juju-86bdea-1 bionic Running
2 started 10.91.211.205 juju-86bdea-2 bionic Running
3 started 10.91.211.41 juju-86bdea-3 bionic Running
4 started 10.91.211.64 juju-86bdea-4 bionic Running
5 started 10.91.211.181 juju-86bdea-5 bionic Running
6 started 10.91.211.218 juju-86bdea-6 bionic Running
7 started 10.91.211.29 juju-86bdea-7 bionic Running
8 started 10.91.211.70 juju-86bdea-8 bionic Running
9 started 10.91.211.167 juju-86bdea-9 bionic Running
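The kubeconfig's server address (10.91.211.64) is an internal address on the conjure-up/LXD network, so it is unreachable from the Windows machine's LAN. One common workaround is to tunnel through the Ubuntu box with SSH and repoint the copied kubeconfig at the tunnel. A rough sketch; the `ubuntu` username and local port 8443 are assumptions:

```shell
# Addresses come from the question; the SSH username is an assumption.
API_INTERNAL="10.91.211.64:443"   # kubeapi-load-balancer, unreachable from the LAN
JUMP_HOST="ubuntu@192.168.2.148"  # the Ubuntu box itself
LOCAL_PORT=8443

# Keep an SSH tunnel open in the background: localhost:8443 -> internal API.
ssh -fN -L "${LOCAL_PORT}:${API_INTERNAL}" "${JUMP_HOST}"

# Point the copied kubeconfig at the tunnel. The API certificate has no SAN
# for localhost, so TLS verification is skipped here (home-lab use only).
kubectl config set-cluster conjure-canonical-kubern-931 \
  --server="https://localhost:${LOCAL_PORT}" \
  --insecure-skip-tls-verify=true
kubectl cluster-info
```

Skipping TLS verification is a shortcut for a home lab; the cleaner alternative is to regenerate the API certificate with a SAN for the LAN address.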

Related

Not able to start Aerokube Moon on Linux (Ubuntu)

I am trying to set up Aerokube Moon on a Linux machine (Ubuntu 16.04).
Steps followed :
minikube installed and ingress enabled.
Moon installed using https://aerokube.com/moon/latest/#install-helm .
minikube started using the docker driver.
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
$ kubectl get pods -n moon
NAME READY STATUS RESTARTS AGE
moon-5f6fd5f9fd-7b945 3/3 Running 0 10m
moon-5f6fd5f9fd-fcct6 3/3 Running 0 10m
$ minikube tunnel
Status:
machine: minikube
pid: 148130
route: 10.96.0.0/12 -> xxx.xxx.xx.x
minikube: Running
services: []
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
$ cat /etc/hosts
127.0.0.1 localhost moon.aerokube.local
xxx.xxx.xxx.xxx moon.aerokube.local --> this IP is the output of `minikube ip`
Issue 1: when I try to access http://moon.aerokube.local/, I get:
Issue 2: how to change the default port for Selenium.
I would like to change the default port for Selenium in Moon, as my 8080 and 4444 ports are already occupied.
I would like to use some other port for the Moon UI and /wd/hub.
I assume this will probably be accessible from the Linux machine itself (I can't check directly on that machine), since it is pointed to localhost in /etc/hosts, but I don't know how to make it accessible from other places, such as the laptops of the people working on this project (issue 2 mentioned in this post).
Please help
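One way to expose Moon on different ports, and to other machines, without reinstalling is kubectl port-forward bound to all interfaces. A sketch under assumptions: the Service is named `moon` in the `moon` namespace (matching the pods above), Moon's defaults of 4444 (/wd/hub) and 8080 (UI) apply, and 9090/9091 are free ports:

```shell
# Hypothetical free ports on the Linux box; Service name and target ports
# are assumptions based on Moon's defaults.
kubectl port-forward -n moon svc/moon --address 0.0.0.0 9090:8080 9091:4444 &

# Teammates on the network could then use:
#   http://<linux-machine-ip>:9090/        (Moon UI)
#   http://<linux-machine-ip>:9091/wd/hub  (Selenium endpoint)
```

`--address 0.0.0.0` is what makes the forward reachable from other machines; by default port-forward binds only to localhost.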

Windows RKE2 node networking isn't working

In AWS, I have created an RKE2 cluster using the Rancher 2.6.2 UI.
There are two Ubuntu 20.04 control plane nodes, and pods on these hosts can reach other pods/the internet.
My Windows node (Server 2019, 1809 Datacenter) joins the cluster without any issues, however, the Windows containers cannot seem to reach any other network with curl or ping.
Checking the process that RKE2 uses:
Get-Process -Name rke2,containerd,calico-node,kube-proxy,kubelet
Calico-Node is not running. I checked event viewer for the RKE2 application logs and found:
Felix exited: exit status 129
Is there somewhere to check for more detailed logs? RKE2 seems to have things moved around, and it doesn't line up with the Calico docs.
I have disabled source/dest checks on the network interfaces of all of the Kubernetes nodes in AWS.
I also created a Windows firewall rule to allow Kubelet on 10250.
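For reference, the source/dest-check step above can be re-applied from the AWS CLI rather than the console, which makes it easy to verify it really took effect on every node. The instance IDs below are placeholders:

```shell
# Placeholder instance IDs for the Kubernetes nodes (replace with your own).
for id in i-0123456789abcdef0 i-0fedcba9876543210; do
  # Disable the EC2 source/destination check so pod-to-pod traffic with
  # non-instance IPs isn't dropped by the hypervisor.
  aws ec2 modify-instance-attribute --instance-id "$id" --no-source-dest-check
done
```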

Unable to reach web server in Docker swarm from the host

I'm starting out using Docker on macOS, and get stuck when trying to complete part 4 of the Get Started guide. I have created two extra virtual machines (myvm1 and myvm2), set myvm1 as swarm manager, and myvm2 as a worker.
I have then deployed a stack with 5 Flask web servers using the docker-compose.yml from part 3 of the tutorial. The processes seem to start fine, and are distributed between the two machines, but I am not able to reach them from the host using a browser.
How should I configure the port forwarding/network to be able to reach the web servers in the swarm from the host of the virtual machines running the docker container?
The following is a list of commands that I have run, some with resulting output.
$ docker-machine create --driver virtualbox myvm1
$ docker-machine create --driver virtualbox myvm2
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Running tcp://192.168.99.100:2376 v18.09.0
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v18.09.0
$ docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100"
$ docker-machine ssh myvm2 "docker swarm join --token <my-token-inserted-here> 192.168.99.100:2377"
$ eval $(docker-machine env myvm1)
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * virtualbox Running tcp://192.168.99.100:2376 v18.09.0
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v18.09.0
$ docker stack deploy -c docker-compose.yml getstartedlab
$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
it9asz4zpdmi getstartedlab_web.1 mochr/test_repo:friendly_hello myvm2 Running Preparing 18 seconds ago
645gvtnde7zz getstartedlab_web.2 mochr/test_repo:friendly_hello myvm1 Running Preparing 18 seconds ago
fpq6cvcf3e0e getstartedlab_web.3 mochr/test_repo:friendly_hello myvm2 Running Preparing 18 seconds ago
plkpximnpobf getstartedlab_web.4 mochr/test_repo:friendly_hello myvm1 Running Preparing 18 seconds ago
gr2p8a0asatb getstartedlab_web.5 mochr/test_repo:friendly_hello myvm2 Running Preparing 18 seconds ago
The docker-compose.yml:
version: "3"
services:
  web:
    image: mochr/test_repo:friendly_hello
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
It looks like this is a known problem with the current version of boot2docker: https://github.com/docker/machine/issues/4608
The workaround is either to use a swarm based on machines that do not require boot2docker (e.g. AWS, DigitalOcean, etc.), wait until a newer version of boot2docker is released, or use an earlier version of boot2docker, as described in that link. To use an earlier version:
export VIRTUALBOX_BOOT2DOCKER_URL=https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso
before creating your virtual machines with docker-machine. (Remove your existing virtual machines first, then run that export, then docker-machine create --driver virtualbox myvm1.)
Then, you should be able to bring up your stack and access your containers at either 192.168.99.100:4000 or 192.168.99.101:4000 (or whatever IP addresses are revealed by docker-machine ls)
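Put end to end, the downgrade workaround looks roughly like this (VM names from the question, run on the macOS host):

```shell
# Remove the existing boot2docker VMs first.
docker-machine rm -f myvm1 myvm2

# Pin an earlier boot2docker release for newly created machines.
export VIRTUALBOX_BOOT2DOCKER_URL=https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso

# Recreate the VMs; they now boot the pinned ISO.
docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2

# Then re-init the swarm, rejoin the worker, and redeploy the stack as
# before, and browse to http://<vm-ip>:4000 (see docker-machine ls).
```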

Marathon stops working when Consul starts

I have a 6-machine Mesos cluster (3 masters and 3 slaves). I can access the Mesos user interface at 172.16.8.211:5050; it works correctly and redirects to the leader if that host is not it. The Marathon user interface at 172.16.8.211:8080 also works correctly. In short, before configuring and running the Consul cluster, Marathon works well.
My problem starts when I configure and run a Consul cluster with 3 servers (the Mesos masters) and 3 clients (the Mesos slaves). If I execute consul members, everything is fine: all the members are alive and working together.
But now I can't access the Marathon user interface, and when I open the Mesos user interface and go to 'Frameworks', the Marathon framework does not appear.
ikerlan#client3:~$ consul members
Node Address Status Type Build Protocol DC
client3 172.16.8.216:8301 alive client 0.5.2 2 nyc2
client2 172.16.8.215:8301 alive client 0.5.2 2 nyc2
server2 172.16.8.212:8301 alive server 0.5.2 2 nyc2
server3 172.16.8.213:8301 alive server 0.5.2 2 nyc2
client1 172.16.8.214:8301 alive client 0.5.2 2 nyc2
server1 172.16.8.211:8301 alive server 0.5.2 2 nyc2
In the Slaves tab of Mesos I can see the following:
-Mesos version: 0.27.0
-Marathon version: 0.15.1
Which log files would show something related to this issue?
What could be the problem?
Solution:
I saw in the Marathon logs (/var/log/syslog) that the problem was DNS. I added the IPs of the other cluster hosts to /etc/hosts, and that resolved the problem; now it works perfectly.
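The /etc/hosts fix can be sketched like this, run on each machine in the cluster; the hostname-to-IP mapping follows the consul members output above:

```shell
# Append the other cluster hosts so Marathon/Mesos can resolve them by name.
sudo tee -a /etc/hosts <<'EOF'
172.16.8.211 server1
172.16.8.212 server2
172.16.8.213 server3
172.16.8.214 client1
172.16.8.215 client2
172.16.8.216 client3
EOF
```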
You can also add all the cluster hosts to the ZooKeeper config file; that works as well.

(mac) Docker: how to connect to a Redis server on the host machine from a container

I am running a Docker container on my Mac using boot2docker.
I want to connect, from inside the container, to the redis-server running on my host machine.
I have managed to connect from the container to a service running on the host using curl http://192.168.3.124:5000 (getting results).
I have managed to connect to Redis, but the data I pull does not match its state:
redisServer = redis.StrictRedis(host='192.168.3.124', port="6379"); redisServer.get("2") (gets no results, even though on the host machine that key is set)
Details:
Running the Redis server:
[58781] 13 May 13:53:16.120 # Server started, Redis version 2.8.19
[58781] 13 May 13:53:16.120 * DB loaded from disk: 0.000 seconds
[58781] 13 May 13:53:16.120 * The server is now ready to accept connections on port 6379
$ ps aux | grep redis
partuck 58781 0.0 0.0 2469924 1652 s002 S+ 1:53PM 0:00.03 redis-server *:6379
partuck 58728 0.0 0.7 2583104 115260 ?? S 1:53PM 0:00.47 /usr/local/opt/redis/bin/redis-server 127.0.0.1:6379
The host's IP inside the VirtualBox VM that boot2docker sets up is (typically) 10.0.2.2.
So you should try connecting to 10.0.2.2:6379.
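A quick way to verify this from a container, using redis-cli instead of Python (the redis:2.8 image tag is chosen to match the server version in the question):

```shell
# PING tests whether the host's redis-server is reachable through the
# boot2docker NAT gateway; GET then fetches the key from the question.
docker run --rm redis:2.8 redis-cli -h 10.0.2.2 -p 6379 ping
docker run --rm redis:2.8 redis-cli -h 10.0.2.2 -p 6379 get 2
```

If the first command prints PONG, the connection works and the Python client should succeed against the same address. Note the host's second redis-server instance in the ps output is bound to 127.0.0.1 only; make sure you're hitting the one bound to *:6379.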
