How to resolve microk8s port forwarding error on Vagrant VMs?

I have a 2-node microk8s cluster running on two Vagrant VMs (Ubuntu 20.04). I am trying to port-forward 443 so that I can connect to the dashboard from the host PC over the private VM network.
sudo microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
I receive the following error:
error: error upgrading connection: error dialing backend: dial tcp: lookup node-1: Temporary failure in name resolution
I also noticed that the internal IPs reported for the nodes are not correct:
the master node is provisioned with an IP of 10.0.1.5 and the worker node with 10.0.1.10, but in the listing from kubectl both nodes show the same IP of 10.0.2.15.
I am not sure how to resolve this issue.
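One quick way to confirm what the cluster records for each node is to list the nodes with wide output, which includes the INTERNAL-IP column:
sudo microk8s kubectl get nodes -o wide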
Note that I am able to access the dashboard login screen over HTTP on port 8001 by connecting to 10.0.1.5, but submitting the token does nothing, as per the K8s security design:
Logging in is only available when accessing Dashboard over HTTPS or when domain is either localhost
or 127.0.0.1. It's done this way for security reasons.

I was able to get past this issue by adding the node hostnames to the /etc/hosts file on each node:
10.1.0.10 node-1
10.1.0.5 k8s-master
Then I was able to restart and issue the port-forward command:
sudo microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443 --address 0.0.0.0
Forwarding from 0.0.0.0:10443 -> 8443
Then I was able to access the K8s dashboard via the token auth method.
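To make the fix survive re-provisioning, the same entries can be appended from a Vagrant shell provisioner; a minimal sketch, assuming the hostnames and IPs above:
echo "10.1.0.5 k8s-master" | sudo tee -a /etc/hosts
echo "10.1.0.10 node-1" | sudo tee -a /etc/hosts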

Related

How to enable port forward with microk8s

I'm playing around with microk8s and I simply want to run an Apache server and navigate to its default page on the same machine. I'm on a Mac ARM M1:
microk8s kubectl run test-pod --image=ubuntu/apache2:2.4-20.04_beta --port=80
~ $ microk8s kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pod 1/1 Running 0 8m43s
then I try to enable the forward:
◼ ~ $ microk8s kubectl port-forward test-pod :80
Forwarding from 127.0.0.1:37551 -> 80
but:
◼ ~ $ wget http://localhost:37551
--2022-12-24 18:54:37-- http://localhost:37551/
Resolving localhost (localhost)... 127.0.0.1, ::1
Connecting to localhost (localhost)|127.0.0.1|:8080... failed: Connection refused.
Connecting to localhost (localhost)|::1|:8080... failed: Connection refused.
The logs look ok:
◼ ~ $ microk8s kubectl logs test-pod
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.254.96. Set the 'ServerName' directive globally to suppress this message
The dashboard proxy works fine and I can navigate to it:
◼ ~ $ microk8s dashboard-proxy
Checking if Dashboard is running.
Dashboard will be available at https://192.168.64.2:10443
Answering myself:
I should use the IP assigned to the Multipass guest machine. This is not Docker :)
For some reason I haven't figured out, as asked here, forwarding from the guest does not work properly on Mac. I should open a shell in the guest and forward from there; that way, it works. See the answer on the linked post.
Hope this will spare some time for future Mac users.
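For reference, the working sequence looks roughly like this, assuming the default multipass instance name microk8s-vm and the guest IP shown by the dashboard proxy above (the 8080 host port is just an example):
multipass shell microk8s-vm
microk8s kubectl port-forward test-pod 8080:80 --address 0.0.0.0
# then, from the macOS host:
curl http://192.168.64.2:8080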

Multipass VM not reachable via specific http ports from host

I'm running an Ubuntu VM with Multipass (hyperkit) to run microk8s. Within the VM everything checks out and is available with skaffold/kubectl port forwarding. For instance:
$ multipass list
Name State IPv4 Image
microk8s-vm Running 192.168.64.2 Ubuntu 20.04 LTS
10.0.1.1
172.17.0.1
10.1.254.64
Port forwarding service/my-app in namespace default, remote port 80 -> 127.0.0.1:4503
Within the VM: curl localhost:4503 ✅
From the host: curl 192.168.64.2:4503 🛑
I know the VM is reachable on port 80 because curl 192.168.64.2 returns the default nginx not-found page. FWIW I never installed nginx and the service doesn't seem to be running / I cannot turn it off.
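To see what is actually bound to port 80 inside the guest, something along these lines shows the owning process (assuming the instance name from multipass list):
multipass exec microk8s-vm -- sudo ss -tlnp | grep ':80'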
I've been at this for a day and I'm stumped. I even tried the VBox driver and manually configured a bridge adapter. I even created my own adapter...
$ multipass exec -- microk8s-vm sudo bash -c "cat > /etc/netplan/60-bridge.yaml" <<EOF
network:
  ethernets:
    enp0s8:                # this is the interface name from above
      dhcp4: true
      dhcp4-overrides:     # this is needed so the default gateway
        route-metric: 200  # remains with the first interface
  version: 2
EOF
$ multipass exec microk8s-vm sudo netplan apply
How can I reach this VM from the host?
You can't access your pod IP/port like this.
If you want to access your pod's port over the node's IP address, you need to define a Service of type NodePort and then use ipaddressOfNode:NodePort (a short sketch follows at the end of this answer):
curl http://ipaddressOfNode:NodePort
With port-forward, you must use localhost on your host system:
kubectl port-forward svc/myservice 8000:yourServicePort
then
curl http://localhost:8000
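A minimal NodePort sketch using the service from the question (my-app); switching its type assigns a node port in the 30000-32767 range, which is then reachable on the node's IP:
kubectl patch svc my-app -p '{"spec": {"type": "NodePort"}}'
kubectl get svc my-app     # note the assigned port, e.g. 80:31234/TCP
curl http://192.168.64.2:31234   # 31234 is a placeholder for the assigned NodePort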

Elasticsearch setup on GCloud VM fails

I wish to run my Elasticsearch remotely on a GCloud VM; it is configured to run at 127.0.0.1 on a specific port, 9200. How do I access this from a website outside this VM? If I change the network host to 0.0.0.0 in the yml file, even port 9200 becomes inaccessible. How do I overcome this problem?
Changed network.host: [_site_, _local_, _global_], where
_site_ = the internal IP given by the Google Cloud VM,
_local_ = 127.0.0.1,
_global_ = found using curl ifconfig.me.
Then opened a specific port (9200) and tried to connect with the global IP address.
curl to the global IP gives:
Failed to connect to (_global_ ip) port 9200: Connection refused
So set network.host: 0.0.0.0, then allow ports 9200 and 9201 and restart the Elasticsearch service. If you are using Ubuntu, run sudo service elasticsearch restart, then check by doing curl -XGET 'http://localhost:9200?pretty'. Let me know if you are still facing any issues.
Use the following configuration for elasticsearch.yml:
network.host: 0.0.0.0
action.auto_create_index: false
index.mapper.dynamic: false
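Allowing those ports on the GCP side usually also needs a VPC firewall rule; a sketch, with placeholder rule name and target tag:
gcloud compute firewall-rules create allow-elasticsearch \
  --allow=tcp:9200-9201 --source-ranges=0.0.0.0/0 --target-tags=es-node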
I solved this problem by going through the logs and finding out that the public IP address is re-mapped to the internal IP address, hence network.host can't be set to the external IP directly. The Elasticsearch yml config is as follows:
network.host: xx.xx.xxx.xx, set to the internal IP (given by Google),
http.cors.enabled: true,
http.cors.allow-origin: "*" (do not use * in production, it's a security issue),
discovery.type: single-node, in my case to make it work standalone and not in a cluster.
Now this sandboxed version can be accessed from outside the VM using the external IP address given by Google.
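Putting the settings described above into elasticsearch.yml form, assuming a package install with the config under /etc/elasticsearch (the internal IP is a placeholder):
sudo tee -a /etc/elasticsearch/elasticsearch.yml <<'EOF'
network.host: 10.128.0.2        # the internal IP assigned by Google, not the external one
http.cors.enabled: true
http.cors.allow-origin: "*"     # do not use * in production
discovery.type: single-node
EOF
sudo systemctl restart elasticsearch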

CentOS 7 firewalld refuses all incoming connections to my web server

I have a CentOS 7 VM built using Vagrant with a private IP address of 192.168.56.255.
I am running my Spring Boot application on that VM on port 8443. It supports HTTPS. My issue is that when I try to send HTTPS requests to the 192.168.56.255 web server via a curl command, I get
curl: (7) Couldn't connect to server
I have read many tutorials that explain how to configure the firewall in CentOS 7 (one is provided by DigitalOcean), but I still get the same issue.
When I type
sudo firewall-cmd --list-all-zones
I got
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: ssh dhcpv6-client https http mysql
ports: 8443/tcp 3306/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
As you can see, I enabled everything I need and more, but still no luck. I even shut down the firewall, but the connection is still refused from my host.
When I made the changes, I did reload my firewall:
sudo firewall-cmd --reload
So that is not the problem.
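For reference, rules like the ones listed above are normally added and persisted with commands along these lines:
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --permanent --zone=public --add-port=8443/tcp
sudo firewall-cmd --reload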
The problem was not with firewalld but with the IP address pre-configured using Vagrant.
The last octet of the IP address should not be 255, as in 192.168.56.255,
because that makes it the broadcast address of the subnet. So I solved it by changing it to 192.168.56.10.
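The corresponding line in the Vagrantfile, with the corrected address, would look something like:
config.vm.network "private_network", ip: "192.168.56.10"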

Can't connect to public IP for EC2 instance

I have an EC2 instance which is running with the following security groups:
HTTP - TCP - 80 - 0.0.0.0/0
Custom UDP Rule - UDP - 1194 - 0.0.0.0/0
SSH - TCP - 22 - 0.0.0.0/0
Custom TCP Rule - TCP - 943 - 0.0.0.0/0
HTTPS - TCP - 443 - 0.0.0.0/0
However, when I try to access http://{PUBLIC_IP} or https://{PUBLIC_IP} in the browser, I get a "{IP} refused to connect" error. I'm new to AWS. Am I missing something here? What should I do to debug?
One way to debug this particular class of problem is to use netcat in order to determine where the problem lies.
If you run netcat against port 80 on the public IP address of your instance and just get a hang (no output at all), then most likely your security group isn't allowing traffic through. Here is an example from an EC2 instance that is in a security group that doesn't allow port 80 traffic inbound:
% nc -v 55.35.300.45 80
<just hangs>
Whereas if the security group is changed to allow port 80, but the EC2 instance doesn't have any process listening on port 80, you'll get the following:
% nc -v 55.35.300.45 80
nc: connectx to 52.38.300.43 port 80 (tcp) failed: Connection refused
Given that your browser gave you a similar "connection refused", most likely the problem is that there is no web server running on your instance. You can verify this by ssh'ing into the instance and seeing if you can connect to port 80 there:
ssh ec2-user@55.35.300.45
% nc -v localhost 80
nc: connect to localhost port 80 (tcp) failed: Connection refused
If you get something like the above, you're definitely not running a webserver.
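Another quick check from inside the instance is to list the listening TCP sockets; if nothing shows up for port 80, no web server is bound to it:
sudo ss -tlnp | grep ':80'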
I'm not sure if it's too late to help, but I was stuck with a similar issue on my test server:
SG Inbound: ssh -> 22
HTTP -> 80
NACL: default allow/deny settings
but I still couldn't reach the server from my browser. Then I realized there was nothing running on the server that could serve the request, so I started the httpd server (web server) and it worked.
sudo yum -y install httpd
sudo service httpd start
This way you can test connectivity while you are playing with SGs and NACLs. Of course it's not the only way, just an example if you're figuring out your system's networking.
Have you installed a web server (nginx/apache) to serve your requests? If so, please share your config files, so that it will help to troubleshoot.
I think the reason is probably that you did not set up a web server for your EC2 instance, because if you try to access http://{PUBLIC_IP} or https://{PUBLIC_IP}, you need a server running in the background to serve the HTTP request, as @Niranj Rajasekaran said.
By the way, by simply pinging the {PUBLIC_IP} you can see whether your EC2 instance is reachable at all (note that this requires ICMP to be allowed in the security group). In a command prompt or terminal, type
ping {PUBLIC_IP}
In my case, the server was running but listening only on 127.0.0.1, so it refused connections from external hosts. To see if this is your situation, you can run
netstat -an | grep <port number>
If it says 127.0.0.1:<port number> instead of 0.0.0.0:<port number>, you have this problem.
Usually there's a flag or an argument in your server code somewhere to set the host to 0.0.0.0:
app.run(host='0.0.0.0') # flask example
However, in my case I had already set this, so I thought that couldn't possibly be the issue, which is how I ended up on this thread, which asks more generally about the problem. Unfortunately, I was using Docker and had set 0.0.0.0 in the container, but was mapping it explicitly to 127.0.0.1 on the host in the docker-compose port mapping:
ports:
- "127.0.0.1:<port number>:<port number>"
Changing that line to remove the host IP specification fixed the problem upon re-deploy:
ports:
- "<port number>:<port number>"
