How to enable remote access to Master's proxy - "Failed connect to <ip>:8080; Connection refused"

I am playing with Kubernetes on https://labs.play-with-k8s.com.
I tried to use kubectl proxy following the instructions on the Kubernetes website.
On the Master node (192.168.0.13) I ran: kubectl proxy --port=8080:
[node1 ~]$ kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
On the Worker node I ran curl -v http://192.168.0.13:8080 and it failed:
[node2 ~]$ curl -v http://192.168.0.13:8080
* About to connect() to 192.168.0.13 port 8080 (#0)
* Trying 192.168.0.13...
* Connection refused
* Failed connect to 192.168.0.13:8080; Connection refused
* Closing connection 0
curl: (7) Failed connect to 192.168.0.13:8080; Connection refused
Any idea why the connection is refused?

Starting to serve on 127.0.0.1:8080
As the message it emits on startup shows, kubectl proxy listens only on localhost (i.e. 127.0.0.1) unless you instruct it otherwise:
kubectl proxy --address=0.0.0.0 --accept-hosts='.*'
The --accept-hosts flag takes a regular expression for the hosts from which the proxy will accept connections (it is matched against the host of each incoming request), and .* is a regex that matches every string, including the empty one.
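A minimal sketch of the full fix on the Master, assuming the same IPs as in the question (note that binding to 0.0.0.0 with --accept-hosts='.*' exposes the proxy, and through it the API, to anyone who can reach the node, so only do this on a throwaway lab cluster):

[node1 ~]$ kubectl proxy --port=8080 --address=0.0.0.0 --accept-hosts='.*'
Starting to serve on 0.0.0.0:8080

From the Worker node, the same curl should now get an answer, e.g. from the API's version endpoint:

[node2 ~]$ curl http://192.168.0.13:8080/version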

Related

Docker issue during Gatling performance test

I have a Spring Boot application, and I run a performance test on it using Gatling.
The issue is that after a few requests where everything works OK, the server returns connection refused and no further requests work.
The Gatling log looks like this:
---- Requests ------------------------------------------------------------------
> Global (OK=14 KO=1001 )
> POST /template (OK=13 KO=938 )
> PUT /feedback (OK=1 KO=63 )
---- Errors --------------------------------------------------------------------
> j.n.ConnectException: Connection refused: no further information    577 (57,64%)
> j.i.IOException: Premature close                                    240 (23,98%)
> j.n.c.ClosedChannelException                                        184 (18,38%)
When I make a manual request using curl, it returns:
$ curl https://localhost:8087
curl: (7) Failed to connect to localhost port 8087: Connection refused
If I connect to docker and do the request:
$ docker exec -it web /bin/bash
root@794f9e808f14:/# curl https://localhost:8443
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
The SSL handshake failed, as expected, but this means the server is up and running.
The port is mapped in docker:
$ docker port web
8443/tcp -> 0.0.0.0:8087
8443/tcp -> :::8087
After a restart, the same thing happens again.
I'm running Docker on Ubuntu under WSL; not sure if this matters much. What can I do to make this connection more stable?
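One way to narrow this down (a diagnostic sketch, not a fix, assuming the container is named web as above): watch whether the published port stays bound on the host while Gatling runs. If the 0.0.0.0:8087 listener disappears when the errors start, the port mapping (docker-proxy/WSL networking) is the problem rather than the application:

$ watch -n1 'docker port web; ss -ltn | grep 8087'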

How to resolve microk8s port forwarding error on Vagrant VMs?

I have a 2-node microk8s cluster running on 2 Vagrant VMs (Ubuntu 20.04). I'm trying to port-forward 443 so I can connect to the dashboard from the host PC over the private VM network.
sudo microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
I receive the following error:
error: error upgrading connection: error dialing backend: dial tcp: lookup node-1: Temporary failure in name resolution
I also noticed that the internal IPs for the nodes are not correct:
the master node is provisioned with an IP of 10.0.1.5 and the worker node with 10.0.1.10, but in the listing from kubectl both nodes have the same IP of 10.0.2.15.
I'm not sure how to resolve this issue.
Note: I am able to access the dashboard login screen over HTTP on port 8001 at 10.0.1.5, but submitting the token does not do anything, as per the K8s security design:
Logging in is only available when accessing Dashboard over HTTPS or when domain is either localhost
or 127.0.0.1. It's done this way for security reasons.
I was able to get past this issue by adding the nodes to the /etc/hosts file on each node:
10.1.0.10 node-1
10.1.0.5 k8s-master
Then I was able to restart and issue the port-forward command:
sudo microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443 --address 0.0.0.0
Forwarding from 0.0.0.0:10443 -> 8443
Then I was able to access the K8s dashboard via the token auth method.
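On the duplicated-IP point: 10.0.2.15 is the NAT address VirtualBox assigns to every Vagrant VM's first interface, which is why both kubelets registered with it; the /etc/hosts entries work around this by letting the API server resolve each node name to its private-network address when it dials the kubelet (that lookup is exactly what failed in the error above). You can inspect what the cluster recorded per node, INTERNAL-IP included, with:

sudo microk8s kubectl get nodes -o wide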

Docker with "--network host" does not allow to access to port of application

I have a Spring Boot app inside Docker and I run it like this:
docker run --rm --network host --name myapp1 myapp
But when I try to access it from the host machine, it fails:
my_machine:~ root$ curl localhost:8081/someendpoint -v
* Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 8081 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connection failed
* connect to 127.0.0.1 port 8081 failed: Connection refused
* Failed to connect to localhost port 8081: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8081: Connection refused
It is not clear to me why it is not working.
It works fine from inside the container.
Also, myapp has no problem connecting to external Docker images/the internet.
Please help.
From https://docs.docker.com/network/host/
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
Since you said you were using macOS, --network host will not work properly. I believe the underlying reason is that, outside of Linux, a virtual machine is used to host the containers. The host whose network the container shares is the VM's, not the physical host's.
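A minimal sketch of the usual workaround on Docker Desktop: skip host networking and publish the port instead, assuming the app listens on 8081 inside the container as the curl attempt suggests:

docker run --rm -p 8081:8081 --name myapp1 myapp
curl localhost:8081/someendpoint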

Error querying Consul agent: Get http://127.0.0.1:8500/v1/kv/vault?recurse=: dial tcp 127.0.0.1:8500: connect: connection refused

~]$ /apps/bin/consul/consul kv export vault
Error querying Consul agent: Get http://127.0.0.1:8500/v1/kv/vault?recurse=: dial tcp 127.0.0.1:8500: connect: connection refused
I'm trying to export the entire vault/ folder from Consul and am seeing the error above.
You have to change the http(s) address of the Consul agent to which you're trying to connect. The default one is http://127.0.0.1:8500, and it doesn't work, as you can see in the error.
To do this, set the following environment variable using the export command (note there are no spaces around = in shell):
export CONSUL_HTTP_ADDR=<http/s_address_to_consul_agent>
If connecting via TLS, also set:
export CONSUL_CACERT=<path_to_cert_file>
Alternatively, you can pass the same values as flags to the consul kv export command:
consul kv export -http-addr=<http/s_address_to_consul_agent> -ca-file=<path_to_cert_file>
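For example, a sketch against a hypothetical agent address and CA path (both placeholders):

export CONSUL_HTTP_ADDR=https://consul.example.com:8501   # hypothetical agent address
export CONSUL_CACERT=/etc/consul.d/ca.pem                 # hypothetical CA bundle path
/apps/bin/consul/consul kv export vault > vault-kv.json   # kv export writes JSON to stdout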

Can't connect to public IP for EC2 instance

I have an EC2 instance which is running with the following security groups:
HTTP - TCP - 80 - 0.0.0.0/0
Custom UDP Rule - UDP - 1194 - 0.0.0.0/0
SSH - TCP - 22 - 0.0.0.0/0
Custom TCP Rule - TCP - 943 - 0.0.0.0/0
HTTPS - TCP - 443 - 0.0.0.0/0
However, when I try to access http://{PUBLIC_IP} or https://{PUBLIC_IP} in the browser, I get a "{IP} refused to connect" error. I'm new to AWS. Am I missing something here? What should I do to debug?
One way to debug this particular class of problem is to use netcat in order to determine where the problem lies.
If you run netcat against port 80 on the public IP address of your instance and just get a hang (no output at all), then most likely your security group isn't allowing traffic through. Here is an example from an EC2 instance that is in a security group that doesn't allow port 80 traffic inbound:
% nc -v 55.35.300.45 80
<just hangs>
Whereas if the security group is changed to allow port 80, but the EC2 instance doesn't have any process listening on port 80, you'll get the following:
% nc -v 55.35.300.45 80
nc: connectx to 55.35.300.45 port 80 (tcp) failed: Connection refused
Given that your browser gave you a similar "connection refused", most likely the problem is that there is no web server running on your instance. You can verify this by ssh'ing into the instance and seeing if you can connect to port 80 there:
ssh ec2-user@55.35.300.45
% nc -v localhost 80
nc: connect to localhost port 80 (tcp) failed: Connection refused
If you get something like the above, you're definitely not running a webserver.
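To go one step further and see what (if anything) is listening, you can check from inside the instance; ss ships with most modern distros, including Amazon Linux:

% sudo ss -ltnp

(-l listening sockets, -t TCP, -n numeric ports, -p owning process.) A running web server would show a LISTEN entry on 0.0.0.0:80 with its process name.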
I'm not sure if it's too late to help, but I was stuck with a similar issue on my test server:
SG Inbound: SSH -> 22
HTTP -> 80
NACL: default allow/deny settings
I still couldn't reach the server from my browser; then I realized there was nothing running on the server that could serve the request, so I started an httpd server (web server) and it worked.
sudo yum -y install httpd
sudo service httpd start
This way you can test connectivity while you are playing with SGs and NACLs. Of course it's not the only way, just an example if you're figuring out your system's networking.
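Once httpd is up, a quick local check before retrying from the browser:

curl -I http://localhost/

Any HTTP response header (the Apache test page, by default) confirms the web server is serving; then retest http://{PUBLIC_IP} from outside.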
Have you installed a web server (nginx/Apache) to serve your requests? If so, please share your config files, so that it will help to troubleshoot.
I think the reason is probably that you did not set up a web server on your EC2 instance; if you try to access http://{PUBLIC_IP} or https://{PUBLIC_IP}, you need a server process there to serve the HTTP request, as @Niranj Rajasekaran said.
By the way, simply pinging {PUBLIC_IP} lets you check whether your EC2 instance is reachable at all (note that the default security group rules don't allow ICMP, so add a rule for it if ping times out).
In command prompt or terminal, type
ping {PUBLIC_IP}
In my case, the server was running but listening only on 127.0.0.1, so it refused connections from external hosts. To see if this is your situation, you can run:
netstat -an | grep <port number>
If it says 127.0.0.1:<port number> instead of 0.0.0.0:<port number>, you have this problem.
Usually there's a flag or an argument in your server code somewhere to set the host to 0.0.0.0:
app.run(host='0.0.0.0') # flask example
However, in my case I had already set this, so I thought that couldn't possibly be the issue; that's how I ended up on this thread, which asks about the problem more generally. It turned out I was using Docker and had set 0.0.0.0 in the container, but was explicitly mapping it to 127.0.0.1 on the host in the docker-compose port mapping:
ports:
- "127.0.0.1:<port number>:<port number>"
Changing that line to remove the host IP specification fixed the problem upon re-deploy:
ports:
- "<port number>:<port number>"
