Google Chrome can find WSL2 Docker containers, while WSL2 Ubuntu can't?

My setup is as follows:
Windows 10 host
WSL2
Ubuntu 20.04 LTS
Nginx Proxy Manager container
Some nginx containers
I have added a number of proxy hosts to Nginx Proxy Manager. When I visit these URLs (e.g. api.localhost) in Google Chrome (on the Windows 10 host), I see the correct pages.
When I ping one of these domains from the Windows 10 host, it cannot find the host.
When I run something like curl api.localhost in the WSL2 Ubuntu shell, it also cannot find the host.
Why can Chrome find the host at all?
Do I need to add hosts-file entries for all my domains, both in WSL2 and on Windows, or is there an easier way?
Windows hosts file:
# Added by Microsoft Windows
127.0.0.1 localhost
::1 localhost
# Added by Docker Desktop
192.168.178.213 host.docker.internal
192.168.178.213 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
And...
Ubuntu hosts file:
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 DESKTOP-TL66TDU.localdomain DESKTOP-TL66TDU
127.0.0.1 localhost
::1 localhost
192.168.178.213 host.docker.internal
192.168.178.213 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
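As for why Chrome can resolve these names at all: Chrome (like most modern browsers) treats names ending in .localhost specially and maps them to the loopback address itself, without asking the OS resolver, which is likely why the pages load in the browser while ping and curl fail. Outside the browser the name has to be resolvable some other way. A minimal sketch of the two usual options, assuming the proxy's port 80 is reachable on 127.0.0.1 from wherever you run curl:
# Option 1: point curl at the proxy directly, bypassing name resolution:
curl --resolve api.localhost:80:127.0.0.1 http://api.localhost
# Option 2: add an entry per proxied domain to the hosts files
# (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts in WSL2):
# 127.0.0.1 api.localhost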

Related

Browsers losing access to localhost after a few minutes in Windows

I've been running a local docker-compose stack for at least 2 years now, on Windows, through WSL2/Ubuntu.
It had been working fine up until a couple of weeks ago, when I noticed that after a few minutes of the stack being up and accessible through the browser, all my localhost:PORT URLs start showing an ERR_CONNECTION_REFUSED error in Chrome (as well as in other browsers).
All my services are still available through curl on WSL2/Ubuntu, but not through curl in PowerShell on Windows.
I can also expose them with ngrok to a public address and they work fine.
I have spent multiple hours trying to figure out what could be happening. I tried a few things, like disabling the Windows firewall, to no avail; the ports are not reachable on 127.0.0.1 or 0.0.0.0 either.
my /etc/hosts reads:
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 MACHINENAME. MACHINENAME
10.0.110.184 host.docker.internal
10.0.110.184 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
I was wondering if anyone had a clue about what could be happening in this situation.
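Not a full answer, but a sketch of checks that narrow down where the localhost forwarding breaks, assuming one of the services is published on port 8080 (substitute the real port):
# Inside WSL2: confirm the service itself is still listening and answering
ss -tlnp | grep 8080
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080
# On Windows (PowerShell): check whether anything is listening on that port on the Windows side
#   Get-NetTCPConnection -LocalPort 8080 -State Listen
# If WSL2 answers but Windows shows no listener, the WSL2 localhost port forwarding has dropped;
# restarting WSL from an elevated PowerShell usually restores it:
#   wsl --shutdown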

Pod unable to curl localhost

There is an Elasticsearch container running on localhost on port 9200, but from a pod on the same host I'm unable to curl localhost on port 9200.
[root@jenkins ~]# netstat -tupln | grep 9200
tcp6 0 0 :::9200 :::* LISTEN 4148/docker-proxy
[jenkins@kb-s-9xttg agent]$ curl http://localhost:9200
curl: (7) Failed to connect to ::1: Network is unreachable
/etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
192.168.255.23 kbs-9xttg
I am able to curl public_host_ip:9200
The Elasticsearch container is not managed by Kubernetes but is running on the same host.
Why is the pod unable to talk to localhost:9200 or 127.0.0.1:9200 ?
Summary from the comments:
If you're talking to localhost from within a Pod, you're only talking to the containers inside that Pod.
Containers in different Pods have distinct IP addresses and cannot communicate via localhost. They might, however, be exposing their own port on your local host network, similar to what your Docker container is doing (which is why you can reach it from your local node using localhost).
Inside your cluster you can use the Pod IPs, but if you want to talk to your host you need to use host networking for your Pod:
spec:
  hostNetwork: true
or use the external IP of your host.
More on Kubernetes networking can be found in the official docs.
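As a quick way to try host networking without writing a full manifest, here is a hedged sketch using kubectl run with an inline override; the pod name curl-host and the curlimages/curl image are illustrative choices, not anything from the question:
# Run a throwaway pod in the node's network namespace and curl the host's port 9200:
kubectl run curl-host --image=curlimages/curl --restart=Never \
  --overrides='{"apiVersion": "v1", "spec": {"hostNetwork": true}}' \
  --command -- curl -s http://localhost:9200
kubectl logs curl-host   # should print the Elasticsearch banner if the host port is reachable
kubectl delete pod curl-host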

Heartbeating to <hostname>:7182 failed during Cloudera Installation on 3 node cluster

I am creating a 3-node Cloudera cluster using Cloudera Manager. I followed the Cloudera document:
https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_install_path_b.html#concept_wkg_kpb_pn
After logging in to Cloudera Manager and entering the hostnames of the 3 nodes, when I try to install it gives the message below:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager Server (check firewall rules).
Ensure that ports 9000 and 9001 are not in use on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added. (Some of the logs can be found in the installation details).
If Use TLS Encryption for Agents is enabled in Cloudera Manager (Administration -> Settings -> Security), ensure that /etc/cloudera-scm-agent/config.ini has use_tls=1 on the host being added. Restart the corresponding agent and click the Retry link here.
I checked the agent logs and they contain the error message: Heartbeating to hostname:7182 failed,
where hostname is the external IP of my node.
I checked that the inbound port 7182 is open and also verified that use_tls is set to 1.
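For reference, a quick way to confirm that the port really is reachable from the host being added looks roughly like this (assuming nc is installed; cm-server stands in for the Cloudera Manager server's address):
# Test TCP connectivity from the new host to the Cloudera Manager server on port 7182:
nc -vz cm-server 7182
# And check which address the agent is actually pointed at:
grep server_host /etc/cloudera-scm-agent/config.ini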
I checked the /etc/hosts and it has the below entries:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Please advise whether the /etc/hosts file has to be changed, and what should I replace the content with?
Resolution: The installation stopped and I restarted it all over again. This time I did two things:
1) Disabled the firewall by running:
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
2) Gave the internal IP instead of the external IP when adding hosts.
It worked fine this time and gave no errors.
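For illustration, "giving the internal IP instead of the external IP" comes down to /etc/hosts entries like the following on every node; the 10.0.0.x addresses and node names here are hypothetical placeholders:
# /etc/hosts on each cluster node: map every node's FQDN to its internal IP
10.0.0.11 node1.cluster.internal node1
10.0.0.12 node2.cluster.internal node2
10.0.0.13 node3.cluster.internal node3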

Installing Cloudera Manager on Ubuntu 14.04/64-bit

I am installing Cloudera Manager on my system (Ubuntu 14.04, 64-bit).
At the final step, before finishing the installation, the validation check reported the errors below.
ERROR 1
Individual hosts resolved their own hostnames correctly.
Host localhost expected to have name localhost but resolved (InetAddress.getLocalHost().getHostName()) itself to arul-pc.
ERROR 2
The following errors were found while checking /etc/hosts...
The hostname localhost is not the first match for address 127.0.0.1
in /etc/hosts on localhost. Instead, arul-pc is the first match. The
FQDN must be the first entry in /etc/hosts for the corresponding IP.
In /etc/hosts on localhost, the IP 127.0.0.1 is present multiple times. A given IP should only be listed once.
How do I solve these 2 errors?
Note (info):
In my /etc/hosts:
127.0.0.1 localhost
127.0.0.1 arul-pc
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Note 2 (my try):
I tried removing arul-pc from /etc/hosts, like this:
127.0.0.1 localhost
#127.0.0.1 arul-pc
After saving and running again, Error 2 was cleared, but Error 1 became:
Individual hosts resolved their own hostnames correctly.
Host localhost failed to execute InetAddress.getLocalHost() with error: arul-pc: arul-pc. This typically means that the hostname could not be resolved.
It picks up the value from the hostname -f command. You need to update /etc/sysconfig/network with the appropriate hostname.
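For illustration, a layout that avoids the duplicate-127.0.0.1 complaint is sketched below, keeping the arul-pc name from the question; note that on Ubuntu the hostname itself normally lives in /etc/hostname:
# /etc/hosts: a single loopback line, plus the machine name on 127.0.1.1 (Debian/Ubuntu convention)
127.0.0.1 localhost
127.0.1.1 arul-pc
# /etc/hostname: should contain the same name
arul-pc
# then verify what gets reported
hostname -f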

How to setup Cloudera Hadoop to run on Ubuntu localhost?

I am trying to run all Hadoop servers on a single Ubuntu localhost. All ports are open, and my /etc/hosts file is:
127.0.0.1 frigate frigate.domain.local localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
When trying to install the cluster, Cloudera Manager fails with the following messages:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager server (check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added (some of the logs can be found in the installation details).
I run my Ubuntu 12.04 node from home, connected to my provider by Wi-Fi/dial-up modem. What configuration is missing?
Add your PC's IP to the /etc/hosts file, like:
# IP pc_name
192.168.3.4 my_pc
It will solve the issue.
This config is OK for me; hope it's helpful:
127.0.0.1 localhost.localdomain localhost
https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!msg/cdh-dev/9rczbLkAxlg/lF882T7ZyNoJ
My PC: Ubuntu 12.04.2, 64-bit VM.
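As a quick sanity check after editing /etc/hosts, the following commands show the name and address the Cloudera Manager host inspector is likely to see (standard tools only, nothing Cloudera-specific assumed):
hostname -f                                          # fully qualified name the machine reports
getent hosts "$(hostname -f)"                        # the hosts/DNS entry that name resolves to
python -c 'import socket; print(socket.getfqdn())'   # roughly what library-level lookups will return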
