There is an Elasticsearch container running on localhost on port 9200, but from a pod on the same host I'm unable to curl localhost:9200.
[root@jenkins ~]# netstat -tupln | grep 9200
tcp6 0 0 :::9200 :::* LISTEN 4148/docker-proxy
[jenkins@kb-s-9xttg agent]$ curl http://localhost:9200
curl: (7) Failed to connect to ::1: Network is unreachable
The pod's /etc/hosts:
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
192.168.255.23 kbs-9xttg
I am able to curl public_host_ip:9200
The Elasticsearch container is not managed by Kubernetes, but it is running on the same host.
Why is the pod unable to talk to localhost:9200 or 127.0.0.1:9200?
Summary from the comments:
If you're talking to localhost from within a Pod, you're only talking to the containers inside that Pod.
Containers in different Pods have distinct IP addresses and cannot communicate using localhost. They might, however, be exposing their own port on your local host network, similar to what your Docker container is doing (which is why you can connect from your local node using localhost).
Inside the cluster you can use the Pod IPs, but if you want to talk to your host you need to use host networking for your Pod:
spec:
  hostNetwork: true
or the external IP of your host.
There is more on Kubernetes networking in the official documentation.
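As a minimal sketch (the pod and container names here are placeholders, not from the thread), a Pod that shares the host's network namespace could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: curl-host
spec:
  hostNetwork: true                     # share the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet    # keep cluster DNS resolution working alongside host networking
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "3600"]
With that in place, kubectl exec curl-host -- curl http://localhost:9200 should reach the port that docker-proxy publishes on the node.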
Related
My setup is as follows:
Windows 10 host
WSL2
Ubuntu 20.04 LTS
Nginx Proxy Manager container
Some nginx containers
I have added a bunch of proxies to Nginx Proxy Manager. When I visit these URLs (e.g. api.localhost) with Google Chrome on the Windows 10 host, I can see the correct pages.
When I ping one of the domains from the Windows 10 host, it cannot find the host.
When I use something like curl api.localhost in the WSL2 Ubuntu shell, it also cannot find the host.
Why can Chrome find the host at all?
Do I need to add hosts entries for all my domains, both on WSL2 and Windows? Or is there an easier way?
Windows hosts file:
# Added by Microsoft Windows
127.0.0.1 localhost
::1 localhost
# Added by Docker Desktop
192.168.178.213 host.docker.internal
192.168.178.213 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
And the Ubuntu (WSL2) hosts file:
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 DESKTOP-TL66TDU.localdomain DESKTOP-TL66TDU
127.0.0.1 localhost
::1 localhost
192.168.178.213 host.docker.internal
192.168.178.213 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
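A likely explanation (my assumption, not stated in the thread): Chromium-based browsers resolve any *.localhost name to the loopback address internally, bypassing the OS resolver, while ping and curl go through the hosts file and DNS, where api.localhost is not defined. A sketch of the entries you could add to both the Windows hosts file and the WSL2 /etc/hosts (api.localhost is from the question; add one line per proxied domain):
127.0.0.1 api.localhost
::1       api.localhost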
I can't connect to Elasticsearch on my DigitalOcean droplet using my local machine's IP address.
I got the IP Address by: Terminal > ipconfig getifaddr en0
With that result, let's say: 100.888.777.99
I logged into my droplet by running: ssh username@111.222.3.444
Updated my UFW rules by running: sudo ufw allow 9200 from 100.888.777.99
From my local machine I ran: curl -X GET 'http://111.222.3.444:9200'
And received: curl: (7) Failed to connect to 111.222.3.444 port 9200: Operation timed out
What am I doing wrong?
Things I've tried:
Changing the network.host variable in elasticsearch/elasticsearch.yml
network.host: 0.0.0.0 (this is also a security risk, since all IP addresses are allowed)
Restarting the server
sudo /etc/init.d/elasticsearch restart
Adding more variables to elasticsearch/elasticsearch.yml:
transport.host: localhost
transport.tcp.port: 9300
http.port: 9200
I found that when I changed the UFW rules to allow all connections to port 9200, I was able to connect to Elasticsearch from my local machine, but without that it would not connect.
sudo ufw allow 9200
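To see which rules are active and remove one, the standard UFW commands are (shown as a sketch):
sudo ufw status numbered   # list rules with their indices
sudo ufw delete 1          # delete a rule by index (1 is just an example)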
After some deep diving I found that the IP address returned by Terminal wasn't the correct one to use. I had to use the public IP address, which I got from https://www.whatismyip.com/; you can also get it with:
Terminal > curl ifconfig.me
So when I removed the old UFW rule (9200 ALLOW IN 100.888.777.99) and used the public IP address instead (sudo ufw allow 9200 from Public_IP_Address), it connected.
I'm still not sure why my machine's IP Address doesn't work though...
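A plausible reason (my inference, not confirmed in the thread): ipconfig getifaddr en0 returns the machine's private LAN address, and a home router NATs outgoing traffic, so the droplet only ever sees the router's public address; the private address never appears in packets reaching UFW. A sketch of the rule swap in canonical ufw form, keeping the placeholder addresses from the question:
# on the local machine: find the public address
curl -s ifconfig.me
# on the droplet: replace the old rule with one for the public address
sudo ufw delete allow from 100.888.777.99 to any port 9200
sudo ufw allow from Public_IP_Address to any port 9200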
I'm using macOS 10.13.6. I just installed Elasticsearch via Homebrew and launched it ...
brew services start elasticsearch
Service `elasticsearch` already started, use `brew services restart elasticsearch` to restart.
In my /usr/local/etc/elasticsearch/elasticsearch.yml configuration file, I have
http.port: 9200
However, when I attempt to see if that port is available, I get a connection refused ...
localhost:tmp davea$ telnet localhost 9200
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
What port is Elasticsearch getting launched on, and how can I change it?
That may be a problem with the Mac firewall. Try editing the elasticsearch.yml file in the elasticsearch/config folder and changing network.host from localhost to 127.0.0.1:
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
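A few diagnostics that may help narrow this down (the log path is the usual one for a Homebrew install and may differ on your machine):
lsof -iTCP:9200 -sTCP:LISTEN                                # is anything listening on 9200 at all?
brew services restart elasticsearch                         # restart the service after config changes
tail /usr/local/var/log/elasticsearch/elasticsearch_*.log   # look for the bound address/port or startup errors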
I am creating a 3-node Cloudera cluster using Cloudera Manager. I followed the Cloudera documentation:
https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_install_path_b.html#concept_wkg_kpb_pn
After logging in to Cloudera Manager and entering the hostnames of the 3 nodes, when I try to install I get the message below:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager Server (check firewall rules).
Ensure that ports 9000 and 9001 are not in use on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added. (Some of the logs can be found in the installation details).
If Use TLS Encryption for Agents is enabled in Cloudera Manager (Administration -> Settings -> Security), ensure that /etc/cloudera-scm-agent/config.ini has use_tls=1 on the host being added. Restart the corresponding agent and click the Retry link here.
I checked the agent logs and they contain the error message Heartbeating to hostname:7182 failed during the Cloudera installation on the 3-node cluster,
where hostname is the external IP of my node.
I checked that the inbound port 7182 is open and also verified that tls is set to 1.
I checked the /etc/hosts and it has the below entries:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Please advise: does the /etc/hosts file have to be changed, and what should I replace the content with?
Resolution: the installation stopped and I restarted it all over again. I did two things:
1) Disabled the firewall:
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
2) Gave the internal IP instead of the external IP while adding the hosts.
It worked fine this time and gave no errors.
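For reference, a hedged sketch of /etc/hosts entries using internal addresses (all IPs and hostnames below are hypothetical placeholders; the same file goes on all three nodes):
127.0.0.1   localhost
10.0.0.11   cm-node1.cluster.internal   cm-node1
10.0.0.12   cm-node2.cluster.internal   cm-node2
10.0.0.13   cm-node3.cluster.internal   cm-node3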
I am trying to run all the Hadoop servers on a single Ubuntu host. All ports are open and my /etc/hosts file is:
127.0.0.1 frigate frigate.domain.local localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
When trying to install the cluster, Cloudera Manager fails with the following messages:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager server (check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added (some of the logs can be found in the installation details).
I run my Ubuntu 12.04 node from home, connected to my provider by a Wi-Fi/dial-up modem. What configuration is missing?
Add your PC's IP to the /etc/hosts file, like:
# IP pc_name
192.168.3.4 my_pc
It will solve the issue.
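To check that the entry took effect, you can verify that the hostname the agent reports resolves to the expected address (standard commands, shown as a sketch):
hostname -f                      # fully qualified hostname the agent will report
getent hosts "$(hostname -f)"    # the address it resolves to via /etc/hosts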
This config is OK for me; hope it's helpful:
127.0.0.1 localhost.localdomain localhost
https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!msg/cdh-dev/9rczbLkAxlg/lF882T7ZyNoJ
My PC: Ubuntu 12.04.2 64-bit VM.