Hadoop: error connecting to server, ipc.HBaseRPC: 60020

"ipc.hbaserpc problem connecting to server 60020"
I have been using the Hadoop framework with one master device and several slave devices. My framework was working fine, but this morning I suddenly found that I am getting this error and Hadoop is not working.
I checked the IP of the master; it hasn't changed.
netstat -nlpt | grep 60020
tcp6 0 0 172.17.13.29:60020 :::* LISTEN 2766/java
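One useful sanity check here is that the address the RegionServer is bound to matches what clients resolve for "master". A minimal sketch, using the netstat line from the question as sample input (the parsing is illustrative; on a live cluster you would feed it real netstat output):

```shell
# Extract the bound IP from the 4th column of a netstat line
# (the sample line is copied from the question above).
line='tcp6 0 0 172.17.13.29:60020 :::* LISTEN 2766/java'
bound_ip=$(echo "$line" | awk '{split($4, a, ":"); print a[1]}')
echo "$bound_ip"   # 172.17.13.29
# On the slave, compare this against what the name resolves to:
#   getent hosts master
```

If the two addresses disagree, clients will try to connect to a socket nobody is listening on, which produces exactly this kind of "problem connecting to server" message.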
vi /etc/hosts
127.0.0.1 localhost
172.17.13.29 master
172.17.13.18 slave
127.0.1.1 bt.foo.org bt
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
vi /root/hbase/conf/regionservers
master
slave
Please guide me in the right direction as to where the problem might lie.
Here is the Complete Error Message as I see it:
root@master:~/hbase/bin# ./hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.6, r1456690, Thu Mar 14 16:32:05 PDT 2013
hbase(main):001:0> get 'profile', 'pranshu'
COLUMN CELL
13/04/22 10:05:04 INFO ipc.HBaseRPC: Problem connecting to server: master/172.17.13.29:60020
13/04/22 10:06:06 INFO ipc.HBaseRPC: Problem connecting to server: master/172.17.13.29:60020

Comment out the line "127.0.1.1 bt.foo.org bt" in the /etc/hosts file on all the machines and give it a retry. And it's always better to provide the complete error log.
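A quick sketch of that edit, run here against a throwaway copy rather than the real /etc/hosts (the file contents are reconstructed from the question):

```shell
# Work on a temp copy for illustration; on the real machines you would
# edit /etc/hosts directly (with a backup first).
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 bt.foo.org bt\n172.17.13.29 master\n' > "$hosts"
# Prefix the 127.0.1.1 entry with '#' so it no longer shadows the hostname
sed -i 's/^127\.0\.1\.1/# 127.0.1.1/' "$hosts"
grep '^#' "$hosts"   # -> "# 127.0.1.1 bt.foo.org bt"
```

The reason this matters: with the 127.0.1.1 line present, a node can resolve its own hostname to a loopback address, advertise that to the cluster, and leave remote peers trying to connect to 127.0.1.1 instead of the real interface.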

Related

Browsers losing access to localhost after a few minutes in Windows

I've been running a local docker-compose stack for at least 2 years now, on Windows, through WSL2/Ubuntu.
It had been working fine until a couple of weeks ago, when I noticed that after a few minutes of the stack being up and accessible through the browser, all my localhost:PORT URLs start showing an ERR_CONNECTION_REFUSED error in Chrome (as well as other browsers).
All my services are still available through curl on WSL2/Ubuntu, but not through curl in PowerShell.
I can also ngrok them to a public IP address and they work fine.
I have spent multiple hours trying to figure out what could be happening. I tried a few things, like disabling the Windows firewall, to no avail; those ports are not reachable on 127.0.0.1 or 0.0.0.0 either.
my /etc/hosts reads:
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 MACHINENAME. MACHINENAME
10.0.110.184 host.docker.internal
10.0.110.184 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
I was wondering if anyone had a clue about what could be happening in this situation.
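Since the services stay reachable from inside WSL2 but stop being reachable from Windows, the WSL2 localhost-forwarding layer is a likely suspect. The relevant setting, assuming the standard WSL2 backend, lives in the Windows-side config file:

```
# %UserProfile%\.wslconfig on the Windows side
[wsl2]
localhostForwarding=true
```

After editing, running `wsl --shutdown` from PowerShell and restarting the distro re-establishes the forwarding; restarting WSL this way is a common workaround when localhost:PORT access silently breaks after the VM has been up for a while.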

Google Chrome can find WSL2 Docker containers, while WSL2 Ubuntu can't?

My setup is as follows:
Windows 10 host
WSL2
Ubuntu 20.04 LTS
Nginx Proxy Manager container
Some nginx containers
I have added a bunch of proxies to Nginx Proxy Manager. When I visit these URLs (e.g. api.localhost) with Google Chrome (on the Windows 10 host), I can see the correct pages.
When I ping one of the URL domains from the Windows 10 host, it cannot find the host.
When I use something like curl api.localhost in the WSL2 Ubuntu shell, it also cannot find the host.
Why can Chrome find the host at all?
Do I need to add hosts-file entries for all my domains, both on WSL2 and Windows? Or is there an easier way?
Windows host files:
# Added by Microsoft Windows
127.0.0.1 localhost
::1 localhost
# Added by Docker Desktop
192.168.178.213 host.docker.internal
192.168.178.213 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
And...
Ubuntu host file:
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 DESKTOP-TL66TDU.localdomain DESKTOP-TL66TDU
127.0.0.1 localhost
::1 localhost
192.168.178.213 host.docker.internal
192.168.178.213 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
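As for why Chrome alone resolves these names: Chrome treats *.localhost names as loopback internally, without consulting the OS resolver, whereas ping and curl do go through the resolver. If that is what is happening here, the other tools would need explicit hosts entries, along these lines (the api.localhost entry is just the example domain from the question):

```
# Windows: C:\Windows\System32\drivers\etc\hosts
# WSL2:    /etc/hosts (set generateHosts = false first, or the entry is overwritten)
127.0.0.1 api.localhost
```

One such line per proxied domain, on whichever side (Windows, WSL2, or both) the tools you use run.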

Heartbeating to <hostname>:7182 failed during Cloudera Installation on 3 node cluster

I am creating a 3-node Cloudera cluster using Cloudera Manager. I followed the Cloudera document:
https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_install_path_b.html#concept_wkg_kpb_pn
After login to cloudera manager and entering the hostnames of the 3 nodes, when I try to install it gives the below message:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager Server (check firewall rules).
Ensure that ports 9000 and 9001 are not in use on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added. (Some of the logs can be found in the installation details).
If Use TLS Encryption for Agents is enabled in Cloudera Manager (Administration -> Settings -> Security), ensure that /etc/cloudera-scm-agent/config.ini has use_tls=1 on the host being added. Restart the corresponding agent and click the Retry link here.
I checked the agent logs and they have the error message: Heartbeating to hostname:7182 failed,
where hostname is the external IP of my node.
I checked that the inbound port 7182 is open and also verified that tls is set to 1.
I checked the /etc/hosts and it has the below entries:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Please advise whether the /etc/hosts file has to be changed, and what I should replace the content with.
Resolution: The installation stopped, and when I restarted it all over again I did two things:
1) Disabled the firewall:
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
2) Gave the internal IP instead of the external IP while adding hosts.
It worked fine this time and gave no errors.
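For reference, a hosts layout along the lines of what worked, using internal addresses; the IPs and names below are made up for illustration, not taken from the cluster above:

```
# /etc/hosts on each of the 3 nodes (hypothetical internal IPs)
127.0.0.1   localhost
10.0.0.11   cm-node1.cluster.internal   cm-node1
10.0.0.12   cm-node2.cluster.internal   cm-node2
10.0.0.13   cm-node3.cluster.internal   cm-node3
```

The agents heartbeat to the Cloudera Manager server over these names, so every node needs to resolve every other node to an address that is actually routable between them, which is why switching from external to internal IPs helped here.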

Installing cloudera manager in ubuntu 14.04/64b

I am installing Cloudera Manager on my system (Ubuntu 14.04, 64-bit).
At the final step of the installation, before finishing, I got some ERRORs in validation, as shown below.
The errors on that page are:
ERROR 1
Individual hosts resolved their own hostnames correctly.
Host localhost expected to have name localhost but resolved (InetAddress.getLocalHost().getHostName()) itself to arul-pc.
ERROR 2
The following errors were found while checking /etc/hosts...
The hostname localhost is not the first match for address 127.0.0.1
in /etc/hosts on localhost. Instead, arul-pc is the first match. The
FQDN must be the first entry in /etc/hosts for the corresponding IP.
In /etc/hosts on localhost, the IP 127.0.0.1 is present multiple times. A given IP should only be listed once.
How to solve these 2 errors ?
Note (info):
In my /etc/hosts:
127.0.0.1 localhost
127.0.0.1 arul-pc
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
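The first-match rule Cloudera Manager enforces can be checked by hand. A minimal sketch, run against a throwaway copy of the file with the entries from the question:

```shell
# Build a temp file mirroring the /etc/hosts entries above
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.0.1 arul-pc\n' > "$hosts"
# Print the first name mapped to 127.0.0.1 — this is the "first match"
# the validation error is talking about
first=$(awk '$1 == "127.0.0.1" {print $2; exit}' "$hosts")
echo "$first"   # localhost
```

Cloudera also flags 127.0.0.1 appearing on more than one line, so the usual fix is a single loopback line listing the FQDN first, then the short name and localhost.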
Note 2 (my try):
I tried to remove arul-pc from /etc/hosts as follows:
127.0.0.1 localhost
#127.0.0.1 arul-pc
After saving and running again, Error 2 was cleared, but Error 1 became:
Individual hosts resolved their own hostnames correctly.
Host localhost failed to execute InetAddress.getLocalHost() with error: arul-pc: arul-pc. This typically means that the hostname could not be resolved.
It picks up its value from the hostname -f command. You need to update the system's hostname configuration with the appropriate hostname (on Ubuntu that is /etc/hostname; /etc/sysconfig/network is the equivalent on RHEL-based systems).

How to setup Cloudera Hadoop to run on Ubuntu localhost?

I am trying to run all Hadoop servers on a single Ubuntu localhost. All ports are open and my /etc/hosts file is
127.0.0.1 frigate frigate.domain.local localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
When trying to install cluster Cloudera manager fails with the following messages:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager server (check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added (some of the logs can be found in the installation details).
I run my Ubuntu-12.04 node from home connected by Wifi/dialup modem to my provider. What configuration is missing?
Add your PC's IP to the /etc/hosts file, like:
# IP pc_name
192.168.3.4 my_pc
It will solve the issue.
This config is OK for me; hope it's helpful:
127.0.0.1 localhost.localdomain localhost
https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!msg/cdh-dev/9rczbLkAxlg/lF882T7ZyNoJ
My PC: Ubuntu 12.04.2, 64-bit VM
