Datanode denied communication with namenode because hostname cannot be resolved - hadoop

I run a Hadoop cluster in Kubernetes, with 4 journalnodes and 2 namenodes. Sometimes, my datanodes cannot register with the namenodes.
17/06/08 07:45:32 INFO datanode.DataNode: Block pool BP-541956668-10.100.81.42-1496827795971 (Datanode Uuid null) service to hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 beginning handshake with NN
17/06/08 07:45:32 ERROR datanode.DataNode: Initialization failed for Block pool BP-541956668-10.100.81.42-1496827795971 (Datanode Uuid null) service to hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=10.100.9.45, hostname=10.100.9.45): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=b1babba6-9a6f-40dc-933b-08885cbd358e, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-bceaa23f-ba3d-4749-a542-74cda1e82e07;nsid=177502984;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:863)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4529)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1279)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:95)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28539)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
It says:
hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=10.100.9.45, hostname=10.100.9.45)
However, I can ping hadoop-namenode-0.myhadoopcluster, 10.100.81.42, and 10.100.9.45 from both the datanode and the namenode.
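(Note that ping only exercises forward name resolution; the error above is the namenode failing a reverse lookup of the datanode's IP, which is why the log shows hostname=10.100.9.45. Reverse resolution can be checked from the namenode with, for example:
nslookup 10.100.9.45
which queries the PTR record for that address.)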
/etc/hosts in datanode:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.100.9.45 hadoop-datanode-0.myhadoopcluster.default.svc.cluster.local hadoop-datanode-0
/etc/hosts in namenode:
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.100.81.42 hadoop-namenode-0.myhadoopcluster.default.svc.cluster.local hadoop-namenode-0
And I have already set dfs.namenode.datanode.registration.ip-hostname-check to false in hdfs-site.xml.
I guess the problem may be related to DNS. In other similar questions, Hadoop was not deployed in Kubernetes or a Docker container, so I am posting this one. Please do not mark it as a duplicate...

In my situation, I applied three configuration settings to both the namenode and the datanode (an hdfs-site.xml sketch follows this list):
dfs.namenode.datanode.registration.ip-hostname-check: false
dfs.client.use.datanode.hostname: false (default)
dfs.datanode.use.datanode.hostname: false (default)
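A minimal hdfs-site.xml sketch with these three properties set explicitly (the last two are shown at their default value of false, so listing them is optional):
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>false</value>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>false</value>
</property>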

I hope you found a resolution to the issue by now.
I ran into a similar problem last week; my cluster is set up in a different environment, but the problem context is the same.
Essentially, reverse DNS lookup needs to be set up to solve this issue: if the cluster uses a DNS resolver, this needs to be configured at the DNS server level; if the namenodes look up datanodes in the /etc/hosts file, then there needs to be an entry for each datanode there.
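For example, with /etc/hosts-based resolution, the namenode's hosts file would need one line per datanode; based on the addresses in the question above, something like (illustrative):
10.100.9.45 hadoop-datanode-0.myhadoopcluster.default.svc.cluster.local hadoop-datanode-0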
I have updated an old question on the Hortonworks Community Forum; the link is below:
https://community.hortonworks.com/questions/24320/datanode-denied-communication-with-namenode.html?childToView=135321#answer-135321

Related

Connection refused error in hadoop-2.6.0 while running word count program

I would like to know the root cause of this error. When I ran the conventional word count program in hadoop-2.6.0, the following exception was generated. Your suggestions will be much appreciated.
16/03/16 11:21:40 INFO mapreduce.Job: map 0% reduce 0%
16/03/16 11:21:40 INFO mapreduce.Job: Job job_1458107299826_0002 failed with state FAILED due to: Application application_1458107299826_0002 failed 2 times due to Error launching appattempt_1458107299826_0002_000002. Got exception: java.net.ConnectException: Call From abc-OptiPlex-3020/127.0.1.1 to abc-OptiPlex-3020:52890 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
My /etc/hosts file, from the master system, is as follows:
127.0.0.1 localhost
127.0.1.1 abc-OptiPlex-3020
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.12.106 HadoopMaster
192.168.12.105 HadoopSlave1
Try this:
Install the OpenSSH server and client, then test the connection with:
ssh username@hostname
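On a Debian/Ubuntu system (an assumption here, suggested by the hostnames in this question), the install step would be something like:
sudo apt-get install openssh-server openssh-client
ssh youruser@HadoopSlave1   # placeholder user; host name taken from the hosts file above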

Error in Hbase Standalone mode

My standalone HBase starts, but when I run any command in the shell it errors out.
I have tried many solutions; nothing worked :(
Error message:
hbase(main):001:0> create 'emp' 'data'
16/02/08 18:42:57 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
16/02/08 18:42:57 WARN zookeeper.ZKUtil: hconnection Unable to set watcher on znode (/hbase/hbaseid)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:172)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:450)
...
My /etc/hosts:
127.0.0.1 localhost
127.0.0.1 ankit21
192.168.43.11 pooja
192.168.43.143 laptop#kalpesh
192.168.43.72 aditi
# The following lines are desirable for IPv6 capable hosts(43.177)
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
JPS:
ankit#ankit21:/usr/lib/hbase/hbase-0.94.8$ jps
Picked up JAVA_TOOL_OPTIONS: -javaagent:/usr/share/java/jayatanaag.jar
7609 HMaster
7756 Jps
Well, you need to start your ZooKeeper service. After that, run the jps command to check; you should see a ZooKeeper process (e.g. "HQuorumPeer", or "QuorumPeerMain" for a standalone ZooKeeper) in the list.
Before starting the ZooKeeper service, make sure you have configured zoo.cfg in the conf directory. You need to add the machine IP to zoo.cfg, along with the port for ZooKeeper; a sketch follows below.
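A minimal zoo.cfg sketch (the dataDir path and server address here are illustrative, not taken from the question):
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
# only needed for a replicated setup; requires a matching myid file in dataDir
server.1=192.168.1.10:2888:3888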

Hadoop yarn node -list shows slaves as localhost.localdomain:#somenumber; connection refused exception

I got a connection refused exception from localhost.localdomain/127.0.0.1 to localhost.localdomain:55352 when trying to run the wordcount program.
yarn node -list gives:
hduser#localhost:/usr/local/hadoop/etc/hadoop$ yarn node -list
15/05/27 07:23:54 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.111.72:8040
Total Nodes:2
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
localhost.localdomain:32991 RUNNING localhost.localdomain:8042 0
localhost.localdomain:55352 RUNNING localhost.localdomain:8042 0
master /etc/hosts:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#127.0.1.1 ubuntu-Standard-PC-i440FX-PIIX-1996
192.168.111.72 master
192.168.111.65 slave1
192.168.111.66 slave2
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
slave /etc/hosts:
127.0.0.1 localhost.localdomain localhost
#127.0.1.1 ubuntu-Standard-PC-i440FX-PIIX-1996
192.168.111.72 master
#192.168.111.65 slave1
#192.168.111.66 slave2
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
What I understand is that the master is wrongly trying to connect to the slaves on localhost. Please help me resolve this; any suggestion is appreciated. Thank you.
Here is the code showing how the NodeManager builds the NodeId:
private NodeId buildNodeId(InetSocketAddress connectAddress,
    String hostOverride) {
  if (hostOverride != null) {
    connectAddress = NetUtils.getConnectAddress(
        new InetSocketAddress(hostOverride, connectAddress.getPort()));
  }
  return NodeId.newInstance(
      connectAddress.getAddress().getCanonicalHostName(),
      connectAddress.getPort());
}
The NodeManager tries to get the canonical hostname from the binding address; for the address 127.0.0.1, that reverse lookup returns localhost.
So in your case, on the slave hosts, localhost.localdomain is the first (canonical) name listed for 127.0.0.1, and a possible solution is to change the first line of /etc/hosts on each slave respectively to:
127.0.0.1 slave1 localhost.localdomain localhost
and
127.0.0.1 slave2 localhost.localdomain localhost
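After changing the hosts file, you can check which canonical name a reverse lookup of 127.0.0.1 now returns (which is essentially what getCanonicalHostName does), for example:
getent hosts 127.0.0.1
On slave1 this should now print a line whose first name is slave1.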

HBase region server is not able to communicate with HMaster

I am not able to set up HBase in distributed mode. It works fine when I set it up on one machine (standalone mode). My ZooKeeper, HMaster and region server all start properly.
But when I go to the HBase shell and check the status, it shows 0 region servers. I am attaching my region server logs, plus the hosts files of my master (namenode) and slave (datanode). I have tried every permutation and combination given on Stack Overflow for changing the hosts file, but nothing worked for me.
2013-06-24 15:03:45,844 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server naresh-pc/192.168.0.108:2181. Will not attempt to authenticate using SASL (unknown error)
2013-06-24 15:03:45,845 WARN org.apache.zookeeper.ClientCnxn: Session 0x13f75807d960001 for server null, unexpected error, closing socket connection and attempting to reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
Slave /etc/hosts :
127.0.0.1 localhost
127.0.1.1 ubuntu-pc
#ip for hadoop
192.168.0.108 master
192.168.0.126 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Master /etc/hosts :
127.0.0.1 localhost
127.0.1.1 naresh-pc
#ip for hadoop
192.168.0.108 master
192.168.0.126 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
hbase-site.xml:
<configuration>
  <property>
    <name>hbase.master</name>
    <value>master:60000</value>
    <description>The host and port that the HBase master runs at.
      A value of 'local' runs the master and a regionserver
      in a single process.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:54310/hbase</value>
    <description>The directory shared by region servers.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed
      Zookeeper; true: fully-distributed with unmanaged Zookeeper
      Quorum (see hbase-env.sh).
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
    <description>Property from ZooKeeper's config zoo.cfg.
      The port at which the clients will connect.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
      For example, "host1.mydomain.com,host2.mydomain.com".
      By default this is set to localhost for local and
      pseudo-distributed modes of operation. For a
      fully-distributed setup, this should be set to a full
      list of ZooKeeper quorum servers. If
      HBASE_MANAGES_ZK is set in hbase-env.sh
      this is the list of servers which we will start/stop
      ZooKeeper on.
    </description>
  </property>
</configuration>
Zookeeper log:
2013-06-28 18:22:26,781 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x13f8ac0b91b0002, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:722)
2013-06-28 18:22:26,858 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /192.168.0.108:57447 which had sessionid 0x13f8ac0b91b0002
2013-06-28 18:25:21,001 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x13f8ac0b91b0002, timeout of 180000ms exceeded
2013-06-28 18:25:21,002 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x13f8ac0b91b0002
Master Log:
2013-06-28 18:22:41,932 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 1502022 ms
2013-06-28 18:22:43,457 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 1503547 ms
Remove 127.0.1.1 from the hosts file and turn off IPv6. That should fix the problem.
Your region server is looking for the HMaster at naresh-pc, but you do not have any such entry in your /etc/hosts file. Please make sure your configuration is correct.
Can you try all of this:
Make sure your /conf/regionservers file has just one entry: slave
Not sure what HBase version you are using, but instead of port 54310 for the hbase.rootdir property in your hbase-site.xml, use port 9000
Your /etc/hosts file, on BOTH master and slave, should only have these custom entries:
127.0.0.1 localhost
192.168.0.108 master
192.168.0.126 slave
I am concerned that your logs state: Opening socket connection to server naresh-pc/192.168.0.108:2181
Clearly the system thinks that ZooKeeper is on host naresh-pc, but in your config you set the ZooKeeper quorum to host master, which HBase will bind to. That's a problem right there. In my experience, HBase is EXTREMELY fussy about hostnames, so make sure they are all in sync across your configs and your /etc/hosts file.
Also, this may be a minor issue, but it wouldn't hurt to specify the ZooKeeper data directory in your .xml file, to have a minimal set of settings that should make the cluster work: hbase.zookeeper.property.dataDir (see the sketch below).
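For reference, the two hbase-site.xml changes suggested above (port 9000 for hbase.rootdir, and the ZooKeeper data directory; the dataDir path here is illustrative) would look like:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:9000/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/var/lib/zookeeper</value>
</property>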

HBase exception

When I use HBase in pseudo-distributed mode, I get the exception below. It would be really great if somebody could shed some light on this issue and help resolve it.
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
Wed Feb 06 15:22:23 IST 2013, org.apache.hadoop.hbase.client.ScannerCallable#29422384, java.io.IOException: java.io.IOException: Could not iterate StoreFileScanner[HFileScanner for reader reader=file:/home/688697/hbase/test/c28d92322c97364af59b09d4f4b4a95f/cf/c5de203afb5647c0b90c6c18d58319e9, compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=0deptempname0/cf:email/1360143938898/Put, lastKey=4191151deptempname4191151/cf:place/1360143938898/Put, avgKeyLen=45, avgValueLen=7, entries=17860666, length=1093021429, cur=10275517deptempname10275517/cf:place/1360143938898/Put/vlen=4]
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3004)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2951)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2968)
at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2155)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1345)
Caused by: org.apache.hadoop.fs.ChecksumException: Checksum error: file:/home/688697/hbase/test/c28d92322c97364af59b09d4f4b4a95f/cf/c5de203afb5647c0b90c6c18d58319e9 at 37837312 exp: -819174049 got: 1765448374
at org.apache.hadoop.fs.FSInputChecker.verifySums(FSInputChecker.java:320)
at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:276)
at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:211)
at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:229)
at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193)
at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:431)
at org.apache.hadoop.fs.FSInputChecker.seek(FSInputChecker.java:412)
at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:48)
at org.apache.hadoop.fs.ChecksumFileSystem$FSDataBoundedInputStream.seek(ChecksumFileSystem.java:318)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1047)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
... 12 more
Wed Feb 06 15:22:24 IST 2013, org.apache.hadoop.hbase.client.ScannerCallable#29422384, java.io.IOException: java.io.IOException: java.lang.IllegalArgumentException
at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1079)
at org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1068)
at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2182)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1345)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.position(Buffer.java:216)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:395)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:326)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3004)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2951)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2968)
at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2155)
... 5 more
The root cause of this issue resides in your /etc/hosts file. If you check your /etc/hosts file, you will find an entry something like the one below (in my case my machine is named domainnameyouwanttogive):
127.0.0.1 localhost
127.0.1.1 domainnameyouwanttogive
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
The root cause is that domainnameyouwanttogive resolves to 127.0.1.1, which is incorrect: it should resolve to 127.0.0.1 (or an external IP). As my external IP is 192.168.58.10, I created the following /etc/hosts configuration:
127.0.0.1 localhost
192.168.43.3 domainnameyouwanttogive
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
This will ensure that hostname resolution for your host's processes on localhost is done correctly, and you can start your HBase installation correctly on your development system.
Also, please make sure that your Hadoop namenode is running with the same domain name you are using for HBase.
