Accessing HBase from the Eclipse IDE: java.net.UnknownHostException - hadoop

When I run Java code that accesses HBase from the Eclipse IDE, I always get "java.net.UnknownHostException", but the HBase shell works fine.
I installed Hadoop and HBase on a single Linux node in pseudo-distributed mode, and my hostname is yzd. Here are my /etc/hosts and hbase-site.xml:
/etc/hosts:
127.0.0.1 localhost yzd
hbase-site.xml:
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
Error message:
INFO [main] (HBaseRPC.java:117) - Using org.apache.hadoop.hbase.ipc.WritableRpcEngine for org.apache.hadoop.hbase.ipc.HMasterInterface
INFO [main] (HConnectionManager.java:596) - getMaster attempt 0 of 10 failed; retrying after sleep of 1000
java.net.UnknownHostException: unknown host: � 13846#yzdlocalhost
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.<init>(HBaseClient.java:224)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:954)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:816)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:141)
at com.sun.proxy.$Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:174)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:295)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:272)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:324)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:579)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:94)
at com.hbasebook.hush.schema.SchemaManager.process(SchemaManager.java:126)
at com.hbasebook.hush.HushMain.main(HushMain.java:57)

Check that the version of your local HBase installation matches the one you are using as a dependency in your pom. This should solve your issue. I was facing the same problem while running HBase in standalone mode. I hope this helps.
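For example, a minimal sketch of what to compare (assuming an older 0.9x-era HBase client, which the WritableRpcEngine classes in the stack trace suggest; the version shown is only a placeholder):
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase</artifactId>
<!-- placeholder version: make this match the HBase you actually installed -->
<version>0.94.27</version>
</dependency>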

First of all, yzd is not a hostname, it is a domain name (you should prefer an FQDN). Now this line
java.net.UnknownHostException: unknown host: � 13846#yzdlocalhost
clearly says that the host 13846#yzdlocalhost cannot be resolved. You can do the following:
Use the IP address instead of the hostname in both hbase-site.xml and core-site.xml and check again.
Then use the FQDN in the /etc/hosts file and tab-separate the values; once that works, you can replace the IP with the FQDN, as sketched below.
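For illustration only (192.168.1.10 and yzd.example.com are made-up placeholders, not values from the question), the /etc/hosts entries and hbase.rootdir could then look like:
/etc/hosts:
127.0.0.1 localhost
192.168.1.10 yzd.example.com yzd
hbase-site.xml:
<property>
<name>hbase.rootdir</name>
<value>hdfs://yzd.example.com:9000/hbase</value>
</property>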

Related

Hadoop is configured to use an IP address but still connects to localhost

I have configured Hadoop to use an IP address instead of localhost, but when I started it, it still reported an error that it cannot ssh to localhost.
hdfs-site.xml:
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///<hdfs_path>/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///<hdfs_path>/hdfs/namenode</value>
</property>
</configuration>
core-site.xml:
<name>fs.defaultFS</name>
<value>hdfs://<IP_address>/</value>
<description>NameNode URI</description>
When I started Hadoop, the below error appeared:
localhost: ssh_exchange_identification: Connection closed by remote host
When I allowed ssh to localhost on our test server, the error disappeared. But the problem is that on our production server, ssh to localhost is not allowed. I tried to avoid localhost and use the IP address instead, so why does Hadoop still need to ssh to localhost?
Only the start-dfs, start-yarn, and start-all shell scripts use SSH, and the SSH connection has nothing to do with the XML files.
SSH isn't actually required; you can run hadoop namenode, hadoop datanode, and the YARN daemon processes directly. However, you'll still need to somehow get a shell on each machine to run those commands if they don't start at boot or ever fail to (re)start.
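For example, assuming a Hadoop 2.x layout (an illustrative sketch, not taken from the question), each daemon can be started on its own machine without any SSH:
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
On Hadoop 3.x the equivalents are hdfs --daemon start namenode|datanode and yarn --daemon start resourcemanager|nodemanager.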

Hadoop: How to access HDFS from an external IP?

Internally (i.e. internal network; private IP address-to-private IP address), I can access my HDFS just fine using:
hdfs dfs -ls hdfs://#.#.#.#/
However, when I try the same from a machine outside the network on which the HDFS namenode resides (obviously using the namenode machine's WAN IP instead of its LAN IP), I get:
ls: DestHost:destPort ec2-▒-▒-▒-▒.compute-1.amazonaws.com:8020 , LocalHost:localPort mymachine/127.0.0.1:0. Failed on local exception: java.io.IOException: Connection reset by peer
The namenode log reads:
INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8020: readAndProcess from client ▒.▒.▒.▒:▒ threw exception [java.io.IOException: Connection reset by peer]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:377)
at org.apache.hadoop.ipc.Server.channelRead(Server.java:3486)
at org.apache.hadoop.ipc.Server.access$2600(Server.java:138)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:2144)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:1389)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1245)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1216)
My core-site.xml reads:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://0.0.0.0:8020</value>
</property>
</configuration>
Note that I have also tried setting the fs.defaultFS value to hdfs://#.#.#.#:8020. I have also tried setting it to hdfs://hadoophost:8020, and adding #.#.#.# hadoophost to the top of /etc/hosts. (#.#.#.# is obviously the LAN IP of the namenode's machine in both cases.) The results have been the same.
My hdfs-site.xml reads:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
Note that I am able to telnet externally to the namenode's machine on port 8020 just fine.
What setting(s) am I missing to enable external access to my Hadoop file system?

hadoop datanode unable to start. "does not contain a valid host:port authority"

I'm currently using Hadoop 1.2.1 (because I need to run spatial processing software that only supports this version). I'm trying to deploy in multi-node mode with one master and three slaves.
I'm sure I can ssh between the master and all slaves without a password (including to themselves), and the hostname on each node is correct.
Each node shares the same hosts file:
192.168.56.101 master
192.168.56.102 slave1
192.168.56.103 slave2
192.168.56.104 slave3
I keep having problems on the slave nodes; the error log is as follows:
2015-05-21 23:39:16,841 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:181
Configurations in core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
In mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracter</name>
<value>master:8012</value>
</property>
</configuration>
In hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
There could be a problem with the naming convention of your node hostnames.
Make sure they do not contain symbols like "_".
Check Wikipedia for restrictions.
Try changing "master" to the actual IP address in all your config files, for example:
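A sketch of core-site.xml using the master's IP from the hosts file above (192.168.56.101):
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.56.101:9000</value>
</property>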
Your configuration looks OK. You need to run the command "$HADOOP_HOME/bin/hdfs namenode -format master" first, and then run "$HADOOP_HOME/sbin/start-dfs".
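Note that those paths look like a Hadoop 2.x layout; on Hadoop 1.2.1 the equivalent commands are usually:
$HADOOP_HOME/bin/hadoop namenode -format
$HADOOP_HOME/bin/start-dfs.sh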

regarding hbase running on hadoop in distributed mode

Hadoop version=2.4.1
hbase version=0.98.6
I have Hadoop up and running perfectly fine with the configuration below:
107.108.86.119-hadoop namenode,SecondaryNameNode
107.109.155.100-datanode1
107.109.155.102-datanode2
Now I have installed HBase with the configuration below:
107.108.86.114:-hmaster,HQuorumPeer
107.109.155.100-regionserver1
107.109.155.102-regionserver2
When I run jps, the following processes are running:
107.109.155.102:-hregionserver,datanode
107.109.155.100:-hregionserver,datanode
107.108.86.119:-NameNode,secondaryNameNode
107.108.86.114:-hmaster
But running status in the HBase shell shows "0 servers, 0 dead, NaN average load".
Entering commands in the HBase shell shows ERROR: java.io.IOException: Table Namespace Manager not ready yet, try again later
The region server logs show:
regionserver.HRegionServer: reportForDuty to master=localhost,60000,1415007213689 with port=60020, startcode=1415007215055
regionserver.HRegionServer: error telling master we are up
My hbase-site.xml:
<property>
<name>hbase.master</name>
<value>107.108.86.114:60000</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://push-mcd2:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>107.108.86.114</value>
</property>
The /etc/hosts of the HMaster is:
127.0.0.1 localhost arpita-ubuntu
127.0.1.1 arpita-ubuntu
107.109.155.100 push-ws1
107.109.155.102 push-ws2
107.108.86.114 push-mcd1
107.108.86.119 push-mcd2
The slaves files are also almost identical to the one above.
conf/hbase-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.22
export HBASE_CLASSPATH=/home/hadoop/hadoop-0.20.2/conf
export HBASE_MANAGES_ZK=true
What changes should I make so that HBase will run on the above cluster?
Why does your region server log mention that it is looking for the HBase master on localhost?
From the information above, you have set up the master on a node different from either region server, so please check that your config is correct on each node.
logs on regionserver showing: regionserver.HRegionServer:
reportForDuty to master=localhost,60000,1415007213689 with port=60020,
startcode=1415007215055 regionserver.HRegionServer: error telling
master we are up
Also, in /etc/hosts on each node, please update the first two lines from
127.0.0.1 localhost arpita-ubuntu
127.0.1.1 arpita-ubuntu
to
127.0.0.1 localhost
<Actual_IP_Address_for_Host> arpita-ubuntu
This is necessary if you don't have automatic dns name resolution in place.
Also please use IP instead of localhost in all config settings.
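For example, each region server's hbase-site.xml should point at the ZooKeeper quorum by IP rather than relying on localhost (a sketch reusing the addresses already shown in the question):
<property>
<name>hbase.zookeeper.quorum</name>
<value>107.108.86.114</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>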
If you still face issues, check if the respective ports are open or not.
Hope this helps you.

HBase binding to an incorrect address

I'm attempting to run HBase in pseudo-distributed mode. I have followed all of the steps in the tutorial.
My hbase-site.xml looks like this:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
My regionservers file looks like this (default):
localhost
In the logs, Zookeeper starts OK, MiniZK starts OK, then I get a BindException with this being the culprit:
Caused by: java.net.BindException: Problem binding to /192.168.0.1:0 : Cannot assign requested address
Where in the world did it get the address 192.168.0.1? And why is it trying to bind to port 0? That IP is my NAT gateway. The IP address of the machine it's on is 192.168.0.200.
I have looked in all of the config files but don't see anywhere that I would specify that address.
** UPDATE **
It looks like the problem was that HBase was trying to reverse-lookup my IP address from my hostname, which (because I'm using my router as my DNS server) resolved to ... my router.
When I add an "alias" for my hostname to 127.0.0.1 in the /etc/hosts file, it resolves just fine.
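For example (myhostname is a placeholder for the machine's actual hostname), the /etc/hosts line looks like:
127.0.0.1 localhost myhostname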
@arnon-rotem-gal-oz, I just installed whatever came in the HBase tarball. I'm assuming miniZK is a scaled-down version of Zookeeper? I'm not running a separate instance of it.
The code you posted did the trick to resolve the next problem that came up.
Check the zookeeper configuration file (zoo.cfg in the zookeeper/conf directory)
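Typical entries to verify there (a generic sketch; the dataDir path is only an example):
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181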
Also why do you have both zookeeper and miniZK?
Also (not directly related to your question), you need to tell HBase where to find ZooKeeper, e.g. by adding the following to your hbase-site.xml:
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
