hbase cannot connect to hadoop

I am running an HDFS instance in pseudo-distributed mode and tried to run an HBase instance connected to it on the same server. The Hadoop logs look fine, but I constantly get this connection failure in HBase's log:
==================================================================================
2012-05-01 10:49:07,212 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
2012-05-01 10:49:07,213 WARN org.apache.zookeeper.ClientCnxn: Session 0x13708dc552d0001 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
2012-05-01 10:49:08,882 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
2012-05-01 10:49:08,882 WARN org.apache.zookeeper.ClientCnxn: Session 0x13708dc552d0001 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
==================================================================================
Configuration of core-site.xml (Hadoop):
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Configuration of hbase-site.xml (HBase):
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
I also tried replacing localhost with the actual IP of the server, but got the same error.

First, make sure your HBase master is running; you can check with jps.
If it is not running, start it with the start-hbase.sh command or hbase master start.
Then check its status with other commands, e.g. netstat -an | grep 9000.
Second, if the previous step does not help, check your firewall configuration, i.e. iptables and SELinux.
Use sudo iptables -L to inspect your iptables rules. On Red Hat based Linux systems you can disable iptables with the sudo service iptables stop command.
Use getenforce to check whether SELinux is in enforcing mode.
Third, check the system configuration, for example SSH.
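A compact way to walk through those checks, assuming the default ports from the question (2181 for ZooKeeper, 9000 for HDFS):
jps                          # HMaster (and HQuorumPeer, if HBase manages ZooKeeper) should be listed
netstat -an | grep 2181      # is anything listening on the ZooKeeper client port?
netstat -an | grep 9000      # is the HDFS NameNode listening?
sudo iptables -L -n          # look for rules that drop or reject local connections
getenforce                   # Enforcing / Permissive / Disabled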

You need to replace the Hadoop core jar bundled with HBase, $HBASE_HOME/lib/hadoop-<version>-core.jar, with the one from your Hadoop installation, $HADOOP_HOME/hadoop-<version>-core.jar.
I ran into the same problem when I tried to install HBase 0.92, coming from HBase 0.90-xxx which had been working fine; I copied hbase-env.sh and hbase-site.xml from the old HBase into the new one but forgot to copy the Hadoop core jar.
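Roughly, the swap looks like the sketch below; the exact jar names depend on your Hadoop version, so treat the globs as placeholders:
# remove the Hadoop core jar bundled with HBase
rm $HBASE_HOME/lib/hadoop*core*.jar
# copy in the core jar from the Hadoop installation HBase actually talks to
cp $HADOOP_HOME/hadoop*core*.jar $HBASE_HOME/lib/
# restart HBase so it picks up the replacement
$HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh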

I'm always suspicious when I see localhost in a config. When you use localhost it also becomes very difficult (to impossible) to access any services of the pseudo-distributed system from a host other than the one you are running on.
You did say you tried the IP address, but you might want to make sure it's the IP address the node really thinks it's at.
Check the ZooKeeper logs from when it starts up and see what IP address it "thinks" it's at. There should be a line like:
2012-01-31 09:32:46,083 - INFO [main:Environment#97] - Server environment:host.name=ip-10-8-127-58.ec2.internal
Then use the value of host.name in ALL the Hadoop, HBase, Hive, ZooKeeper, etc. config files that need a hostname (assuming they are all on the same machine, since you said you are in pseudo-distributed mode); there is a quick way to double-check that value sketched below.
Also, you did not show your hbase.zookeeper.quorum setting in hbase-site.xml. That is where HBase learns ZooKeeper's address:
<property>
<name>hbase.zookeeper.quorum</name>
<value>ip-10-8-127-58.ec2.internal</value>
</property>
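To double-check what name the machine actually resolves to, something like this works (the log path is a guess based on HBase's default log naming; adjust it to wherever your ZooKeeper log lives):
grep "host.name" $HBASE_HOME/logs/*zookeeper*.log    # the name ZooKeeper reports at startup
hostname -f                                          # the name the OS reports
getent hosts $(hostname -f)                          # make sure it resolves to a real IP, not 127.0.0.1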

I think HBase cannot find the ZooKeeper quorum; you have to set the hbase.zookeeper.quorum property in hbase-site.xml. Also check whether the classpath is set properly; see this doc: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
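The quorum property itself looks like the snippet in the previous answer. For the classpath side, one common sketch (assuming the hbase binary is on your PATH; jar and class names below are placeholders) is to hand HBase's jars to Hadoop before launching a MapReduce job:
export HADOOP_CLASSPATH=$(hbase classpath)   # hbase classpath prints the jars HBase needs
hadoop jar your-job.jar YourMainClass        # placeholder job jar and main class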

Related

Hadoop is configured to use an IP address but still connects to localhost

I have configured Hadoop to use an IP address instead of localhost, but when I started it, it still reported an error that it cannot SSH to localhost.
hdfs-site.xml:
<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///<hdfs_path>/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///<hdfs_path>/hdfs/namenode</value>
</property>
</configuration>
core-site.xml:
<name>fs.defaultFS</name>
<value>hdfs://<IP_address>/</value>
<description>NameNode URI</description>
When I started Hadoop, the below error appeared:
localhost: ssh_exchange_identification: Connection closed by remote host
When I allowed SSH to localhost on our test server, the error disappeared. But the problem is that on our production server, SSH to localhost is not allowed. I tried to avoid using localhost and use the IP address instead, so why does Hadoop still need to SSH to localhost?
Only the start-dfs, start-yarn, and start-all shell scripts use SSH, and the SSH connection has nothing to do with the XML files.
They're not required; you can run hadoop namenode, hadoop datanode, and the YARN daemon processes directly. However, you'll still need to somehow get a shell on each machine to run those commands if they don't start at boot or ever fail to (re)start.
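As a sketch of the SSH-free route on Hadoop 2.x (Hadoop 3 uses hdfs --daemon start ... and yarn --daemon start ... instead), you run the per-daemon scripts locally on each machine:
# on the NameNode machine
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
# on each DataNode machine
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
# on the YARN machines
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager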

Spark I/O error constructing remote block

I want to create a home-made Spark cluster with two computers on the same network. The setup is the following:
A) 192.168.1.9 spark master with hadoop hdfs installed
Hadoop has this core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://0.0.0.0:9000</value>
</property>
</configuration>
B) 192.168.1.6 with spark only (slave)
From B I want to access a file in A's HDFS using this Spark code:
...
# Load files
file_1 = "input_1.pkl"
file_2 = "input_2.pkl"
hdfs_base_path = "hdfs://192.168.1.9:9000/folderx/"
sc.addFile(hdfs_base_path + file_1)
sc.addFile(hdfs_base_path + file_2)
# Get files back
with open(SparkFiles.get(file_1), 'rb') as fw:
    # use fw
However, when I execute the program on B using the command:
./spark-submit --master local program.py
The output is the following:
17/07/25 19:02:51 INFO SparkContext: Added file hdfs://192.168.1.9:9000/bigdata/input_1_new_grid.pkl at hdfs://192.168.1.9:9000/bigdata/input_1_new_grid.pkl with timestamp 1501002171301
17/07/25 19:02:51 INFO Utils: Fetching hdfs://192.168.1.9:9000/bigdata/input_1_new_grid.pkl to /tmp/spark-838c3774-36ec-4db1-ab01-a8a8c627b100/userFiles-b4973f80-be6e-4f2e-8ba1-cd64ddca369a/fetchFileTemp1979399086141127743.tmp
17/07/25 19:02:51 WARN BlockReaderFactory: I/O error constructing remote block reader.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
And later:
17/07/25 19:02:51 WARN DFSClient: Failed to connect to /127.0.0.1:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection refused
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
The program tries to access 127.0.0.1:50010, which is wrong. Should I also install Hadoop on B? If it is not necessary, what is the correct configuration? Thank you!
By the way, just in case anyone comes looking for some sort of solution: I fixed my issue by pointing quickstart.cloudera to the real IP address instead of 127.0.0.1.
The default /etc/hosts is
127.0.0.1 quickstart.cloudera quickstart localhost localhost.domain
What you want is
127.0.0.1 localhost localhost.domain
xxxIP_Address_oF_YOUR_VM quickstart.cloudera quickstart
You may want to modify /usr/bin/cloudera-quickstart-ip as well, because every time you restart your VM, the hosts file may get reset again.
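One rough way to confirm the fix took effect is to ask the NameNode which addresses it advertises for its DataNodes; if it still reports 127.0.0.1:50010, remote clients will keep failing exactly as in the question:
hdfs dfsadmin -report | grep -E 'Name|Hostname'   # DataNode address and hostname as seen by the NameNode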

not able to connect remote hiveserver2 using ip address

I am running the hiveserver2 service with the following config in hive-site.xml:
<property>
<name>hive.server2.thrift.bind.host</name>
<value>ip_address</value>
</property>
<property>
<name>hive.server2.authentication</name>
<value>NONE</value>
</property>
From my local (same machine) JDBC client I am able to connect to HiveServer2:
jdbc:hive2://ip_address:10000/default
However, when I run my JDBC client from another machine, I get a connection timeout.
Note that I am able to ping the HiveServer2 host at ip_address.
What could be the issue? Since I am using the IP address, do I still need to check the entries in /etc/hosts?
Issue resolved.
The issue was with the firewall; the Hive configs were correct.
I disabled iptables for the time being:
# service iptables stop
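If you would rather not switch the firewall off entirely, a narrower sketch (assuming the default HiveServer2 port 10000 and an iptables-based setup) is to check the bind address and open just that port:
netstat -tlnp | grep 10000                             # is HiveServer2 bound to the expected IP, or only to 127.0.0.1?
sudo iptables -I INPUT -p tcp --dport 10000 -j ACCEPT  # allow remote clients in on the Thrift port
sudo service iptables save                             # persist the rule on Red Hat style systems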
The answer at the following link was helpful for generic connection issues:
java.net.ConnectException :connection timed out: connect?

Fully Distributed HBase Error

I'm trying to set up HBase 0.96 to run on top of my Hadoop 2.2.0 cluster. I run start-hbase.sh and the master along with the region servers start up. I can log into each region server and see the processes running. However, when I check how many region servers are up, either through the web UI or a shell command, I get a response of 0. Based on the logs it looks like the region servers are starting up but are unable to notify the master that they are running. I confirmed that the master is listening on port 60000 and that ports 60000 and 60020 are both open. I've included my hbase-site file along with the logs from a region server.
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
<description>The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/master</value>
</property>
Log File:
2013-11-08 20:08:58,357 INFO [regionserver60020] regionserver.HRegionServer: reportForDuty to master=10.119.102.58,60000,1383941300240 with port=60020, startcode=1383941300420
2013-11-08 20:09:18,636 WARN [regionserver60020] regionserver.HRegionServer: error telling master we are up
com.google.protobuf.ServiceException: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connec$
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1667)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1708)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:5402)
at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:1924)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:790)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending local=/100.65.$
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:573)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:858)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1532)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1421)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1650)
... 5 more
2013-11-08 20:09:18,676 WARN [regionserver60020] regionserver.HRegionServer: reportForDuty failed; sleeping and then retrying.
I don't think hbase.zookeeper.quorum is set correctly, which may cause the connection timeout. If you just want to test 0.96, start it in standalone mode first, and make sure the ZooKeeper cluster is running before you change to distributed mode.
The HRegionServer is complaining that it can't connect to the HMaster in order to report that it is up.
It's probable that the HMaster process is not running, so you may want to start it, or, if you already started it, check the master log file.
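A quick sketch of that check; the log file name follows HBase's usual hbase-<user>-master-<host>.log pattern, so adjust the glob to your installation:
jps | grep HMaster                                   # is the master process running at all?
tail -n 100 $HBASE_HOME/logs/hbase-*-master-*.log    # look for bind or ZooKeeper errors during startup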
Check that your master server is listening on port 60000 with the following command:
netstat -l
tcp6 0 0 Vostro-350:60000 [::]:* LISTEN
If the server is listening on IPv6, disable it.
To disable it, append the following to /etc/sysctl.conf:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
After a reboot you should validate that IPv6 is really off with:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
(0 = IPv6 on; 1 = IPv6 off)
Ref link: Connecting and Persisting to HBase

Hbase master not able to start

I am trying to start the HBase master but am getting the error below:
Could not start ZK at requested port of 2181. ZK was started at port: 2182. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.
13/07/14 06:33:23 ERROR master.HMasterCommandLine: Failed to start master
java.io.IOException: Could not start ZK at requested port of 2181. ZK was started at port: 2182. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:134)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1684)
13/07/14 06:33:23 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:46283 (no session established for client)
hbase-site.xml
<configuration>
<!-- Changing the default port for REST since it conflicts with yarn nodemanager -->
<property>
<name>hbase.rest.port</name>
<value>8070</value>
<description>The port for the HBase REST server.</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8020/hbase</value>
</property>
</configuration>
It seems like something else is already using port 2181, or perhaps you started another ZK instance earlier on this port. Either stop that process or change its port. If that is not possible, then set hbase.zookeeper.property.clientPort to 2182 in hbase-site.xml.
Please note that HBase needs ZK's services, even in standalone mode, so you should make sure that it's running OK.
HTH
Surprisingly, it can be an issue with privileges to connect to port 2181 but not to 2182. Instead of ./start-hbase.sh try:
sudo ./start-hbase.sh
In my case it helped.
When starting HBase in standalone mode, a single JVM hosts the HBase Master, an HBase RegionServer, and a ZooKeeper quorum peer, so you don't need to start a ZK instance separately.
In your case, HBase is not able to start the ZK because another instance is probably already running on port 2181, so just close that ZooKeeper instance and restart HBase. Also, ensure proper permissions on the HBase rootdir.
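To find out what is actually holding port 2181 before you kill anything, a sketch (both commands may need root to show the owning process):
sudo netstat -tlnp | grep 2181    # PID and program listening on the ZooKeeper client port
sudo lsof -i :2181                # same information via lsof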
