HBase data entry program not running properly - hadoop

Following is the code for data entry in HBase:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
public class SimpleDataEntry {

    public static void main(String[] args) throws IOException {

        // Instantiating Configuration class
        Configuration config = HBaseConfiguration.create();

        // Instantiating HTable class
        HTable hTable = new HTable(config, "emp");

        // Instantiating Put class; accepts a row key
        Put p = new Put(Bytes.toBytes("row1"));

        // Adding values using the add() method,
        // which accepts column family, column qualifier and value
        p.add(Bytes.toBytes("personal"),
              Bytes.toBytes("name"), Bytes.toBytes("raju"));
        p.add(Bytes.toBytes("personal"),
              Bytes.toBytes("city"), Bytes.toBytes("hyderabad"));
        p.add(Bytes.toBytes("professional"), Bytes.toBytes("designation"),
              Bytes.toBytes("manager"));
        p.add(Bytes.toBytes("professional"), Bytes.toBytes("salary"),
              Bytes.toBytes("50000"));

        // Saving the Put instance to the HTable
        hTable.put(p);
        System.out.println("data inserted");

        // Closing the HTable
        hTable.close();
    }
}
The error we get on running this code is:
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop1/hadoop1/lib/native
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-123.el7.x86_64
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop1
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop1
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop1
16/04/24 14:07:58 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x5542c4ed0x0, quorum=localhost:2181, baseZNode=/hbase
16/04/24 14:07:58 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
16/04/24 14:07:58 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
16/04/24 14:07:58 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
16/04/24 14:07:59 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
16/04/24 14:07:59 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
The hbase-site.xml is as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
   <!-- Here you have to set the path where you want HBase to store its files. -->
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://hadoop-master:9000/hbase</value>
   </property>
   <!-- Here you have to set the path where you want HBase to store its built-in ZooKeeper files. -->
   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/hadoop1/zookeeper</value>
   </property>
   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
   <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2183</value>
   </property>
   <property>
      <name>hbase.zookeeper.quorum</name>
      <value>172.17.25.20</value>
   </property>
</configuration>
What could be the possible problem, and what is its solution?

The errors in the log indicate that hbase-site.xml was not loaded correctly. Check your hbase-site.xml: it must be on your classpath, because HBaseConfiguration.create() loads its configuration from the classpath (and try adding it at the beginning of the classpath, to prevent a similar config file embedded in some other jar from being picked up instead).
Also, it seems you are using the hbase-site.xml from the HBase server: all config keys except hbase.zookeeper.quorum are redundant and useless in the client.
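As a quick way to check which hbase-site.xml (if any) the client will actually pick up, you can ask the classloader directly. This is just a diagnostic sketch, not part of either answer:

import java.net.URL;

public class ClasspathCheck {
    public static void main(String[] args) {
        // Prints the location of the hbase-site.xml that HBaseConfiguration.create()
        // would load, or "null" if none is on the classpath.
        URL url = ClasspathCheck.class.getClassLoader().getResource("hbase-site.xml");
        System.out.println("hbase-site.xml resolved to: " + url);
    }
}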

Configuration config = HBaseConfiguration.create(); only creates an almost empty configuration if Java cannot find hbase-site.xml.
To tell Java where your conf file is, you can either put hbase-site.xml directly on your classpath, or you can call conf.addResource(**hbase-site path**).
Edit
As Lagrang said in a comment, try conf.set("hbase.zookeeper.quorum","172.17.25.20:2183")
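Putting the two suggestions together, a minimal client-side configuration sketch could look like the following. The file path is a placeholder, and the quorum address and port are the ones from the question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClientConfig {
    public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Option 1: load an explicit hbase-site.xml instead of relying on the classpath.
        conf.addResource(new Path("/path/to/hbase-site.xml"));  // placeholder path
        // Option 2: set the ZooKeeper connection details programmatically.
        conf.set("hbase.zookeeper.quorum", "172.17.25.20");
        conf.set("hbase.zookeeper.property.clientPort", "2183");
        return conf;
    }
}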

Related

Run Hbase example in eclipse on ubuntu

I'm a newbie to HBase and Hadoop.
I set up Hadoop (1.2.1) and HBase (0.94.27) in pseudo-distributed mode on Ubuntu.
I can also use the HBase shell to create tables and insert data successfully.
But when I try to write a simple program to insert data into a table using the Java API in Eclipse,
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HbaseTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            HTable table = new HTable(conf, "test-table");
            Put put = new Put(Bytes.toBytes("test-key"));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
            table.put(put);
        } finally {
            admin.close();
        }
    }
}
I get the following error:
15/05/30 01:24:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection0x0
15/05/30 01:24:20 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/05/30 01:24:20 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
15/05/30 01:24:21 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
15/05/30 01:24:21 INFO util.RetryCounter: Sleeping 2000ms before retry #1...
15/05/30 01:24:22 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/05/30 01:24:22 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
I tried searching for this error but haven't found a solution so far.
Does anybody have experience with this problem? Please help me.
In addition, my hbase-site.xml looks like this:
<configuration>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:54310/hbase</value>
   </property>
   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/hduser/zookeeper</value>
      <description>Property from ZooKeeper's config zoo.cfg.
      The directory where the snapshot is stored.
      </description>
   </property>
</configuration>
My /etc/hosts looks like this:
127.0.0.1 localhost
127.0.0.1 testuser-VirtualBox
Thanks
Please look here for more references on your error, in case you missed it. Happy coding :)

Hadoop HA setup : not able to connect to zookeeper

I am trying to set up Hadoop HA following the below article.
http://hashprompt.blogspot.in/2015/01/fully-distributed-hadoop-cluster.html
After the configuration, when I try to run
hdfs zkfc -formatZK
I get the following error.
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-2.6.0/lib/native
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:os.version=3.13.0-32-generic
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:user.name=huser
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/huser
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hadoop-2.6.0/sbin
15/03/30 12:18:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=mo-4594ddc63.mo.sap.corp:2181,mo-6dd5bf8b8.mo.sap.corp:2181,mo-e7b2822cb.mo.sap.corp:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@4d9e68d0
15/03/30 12:18:14 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-4594ddc63.mo.sap.corp/10.97.155.65:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:14 INFO zookeeper.ClientCnxn: Socket connection established to mo-4594ddc63.mo.sap.corp/10.97.155.65:2181, initiating session
15/03/30 12:18:14 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
15/03/30 12:18:15 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-e7b2822cb.mo.sap.corp/10.97.136.84:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:15 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/30 12:18:15 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-6dd5bf8b8.mo.sap.corp/10.97.156.12:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:15 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/30 12:18:17 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-4594ddc63.mo.sap.corp/10.97.155.65:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:17 INFO zookeeper.ClientCnxn: Socket connection established to mo-4594ddc63.mo.sap.corp/10.97.155.65:2181, initiating session
15/03/30 12:18:17 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
15/03/30 12:18:17 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-e7b2822cb.mo.sap.corp/10.97.136.84:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:17 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/30 12:18:18 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-6dd5bf8b8.mo.sap.corp/10.97.156.12:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:18 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/30 12:18:19 ERROR ha.ActiveStandbyElector: Connection timed out: couldn't connect to ZooKeeper in 5000 milliseconds
15/03/30 12:18:19 INFO zookeeper.ClientCnxn: Opening socket connection to server mo-4594ddc63.mo.sap.corp/10.97.155.65:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/30 12:18:19 INFO zookeeper.ClientCnxn: Socket connection established to mo-4594ddc63.mo.sap.corp/10.97.155.65:2181, initiating session
15/03/30 12:18:20 INFO zookeeper.ZooKeeper: Session: 0x0 closed
15/03/30 12:18:20 INFO zookeeper.ClientCnxn: EventThread shut down
15/03/30 12:18:20 FATAL ha.ZKFailoverController: Unable to start failover controller. Unable to connect to ZooKeeper quorum at mo-4594ddc63.mo.sap.corp:2181,mo-6dd5bf8b8.mo.sap.corp:2181,mo-e7b2822cb.mo.sap.corp:2181. Please check the configured value for ha.zookeeper.quorum and ensure that ZooKeeper is running.
After ZooKeeper installation (for which I followed http://rajsyrus.blogspot.sg/2014/04/configuring-hadoop-high-availability.html), I started the ZooKeeper service on each node with the
./zkServer.sh start
command but then when I see status of it using
./zkServer.sh status
the following result appears:
JMX enabled by default
Using config: /home/huser/zookeeper-3.4.6/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
This suggests it may not be running properly.
Content of zoo.cfg
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/huser/zookeeper/data/
dataLogDir=/home/huser/zookeeper/log/
server.1=mo-4594ddc63.mo.sap.corp:2888:3888
server.2=mo-6dd5bf8b8.mo.sap.corp:2888:3888
server.3=mo-e7b2822cb.mo.sap.corp:2888:3888
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Content of core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://auto-ha</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>mo-4594ddc63.mo.sap.corp:2181,mo-6dd5bf8b8.mo.sap.corp:2181,mo-e7b2822cb.mo.sap.corp.hadoop.lab:2181</value>
</property>
</configuration>
Content of hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///hdfs/data</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>auto-ha</value>
</property>
<property>
<name>dfs.ha.namenodes.auto-ha</name>
<value>nn01,nn02</value>
</property>
<property>
<name>dfs.namenode.rpc-address.auto-ha.nn01</name>
<value>mo-4594ddc63.mo.sap.corp:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.auto-ha.nn01</name>
<value>mo-4594ddc63.mo.sap.corp:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address.auto-ha.nn02</name>
<value>mo-6dd5bf8b8.mo.sap.corp:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.auto-ha.nn02</name>
<value>mo-6dd5bf8b8.mo.sap.corp:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://mo-4594ddc63.mo.sap.corp:8485;mo-6dd5bf8b8.mo.sap.corp:8485;mo-e7b2822cb.mo.sap.corp:8485/auto-ha</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/hdfs/journalnode</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/huser/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled.auto-ha</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>mo-4594ddc63.mo.sap.corp:2181,mo-6dd5bf8b8.mo.sap.corp:2181,mo-e7b2822cb.mo.sap.corp:2181</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.auto-ha</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
</configuration>
Any pointer to the error resolution would be of great help.
Regards,
Subhankar
EDIT
After doing what Rajesh mentioned in his answer, it seemed to be working, as there were no errors. However, after setup, running the Pi example shows the following error.
huser@mo-4594ddc63:~$ hadoop jar /opt/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 8 10000
Number of Maps = 8
Samples per Map = 10000
15/03/31 13:23:08 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/huser/QuasiMonteCarlo_1427808186022_1353266286/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/huser/QuasiMonteCarlo_1427808186022_1353266286/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
15/03/31 13:23:08 ERROR hdfs.DFSClient: Failed to close inode 16390
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/huser/QuasiMonteCarlo_1427808186022_1353266286/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
It seems the datanodes are not running.
Any pointer as to what the error could be?
EDIT2
After several retries, I stopped everything and started all the nodes again. But it seems namenode02 is now not starting. When I run the command hdfs haadmin -getServiceState nn02, I get this error: Operation failed: Call From mo-4594ddc63/10.97.155.65 to mo-6dd5bf8b8 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: wiki.apache.org/hadoop/ConnectionRefused
Logs from NameNode02, which was not reachable:
2015-03-30 12:58:04,837 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 10.97.155.65:60502 Call#229 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category JOURNAL is not supported in state standby
2015-03-30 12:58:52,094 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Triggering log roll on remote NameNode mo-4594ddc63.mo.sap.corp/10.97.155.65:8020
2015-03-30 12:58:52,103 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category JOURNAL is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1719)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1350)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6336)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:933)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:139)
at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:11214)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy15.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:145)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
In the datanode, I found these logs:
java.io.EOFException: End of File Exception between local host is: "mo-217e677f3.mo.sap.corp/10.97.168.28"; destination host is: "mo-4594ddc63.mo.sap.corp":8020; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy12.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:139)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:582)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:680)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1071)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:966)
/etc/hosts file at each node
10.97.156.12 localhost
10.97.156.12 mo-6dd5bf8b8.mo.sap.corp mo-6dd5bf8b8
10.97.155.65 mo-4594ddc63.mo.sap.corp
#10.97.156.12 mo-6dd5bf8b8.mo.sap.corp
10.97.136.84 mo-e7b2822cb.mo.sap.corp
10.97.168.28 mo-217e677f3.mo.sap.corp
10.97.157.82 mo-fd6fa7b57.mo.sap.corp
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
::1 ip6-localhost ip6-loopback
fe00:: ip6-localnet
ff00:: ip6-mcastprefix
OS on each node: Ubuntu 12.04
Change this in zoo.cfg:
server.1=mo-4594ddc63.mo.sap.corp:2888:3888
server.2=mo-6dd5bf8b8.mo.sap.corp:2888:3888
server.3=mo-e7b2822cb.mo.sap.corp:2888:3888
to
server.1=mo-4594ddc63.mo.sap.corp:2888:3888
server.2=mo-6dd5bf8b8.mo.sap.corp:2889:3889
server.3=mo-e7b2822cb.mo.sap.corp:2890:3890
Now start ZooKeeper and check the status.
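If the status check still fails, a plain socket probe can tell you whether each quorum member is actually listening on its client port. A minimal sketch, using the host names and the clientPort from the question's zoo.cfg:

import java.net.InetSocketAddress;
import java.net.Socket;

public class ZkProbe {
    public static void main(String[] args) {
        String[] hosts = {
            "mo-4594ddc63.mo.sap.corp",
            "mo-6dd5bf8b8.mo.sap.corp",
            "mo-e7b2822cb.mo.sap.corp"
        };
        for (String host : hosts) {
            try (Socket s = new Socket()) {
                // 2181 is the clientPort configured in zoo.cfg.
                s.connect(new InetSocketAddress(host, 2181), 3000);
                System.out.println(host + ": ZooKeeper port reachable");
            } catch (Exception e) {
                System.out.println(host + ": NOT reachable (" + e.getMessage() + ")");
            }
        }
    }
}

A "Connection refused" from this probe on a given host means the ZooKeeper process there never bound the port, which matches the "Error contacting service" status output above.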

Can not connect to HBase from Java Program

I am trying to connect to a standalone single-node HBase installed on my Ubuntu machine from a Java program.
I followed the steps given in this blog:
https://autofei.wordpress.com/2012/04/02/java-example-code-using-hbase-data-model-operations/
I am able to connect to HBase on an EMR cluster when I run this code on AWS EC2, but I am not able to do so on my local machine. My Hadoop is running, I am able to open the HBase shell, and scan 'storetable' shows me some rows without any exception. It seems the program goes into an infinite loop at the line
table = new HTable(HBaseConfig, "storetable");
because the message "HBase table created..." is never printed. There is no exception caught by the catch block.
Please help me out.
I appreciate your help.
Code:
public void connectHBase()
{
    System.out.println("Trying to establish HBase connection...");
    HBaseConfig = HBaseConfiguration.create();
    HBaseConfig.set("hbase.zookeeper.quorum", "localhost");
    HBaseConfig.set("hbase.zookeeper.property.clientPort", "2181");
    System.out.println("HBase Connection succeded...");
    try
    {
        System.out.println("Creating HBase table...");
        table = new HTable(HBaseConfig, "storetable");
        System.out.println("HBase table created...");
    }
    catch(Exception e)
    {
        System.out.println("Some exception occured...");
        e.printStackTrace();
    }
}
Console Output:
Trying to establish HBase connection...
HBase Connection succeded...
Creating HBase table...
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_76
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/sankalp/workspace/AWSHadoopProject/bin:/home/sankalp/aws-java-sdk/1.9.27/lib/aws-java-sdk-1.9.27.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/javax-mail-1.4.6/javax.mail-api-1.4.6.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/joda-time-2.2/joda-time-2.2.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/jackson-annotations-2.3.0/jackson-annotations-2.3.0.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/freemarker-2.3.18/freemarker-2.3.18.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/httpcomponents-client-4.3/httpcore-4.3.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/httpcomponents-client-4.3/httpclient-4.3.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/jackson-core-2.3.2/jackson-core-2.3.2.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/commons-logging-1.1.3/commons-logging-1.1.3.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/spring-3.0/spring-context-3.0.7.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/spring-3.0/spring-beans-3.0.7.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/spring-3.0/spring-core-3.0.7.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/jackson-databind-2.3.2/jackson-databind-2.3.2.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/aspectj-1.6/aspectjweaver.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/aspectj-1.6/aspectjrt.jar:/home/sankalp/aws-java-sdk/1.9.27/third-party/commons-codec-1.6/commons-codec-1.6.jar:/home/sankalp/workspace/AWSHadoopProject/undertow-examples-1.2.0.Beta9.jar:/home/sankalp/workspace/AWSHadoopProject/commons-configuration-1.8.jar:/home/sankalp/workspace/AWSHadoopProject/commons-lang-2.6.jar:/home/sankalp/workspace/AWSHadoopProject/commons-logging-1.1.1.jar:/home/sankalp/workspace/AWSHadoopProject/hadoop-core-1.0.0.jar:/home/sankalp/workspace/AWSHadoopProject/hbase-0.92.1.jar:/home/sankalp/workspace/AWSHadoopProject/log4j-1.2.16.jar:/home/sankalp/workspace/AWSHadoopProject/slf4j-api-1.5.8.jar:/home/sankalp/workspace/AWSHadoopProject/slf4j-log4j12-1.5.8.jar:/home/sankalp/workspace/AWSHadoopProject/zookeeper-3.4.3.jar:/home/sankalp/workspace/AWSHadoopProject/json-simple-1.1.1.jar
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:os.version=3.16.0-33-generic
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:user.name=sankalp
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/sankalp
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/sankalp/workspace/AWSHadoopProject
15/03/29 16:53:37 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
15/03/29 16:53:37 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
15/03/29 16:53:37 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
15/03/29 16:53:37 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 9146@skynet
15/03/29 16:53:37 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/03/29 16:53:37 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14c6746b9f4000d, negotiated timeout = 40000
Check your jar versions: use the same versions of the jars as the HBase you are trying to connect to. Also check your hosts file for the loopback address.
I have faced this issue where creating a table was not working from Java. In my case I hadn't included hbase-site.xml on my classpath.
You can refer to the link below:
Not able to create hbase using java
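If the client appears to hang rather than fail, it can also help to cap the client retry settings so misconfiguration surfaces as an exception quickly. A minimal sketch, assuming the 0.9x-era API from the question; the property names should be double-checked against your client version:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class FailFastConnect {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Cap the retry counts so a bad connection throws instead of looping.
        conf.setInt("hbase.client.retries.number", 3);
        conf.setInt("zookeeper.recovery.retry", 1);
        HTable table = new HTable(conf, "storetable");
        System.out.println("HBase table opened...");
        table.close();
    }
}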

cannot connect to hbase because of zookeeper

To connect to HBase, I wrote this code:
Class.forName("com.salesforce.phoenix.jdbc.PhoenixDriver");
conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
but running it gives me these errors:
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:host.name=ubuntu
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_25
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/jdk1.7.0_25/jre
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/ubuntu/Phonix/phoenix-2.0.0-client.jar:/home/ubuntu/Downloads/hbql-0.90.0.1/hbql-0.90.0.1-src.jar:/home/ubuntu/Downloads/hbql-0.90.0.1/hbql-0.90.0.1.jar:/home/ubuntu/Downloads/protobuf-java-2.4.1.jar:/home/ubuntu/NetBeansProjects/hbase-phoenix/build/classes
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/jdk1.7.0_25/jre/lib/amd64:/usr/local/jdk1.7.0_25/jre/lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-23-generic-pae
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:user.name=ubuntu
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/ubuntu
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/ubuntu/NetBeansProjects/hbase-phoenix
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
13/08/22 09:14:14 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4944@ubuntu
13/08/22 09:14:14 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/08/22 09:14:14 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
13/08/22 09:14:15 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
13/08/22 09:14:15 INFO util.RetryCounter: Sleeping 2000ms before retry #1...
I cannot understand what the problem is.
I installed HBase 0.94.10 and ZooKeeper 3.4.5 individually, and I am not sure the configuration is correct. Can you guide me on how to configure them correctly?
Did you make sure to copy the Phoenix server jar (it's called just phoenix-*.jar; I think it should be phoenix-2.0.0.jar in your case) to all of your region servers?
Also ensure that the location of the Phoenix jar is appended to the HBase classpath. You may need to put the following in the hbase-env.sh of all your region servers:
HBASE_CLASSPATH=$HBASE_CLASSPATH:/path/to/phoenix-2.0.0.jar
Afterwards you need to restart the cluster. Then phoenix will work.
You can also read this installation guide of their github project page.
UPDATE:
I just saw that they updated their documentation. The last version of the documentation was more straightforward, but I think you will manage...
Adding answer for anyone still looking:
Your JDBC connection string must look like:
jdbc:phoenix:zookeeper_quorum:2181:/hbase_znode
OR;
jdbc:phoenix:zookeeper_quorum:/hbase_znode
(By default ZooKeeper listens on port 2181.)
zookeeper_quorum - can be a comma-separated list of server names (must be fully qualified DNS names)
hbase_znode - hbase or hbase-unsecured
e.g.
jdbc:phoenix:server1.abc.com,server2.abc.com:2181:/hbase
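For completeness, a minimal JDBC sketch using such a connection string; the quorum hosts come from the example above, and "my_table" is a placeholder for an existing table:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixConnect {
    public static void main(String[] args) throws Exception {
        // Driver class for the old Salesforce-era Phoenix 2.x used in the question;
        // newer Apache releases use org.apache.phoenix.jdbc.PhoenixDriver instead.
        Class.forName("com.salesforce.phoenix.jdbc.PhoenixDriver");
        String url = "jdbc:phoenix:server1.abc.com,server2.abc.com:2181:/hbase";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // "my_table" is a placeholder; substitute an existing table.
             ResultSet rs = stmt.executeQuery("SELECT * FROM my_table LIMIT 1")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}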

Running MapReduce on HBase gives Zookeeper error

I am doing a test project with Hadoop and HBase. Currently the cluster has 2 Ubuntu VMs hosted on a Windows machine.
I am able to perform PUT, QUERY and DELETE operations remotely (from my host machine) using the following HBase Java API configuration:
config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "192.168.56.90");
config.set("hbase.zookeeper.property.clientPort", "2222");
When I try to run an HBase MapReduce job on Windows with the same config as above, I get the following error:
13/03/24 06:11:03 ERROR security.UserGroupInformation: PriviledgedActionException as:Joel cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Joel\mapred\staging\Joel290889388\.staging to 0700
java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Joel\mapred\staging\Joel290889388\.staging to 0700
From what I have read on the web, there seems to be a problem with running MapReduce jobs on Windows. So I tried running the MapReduce job on Linux using "java -jar MR.jar".
On Linux, I can't connect to ZooKeeper. For some unknown reason, the ZooKeeper host and port get reset on the client side:
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Client environment:os.version=3.5.0-23-generic
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Client environment:user.name=hduser
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hduser
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hduser/testes
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.56.90:2222 sessionTimeout=180000 watcher=hconnection
13/03/24 05:59:33 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 11552@node01
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: Opening socket connection to server node01/192.168.56.90:2222. Will not attempt to authenticate using SASL (unknown error)
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: Socket connection established to node01/192.168.56.90:2222, initiating session
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: Session establishment complete on server node01/192.168.56.90:2222, sessionid = 0x13d9afaa1a30006, negotiated timeout = 180000
13/03/24 05:59:33 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x13d9afaa1a30006
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Session: 0x13d9afaa1a30006 closed
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: EventThread shut down
13/03/24 05:59:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/03/24 05:59:33 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
13/03/24 05:59:33 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 11552@node01
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/03/24 05:59:33 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
Judging from the log above, it connects correctly to node01:2222 (node01 resolves to 192.168.56.90). But for some reason it changes to localhost:2181 and then gives a connection refused error.
How can I fix this issue and get MR jobs running on Linux, on the same machine where ZooKeeper is running?
Version: HBase 0.94.5 / Hadoop 1.1.2
Thanks.
You may need to set hbase.master as well.
Also check the /etc/hosts file and see if it is correct. Are you able to telnet to ZooKeeper using that connection info?
config.set("hbase.zookeeper.quorum", "192.168.56.90");
config.set("hbase.zookeeper.property.clientPort", "2222");
config.set("hbase.master", "some.host.com:60000")
