I just followed the blog below to run a simple bulk load, but it's throwing a KosmosFileSystem FileSystem exception.
http://www.thecloudavenue.com/2013/04/bulk-loading-data-in-hbase.html
Here is the log that was generated after running the following command:
hadoop jar HbaseBulkImport.jar /user/hduser/hbase/input/RowFeeder.csv /user/hduser/hbase/ouput/ NBAFinal2010
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.2.0.0-2041/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.2.0.0-2041/hadoop/lib/native
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.12.2.el6.x86_64
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:user.name=hduser
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hduser
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hduser/user/shashi/hbase/bulkLoad
15/04/16 12:40:35 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x2dd06f21, quorum=localhost:2181, baseZNode=/hbase-unsecure
15/04/16 12:40:35 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2dd06f21 connecting to ZooKeeper ensemble=localhost:2181
15/04/16 12:40:35 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/04/16 12:40:35 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/04/16 12:40:35 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x34cb73f98050030, negotiated timeout = 40000
Exception in thread "main" java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:426)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:403)
at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:281)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:207)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:169)
at Driver.main(Driver.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:424)
... 11 more
Caused by: java.lang.ExceptionInInitializerError
at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:106)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:858)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:662)
... 16 more
Caused by: java.lang.UnsupportedOperationException: Not implemented by the KosmosFileSystem FileSystem implementation
at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:216)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2564)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2574)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:226)
... 21 more
It looks like it was a jar-related issue. After using the latest jars from the HBase lib directory, the issue was resolved.
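In case it helps anyone debugging a similar conflict: one way to find out which jar a class (for example the offending FileSystem implementation) is actually loaded from is to inspect its code source. This is a minimal stdlib-only sketch; JarLocator and its locate helper are hypothetical names, not part of Hadoop or HBase:

```java
import java.net.URL;
import java.security.CodeSource;

public class JarLocator {
    /** Returns the jar/directory a class was loaded from, or null for
     *  bootstrap classes or classes that cannot be found. */
    public static URL locate(String className) {
        try {
            CodeSource src = Class.forName(className)
                    .getProtectionDomain().getCodeSource();
            return src == null ? null : src.getLocation();
        } catch (ClassNotFoundException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // With the Hadoop/HBase jars on the classpath, you could pass e.g.
        // "org.apache.hadoop.fs.FileSystem" to see which jar wins.
        System.out.println(locate("java.lang.String")); // bootstrap class -> null
    }
}
```

Running this against the conflicting class name on the job's classpath makes stale jars (like an old kfs-*.jar) easy to spot.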
Related
I followed this tutorial and got stuck when I tried to inject URLs into Nutch from Hadoop. I configured the Nutch files as described in the tutorial by copying the Hadoop conf files to the Nutch conf directory. When I ran ant runtime with the files configured according to the first tutorial, it did not work.
ubuntu@ip-172-31-35-238:~/apache-nutch-2.2.1/runtime/deploy$ bin/nutch inject urls
Warning: $HADOOP_HOME is deprecated.
15/07/27 12:01:07 INFO crawl.InjectorJob: InjectorJob: starting at 2015-07-27 12:01:07
15/07/27 12:01:07 INFO crawl.InjectorJob: InjectorJob: Injecting urlDir: urls
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:host.name=master
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-oracle/jre
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/ubuntu/hadoop-1.2.1/libexec/../conf:/usr/lib/jvm/java-6-oracle/lib/tools.jar:/home/ubuntu/hadoop-1.2.1/libexec/..:/home/ubuntu/hadoop-1.2.1/libexec/../hadoop-core-1.2.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/asm-3.2.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/aspectjrt-1.6.11.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/aspectjtools-1.6.11.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-cli-1.2.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-codec-1.4.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-collections-3.2.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-configuration-1.6.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-daemon-1.0.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-digester-1.8.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-el-1.0.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-io-2.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-lang-2.4.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-logging-1.1.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-math-2.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/commons-net-3.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/core-3.1.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/hadoop-capacity-scheduler-1.2.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/hadoop-fairscheduler-1.2.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/hadoop-thriftfs-1.2.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jdeb-0.8.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jersey-core-1.8.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jersey-json-1.8.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jersey-server-1.8.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jets3t-0.6.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jetty-6.1.26.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jetty-util-6.1.26.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jsch-0.1.42.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/junit-4.5.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/kfs-0.2.2.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/log4j-1.2.15.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/mockito-all-1.8.5.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/oro-2.0.8.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/slf4j-api-1.4.3.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/xmlenc-0.52.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ubuntu/hadoop-1.2.1/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/ubuntu/hadoop-1.2.1/libexec/../lib/native/Linux-amd64-64
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-75-virtual
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:user.name=ubuntu
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/ubuntu
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/ubuntu/apache-nutch-2.2.1/runtime/deploy
15/07/27 12:01:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
15/07/27 12:01:10 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
15/07/27 12:01:10 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/07/27 12:01:10 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14ecf53dd5f0007, negotiated timeout = 180000
Can somebody help me?
To connect to HBase, I wrote this code:
Class.forName("com.salesforce.phoenix.jdbc.PhoenixDriver");
conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
but after running it, I get these errors:
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:host.name=ubuntu
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_25
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/jdk1.7.0_25/jre
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/ubuntu/Phonix/phoenix-2.0.0-client.jar:/home/ubuntu/Downloads/hbql-0.90.0.1/hbql-0.90.0.1-src.jar:/home/ubuntu/Downloads/hbql-0.90.0.1/hbql-0.90.0.1.jar:/home/ubuntu/Downloads/protobuf-java-2.4.1.jar:/home/ubuntu/NetBeansProjects/hbase-phoenix/build/classes
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/jdk1.7.0_25/jre/lib/amd64:/usr/local/jdk1.7.0_25/jre/lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:os.version=3.2.0-23-generic-pae
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:user.name=ubuntu
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/ubuntu
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/ubuntu/NetBeansProjects/hbase-phoenix
13/08/22 09:14:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
13/08/22 09:14:14 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4944@ubuntu
13/08/22 09:14:14 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/08/22 09:14:14 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
13/08/22 09:14:15 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception:org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
13/08/22 09:14:15 INFO util.RetryCounter: Sleeping 2000ms before retry #1...
I cannot understand what the problem is. I installed HBase 0.94.10 and ZooKeeper 3.4.5 individually, and I am not sure the configuration is correct. Can you guide me on how to configure them correctly?
Did you make sure to copy the Phoenix server jar (it's called just phoenix-*.jar; I think it should be phoenix-2.0.0.jar in your case) to all of your region servers?
Also ensure that the location of the Phoenix jar is appended to the HBase classpath. You need to put the following, for example in the hbase-env.sh of all your region servers:
HBASE_CLASSPATH=$HBASE_CLASSPATH:/path/to/phoenix-2.0.0.jar
Afterwards you need to restart the cluster; then Phoenix will work.
You can also read the installation guide on their GitHub project page.
UPDATE:
I just saw that they updated their documentation. The last version of the documentation was more straightforward, but I think you will manage...
Adding an answer for anyone still looking. Your JDBC connection string must look like:
jdbc:phoenix:zookeeper_quorum:2181:/hbase_znode
or:
jdbc:phoenix:zookeeper_quorum:/hbase_znode
(By default ZooKeeper listens on port 2181.)
zookeeper_quorum - can be a comma-separated list of server names (they must be fully qualified DNS names)
hbase_znode - hbase or hbase-unsecure
e.g.
jdbc:phoenix:server1.abc.com,server2.abc.com:2181:/hbase
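The URL format above can be sketched as a small helper. This is a stdlib-only illustration of the string layout, under the rules just described; buildPhoenixUrl is a hypothetical name, not part of the Phoenix API:

```java
public class PhoenixUrl {
    /**
     * Builds a Phoenix JDBC URL from a ZooKeeper quorum, an optional client
     * port (ZooKeeper's default is 2181), and an optional HBase znode
     * (e.g. "hbase" or "hbase-unsecure").
     */
    public static String buildPhoenixUrl(String quorum, Integer port, String znode) {
        StringBuilder sb = new StringBuilder("jdbc:phoenix:").append(quorum);
        if (port != null) {
            sb.append(':').append(port);
        }
        if (znode != null) {
            sb.append(":/").append(znode);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildPhoenixUrl("server1.abc.com,server2.abc.com", 2181, "hbase"));
        // jdbc:phoenix:server1.abc.com,server2.abc.com:2181:/hbase
    }
}
```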
I'm trying to connect to HBase installed on the local system (using Hortonworks 1.1.1.16) from a small Java program, which executes the following command:
HBaseAdmin.checkHBaseAvailable(conf);
It is worth saying that there is no problem at all when connecting to HBase from the command line using the hbase command.
The content of the hosts file is the following (where example.com stands for the actual host name):
127.0.0.1 localhost example.com
HBase is configured to work in standalone mode:
hbase.cluster.distributed=false
When executing the program, the following exception is thrown:
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_19
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.19.x86_64/jre
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.class.path=[...]
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.2.1.el6.x86_64
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.name=root
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.dir=/root/git/project
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=example.com:2181 sessionTimeout=60000 watcher=hconnection-0x678e4593
13/05/13 15:18:29 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is hconnection-0x678e4593
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13e9d6851af0046, negotiated timeout = 40000
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: ClusterId is cccadf06-f6bf-492e-8a39-e8beac521ce6
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 1 of 1 failed; no more retrying.
com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:149)
at com.sun.proxy.$Proxy5.isMasterRunning(Unknown Source)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterInterface(HConnectionManager.java:732)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:764)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterProtocol(HConnectionManager.java:1724)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterMonitor(HConnectionManager.java:1757)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:837)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2010)
at TestHBase.main(TestHBase.java:37)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:94)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:450)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.writeConnectionHeader(HBaseClient.java:896)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:847)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1414)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1299)
at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:131)
... 8 more
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x13e9d6851af0046
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Session: 0x13e9d6851af0046 closed
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: EventThread shut down
org.apache.hadoop.hbase.exceptions.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:793)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterProtocol(HConnectionManager.java:1724)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterMonitor(HConnectionManager.java:1757)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:837)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2010)
at TestHBase.main(TestHBase.java:37)
Caused by: com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:149)
at com.sun.proxy.$Proxy5.isMasterRunning(Unknown Source)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterInterface(HConnectionManager.java:732)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:764)
... 5 more
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:94)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:450)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.writeConnectionHeader(HBaseClient.java:896)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:847)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1414)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1299)
at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:131)
... 8 more
This trace provides some evidence of what may actually be happening. It seems that the connection to ZooKeeper is established, but something fails when the client tries to access the master.
Though I've spent hours searching for a solution on Google, I haven't seen this exact exception. In particular, it differs in two ways from most of the reports found elsewhere:
Everybody else seems to get the error getMaster attempt 0 of 1 failed rather than getMaster attempt 1 of 1 failed. I don't know whether this means anything, but I find it somewhat odd.
I can't find other people getting the Broken pipe error.
By the way, the master is actually running, as far as I can see in the Hortonworks Management Console.
When looking at the most recent logs, this is the output:
2013-05-13 15:30:07,192 WARN org.apache.hadoop.ipc.HBaseServer: Incorrect header or version mismatch from 127.0.0.1:40788 got version 0 expected version 3
As it is a warning rather than an error, I don't know whether it has anything to do with the actual problem. The port varies on each execution.
We finally found the problem and solved it. It turned out to be a dependency problem. We were using hbase-0.95.0 and hbase-client-0.95.0; using hbase-0.94.7 or hbase-0.94.9 seemed to work.
Yet some problems still occurred under certain circumstances even with those versions of the HBase library. In particular, problems arose when running inside an application server (JBoss AS7). In the end, all problems seem to be solved by removing the hbase-client-0.95.0 dependency and replacing it with hadoop-core-1.1.2, as some classes not contained in the HBase libraries were required.
Regards.
I'd recommend first checking that your HBase Master / RegionServer ports are really bound, with netstat -n -a. I once had a situation where the HBase Master IPC was bound only to the external IP (this was Cloudera CDH), so it was not reachable through 127.0.0.1. That looks like the most probable case for you - the hbase shell would still work in that situation.
Another possible reason could be a previous cluster crash that left some HDFS data corrupted. In that case HBase does not actually start, waiting for HDFS to exit safe mode. But this does not look like your case. If it is, you can manually force HDFS to exit safe mode from the console, then run fsck for Hadoop and a similar procedure for HBase.
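For reference, on a Hadoop 1.x / HBase 0.94 setup like this one, that manual procedure would look roughly as follows (run on the console as the HDFS superuser; a sketch, adjust to your installation):

hadoop dfsadmin -safemode leave    (force the NameNode out of safe mode)
hadoop fsck /                      (check HDFS for corrupt or missing blocks)
hbase hbck                         (then check HBase table consistency)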
I am doing a test project with Hadoop and HBase. Currently the cluster has 2 Ubuntu VMs hosted on a Windows machine.
I am able to perform PUT, QUERY and DELETE operations remotely (from my host machine) using the following HBase Java API configuration:
config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "192.168.56.90");
config.set("hbase.zookeeper.property.clientPort", "2222");
When I try to run an HBase MapReduce job on Windows with the same config as above, I get the following error:
13/03/24 06:11:03 ERROR security.UserGroupInformation: PriviledgedActionException as:Joel cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Joel\mapred\staging\Joel290889388\.staging to 0700
java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Joel\mapred\staging\Joel290889388\.staging to 0700
From what I have read on the web, there seems to be a problem with running MapReduce jobs on Windows, so I tried running the MapReduce job on Linux using java -jar MR.jar.
On Linux, I can't connect to ZooKeeper. For some unknown reason, the ZooKeeper host and port get reset on the client side:
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Client environment:os.version=3.5.0-23-generic
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Client environment:user.name=hduser
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hduser
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hduser/testes
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.56.90:2222 sessionTimeout=180000 watcher=hconnection
13/03/24 05:59:33 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 11552@node01
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: Opening socket connection to server node01/192.168.56.90:2222. Will not attempt to authenticate using SASL (unknown error)
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: Socket connection established to node01/192.168.56.90:2222, initiating session
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: Session establishment complete on server node01/192.168.56.90:2222, sessionid = 0x13d9afaa1a30006, negotiated timeout = 180000
13/03/24 05:59:33 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x13d9afaa1a30006
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Session: 0x13d9afaa1a30006 closed
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: EventThread shut down
13/03/24 05:59:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/03/24 05:59:33 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/03/24 05:59:33 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
13/03/24 05:59:33 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 11552@node01
13/03/24 05:59:33 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/03/24 05:59:33 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
Judging from the log above, it connects correctly to node01:2222 (node01 resolves to 192.168.56.90). But for some reason it then switches to localhost:2181 and gives a connection refused error.
How can I fix this issue so that MR jobs run on Linux, on the same machine where ZooKeeper is running?
Version: Hbase 0.94.5 / Hadoop 1.1.2
Thanks.
You may need to set hbase.master as well.
Also check the /etc/hosts file and make sure it is correct. Are you able to telnet to ZooKeeper using that connection info?
config.set("hbase.zookeeper.quorum", "192.168.56.90");
config.set("hbase.zookeeper.property.clientPort", "2222");
config.set("hbase.master", "some.host.com:60000")
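If telnet is not handy, the same reachability check can be sketched with plain java.net. This is an illustration only; canConnect is a hypothetical helper, and the host/port in main are the ones from the question:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Probe the ZooKeeper quorum address used in the configuration above.
        System.out.println(canConnect("192.168.56.90", 2222, 2000));
    }
}
```

If this prints false from the machine running the MR job, the problem is network/hosts configuration rather than the HBase client settings.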
Does anyone know why this error appears? The code was working before, but now it is not working on my system, although it works successfully on other systems. The error log is below. Please help me.
I think there may be a version incompatibility, but I don't know what exactly it is.
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.1-942149, built on 05/07/2010 17:14 GMT
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:host.name=User-PC
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:java.home=C:\Program Files\Java\jdk1.7.0\jre
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:java.class.path=D:\apache-tomcat-7.0.30\bin\bootstrap.jar;D:\apache-tomcat-7.0.30\bin\tomcat-juli.jar
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\TortoiseSVN\bin;D:\apache-maven-3.0.4\bin;.
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=D:\apache-tomcat-7.0.30\temp
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:os.name=Windows 7
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:os.arch=x86
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:os.version=6.1
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:user.name=User
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:user.home=C:\Users\User
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Client environment:user.dir=D:\apache-tomcat-7.0.30\bin
12/10/18 14:21:59 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.1.240:2222 sessionTimeout=180000 watcher=hconnection
12/10/18 14:21:59 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.1.240:2222
12/10/18 14:22:00 INFO zookeeper.ClientCnxn: Socket connection established to JAI-3/192.168.1.240:2222, initiating session
12/10/18 14:22:00 INFO zookeeper.ClientCnxn: Session establishment complete on server JAI-3/192.168.1.240:2222, sessionid = 0x13a6d1375aa0244, negotiated timeout = 40000
12/10/18 14:22:02 ERROR hbase.HServerAddress: Could not resolve the DNS name of slave1
java.lang.IllegalArgumentException: hostname can't be null
Error while establishing connection to HBASE
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:139)
at org.apache.hadoop.hbase.HServerAddress.getResolvedAddress(HServerAddress.java:108)
at org.apache.hadoop.hbase.HServerAddress.<init>(HServerAddress.java:64)
at org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:63)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:354)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:94)
at com.wlu.orm.hbase.connection.HBaseConnection.<init>(HBaseConnection.java:28)
at com.project.common.HBaseConnectionWrapper.<init>(HBaseConnectionWrapper.java:31)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:147)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:110)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:280)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1035)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:939)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:384)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:283)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4791)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5285)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:618)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:650)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1582)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
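Since the trace shows `HServerAddress` failing to resolve `slave1` (the hostname ZooKeeper hands back for the master), I plan to rule out a plain DNS problem by checking resolution from the client machine with a small standalone check like this (`slave1` here is just the name taken from my log):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    public static void main(String[] args) {
        // Hostname from the HBase error log; pass a different one as the first argument
        String host = args.length > 0 ? args[0] : "slave1";
        try {
            InetAddress addr = InetAddress.getByName(host);
            System.out.println(host + " resolves to " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            System.out.println("Cannot resolve '" + host
                    + "' from this machine -- the HBase client will fail the same way");
        }
    }
}
```

If this fails on my Windows client, I assume adding an entry for `slave1` to `C:\Windows\System32\drivers\etc\hosts` (or to DNS) would be the fix, since the cluster nodes apparently know each other by short hostnames that my machine has never heard of.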