I am trying to write data from Talend to a Hortonworks sandbox Hadoop setup.
While executing the Talend job to send the data, it generates the following exception and warning in the console. The Talend job creates the file in the Hortonworks environment, but it is not able to write any data to it.
[statistics] connecting to socket on port 3740
[statistics] connected
[WARN ]: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in component tHDFSOutput_1
java.io.IOException: DataStreamer Exception:
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:697)
Caused by: java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Unknown Source)
        at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception
java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Unknown Source)
        at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
[statistics] disconnected
[ERROR]: org.apache.hadoop.hdfs.DFSClient - Failed to close inode 17106
java.io.IOException: DataStreamer Exception:
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:697)
Caused by: java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Unknown Source)
        at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
Job test ended at 16:50 18/03/2017. [exit code=1]
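The stack trace shows java.nio.channels.UnresolvedAddressException thrown from DFSOutputStream.createSocketForPipeline, which is raised when the client cannot resolve the hostname of the DataNode that the NameNode returned for the write pipeline. A minimal check from the machine running the Talend job, assuming the sandbox advertises itself as sandbox.hortonworks.com (the hostname and IP below are assumptions, substitute whatever the sandbox actually reports):
# does the DataNode hostname advertised by the sandbox resolve from the Talend machine?
nslookup sandbox.hortonworks.com
# if it does not resolve, map it to the sandbox IP in the client's hosts file, e.g.
# 192.168.1.100   sandbox.hortonworks.com      (entry in /etc/hosts)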
I am getting a ConnectionRefused error when running Pig in mapreduce mode.
Details:
I have installed Pig from a tarball (pig-0.14) and exported the classpath in .bashrc.
I have all the Hadoop (hadoop-2.5) daemons up and running (confirmed by JPS).
[root@localhost sbin]# jps
2272 Jps
2130 DataNode
2022 NameNode
2073 SecondaryNameNode
2238 NodeManager
2190 ResourceManager
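For context, the exports in .bashrc are along these lines; the exact install paths below are assumptions and should be adjusted to where the tarballs were actually extracted:
# assumed install locations
export HADOOP_HOME=/opt/hadoop-2.5.0
export PIG_HOME=/opt/pig-0.14.0
# point Pig at the Hadoop configuration so it uses the running cluster
export PIG_CLASSPATH=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PIG_HOME/bin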
I am running pig in mapreduce mode:
[root@localhost sbin]# pig
grunt> file = LOAD '/input/pig_input.csv' USING PigStorage(',') AS (col1,col2,col3);
grunt> dump file;
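The LOAD path refers to HDFS when Pig runs in mapreduce mode; a quick sanity check that the input file is actually there (command form assumes Hadoop 2.x) is:
hdfs dfs -ls /input/pig_input.csv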
And then I am getting the error:
java.io.IOException: java.net.ConnectException: Call From localhost.localdomain/127.0.0.1 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:334)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:419)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:532)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:183)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getTaskReports(HadoopShims.java:231)
at org.apache.pig.tools.pigstats.mapreduce.MRJobStats.addMapReduceStatistics(MRJobStats.java:352)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.addSuccessJobStats(MRPigStatsUtil.java:233)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.accumulateStats(MRPigStatsUtil.java:165)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:360)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
at org.apache.pig.PigServer.storeEx(PigServer.java:1034)
at org.apache.pig.PigServer.store(PigServer.java:997)
at org.apache.pig.PigServer.openIterator(PigServer.java:910)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:746)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:558)
at org.apache.pig.Main.main(Main.java:170)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.net.ConnectException: Call From localhost.localdomain/127.0.0.1 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy15.getJobReport(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:320)
... 26 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
at org.apache.hadoop.ipc.Client.call(Client.java:1382)
... 34 more
I haven't configured any config file in the Pig tarball. Do I need to make any changes to the Pig configuration so that it identifies the Hadoop installation?
In .bashrc, HADOOP_HOME is set properly.
Please help.
To start it, go to Hadoop's sbin directory and run the command:
mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
$HADOOP_CONF_DIR is the directory where Hadoop's config files such as hdfs-site.xml reside.
Found the solution.
The line:
Caused by: java.net.ConnectException: Call From localhost.localdomain/127.0.0.1 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
appears when the JobHistoryServer is not up.
So starting the JobHistoryServer eliminates this issue.
To start it, go to Hadoop's sbin directory and run the command:
mr-jobhistory-server.sh start
Do a jps to check if JobHistoryServer is up, and then re-execute your Pig commands.
Updated command (as of 2017):
./mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
Check that it is running with:
jps
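Once the daemon is started, it should show up in jps and listen on the address configured by mapreduce.jobhistory.address, whose default is 0.0.0.0:10020, the same port that appears in the stack trace above. A quick sanity check, assuming a Linux box with netstat available:
jps | grep JobHistoryServer
netstat -tlnp | grep 10020    # JobHistoryServer RPC port from the error message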
I am using HiveServer2 for connections, but after running a few jobs in Hive I am getting the error below in the Hive log.
2014-12-30 05:55:54,594 ERROR server.TThreadPoolServer (TThreadPoolServer.java:run(234)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:209)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
After this I am not able to connect to HiveServer2; only after restarting the HiveServer2 service am I able to connect again.
What is the root cause of this issue?
Thanks
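For context, the clients connect over the standard HiveServer2 JDBC endpoint; a typical beeline connection, where the host, port, and user name are assumptions, looks like:
beeline -u jdbc:hive2://localhost:10000 -n hiveuser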
I got Phoenix 4.0 running on HBase 0.98/Hadoop 2.3.0 and was impressed by the command-line tools.
As a second step, I followed the description on the web page to connect to Phoenix using its bundled JDBC driver.
When I try to connect, I get the following exception message (on the SQuirreL side):
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
at java.util.concurrent.FutureTask.report(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:171)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$000(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$1.run(OpenConnectionCommand.java:104)
... 5 more
Caused by: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:309)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:254)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1446)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:131)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:112)
at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:167)
... 7 more
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:416)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:309)
at org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:252)
... 12 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:414)
... 15 more
Caused by: java.lang.RuntimeException: Socket Factory class not found: java.lang.ClassNotFoundException: Class org.apache.hadoop.net.StandardSocketFactory not found
at org.apache.hadoop.net.NetUtils.getSocketFactoryFromProperty(NetUtils.java:142)
at org.apache.hadoop.net.NetUtils.getDefaultSocketFactory(NetUtils.java:122)
at org.apache.hadoop.hbase.ipc.RpcClient.<init>(RpcClient.java:1293)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:664)
... 20 more
I double-checked the jar files with ClassFinder to be sure that the class org.apache.hadoop.net.StandardSocketFactory IS in the classpath.
What can I do to get SQuirreL connected to Phoenix?
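For reference, the same check can be made from the command line against whichever jar is expected to provide the class; the jar name below is an assumption:
unzip -l hadoop-common-2.3.0.jar | grep StandardSocketFactory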
Update:
I saw in the zookeeper log on the server side that the network communication started:
2014-05-28 06:24:29,411 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /192.168.1.106:58172
2014-05-28 06:24:29,412 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /192.168.1.106:58172
2014-05-28 06:24:29,518 INFO [SyncThread:0] server.ZooKeeperServer: Established session 0x146413f6c3a000c with negotiated timeout 90000 for client /192.168.1.106:58172
I solved the problem by replacing the downloaded binary version 4.0 of Phoenix with a 4.1 snapshot version that I built myself from the source cloned via git from
http://git.apache.org/incubator-phoenix.git/
After the successful build, I extracted the tarball from the assembly subdirectory and copied the following jars to HBase 0.98's lib directory:
phoenix-core-4.1.0-incubating-SNAPSHOT.jar
phoenix-flume-4.1.0-incubating-SNAPSHOT.jar
phoenix-pig-4.1.0-incubating-SNAPSHOT.jar
In SQuirreL I used just phoenix-4.1.0-incubating-SNAPSHOT-client.jar as an extra path to get the driver running.
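A rough sketch of those steps as shell commands; the checkout directory, Maven invocation, and target paths are assumptions and may differ between snapshot builds:
git clone http://git.apache.org/incubator-phoenix.git/
cd incubator-phoenix
mvn clean package -DskipTests          # builds the tarball under the assembly module
# extract the assembly tarball, then copy the three jars listed above into HBase's lib dir
cp phoenix-core-4.1.0-incubating-SNAPSHOT.jar \
   phoenix-flume-4.1.0-incubating-SNAPSHOT.jar \
   phoenix-pig-4.1.0-incubating-SNAPSHOT.jar  /path/to/hbase-0.98/lib/
# restart HBase so the region servers pick up the Phoenix server jars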
I followed the instructions on this page to set up a single-machine YARN cluster: http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-project-dist/hadoop-common/SingleCluster.html
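The example jar referred to below is the bundled MapReduce examples jar; the invocation is roughly the following, with the jar path and version being assumptions:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.5-alpha.jar randomwriter out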
But when I run the example jar, the job hangs, and when I check the logs I find the following errors (the first is the client-side log, the second is the ResourceManager log).
(Client side)
13/10/18 17:30:36 ERROR security.UserGroupInformation: PriviledgedActionException as:zhangj82 (auth:SIMPLE) cause:java.io.IOException
java.io.IOException
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:326)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:385)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:526)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:313)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:310)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:310)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:594)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1277)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1239)
at org.apache.hadoop.examples.RandomWriter.run(RandomWriter.java:283)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.RandomWriter.main(RandomWriter.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Resource Manager
2013-10-18 17:35:26,128 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8040: readAndProcess threw exception javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier] from client 127.0.0.1. Count of bytes read: 0
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier]
at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:594)
at com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:244)
at org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:1173)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1350)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:726)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:525)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:500)
Caused by: org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier
at org.apache.hadoop.security.SaslRpcServer.getIdentifier(SaslRpcServer.java:112)
at org.apache.hadoop.security.SaslRpcServer$SaslDigestCallbackHandler.handle(SaslRpcServer.java:217)
at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:585)
... 6 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at org.apache.hadoop.io.Text.readFields(Text.java:306)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.readFields(AbstractDelegationTokenIdentifier.java:186)
at org.apache.hadoop.security.SaslRpcServer.getIdentifier(SaslRpcServer.java:109)
... 8 more
2013-10-18 17:35:26,308 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1382088798449_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING
This bug has been raised in the Hadoop issue tracker. Basically, to overcome it you can either apply a source-level patch as described for BlockTokenSecretManager or update to the latest version of Hadoop.
When I try to start HBase, HMaster does not run and I get the error below. From googling it appears to be a classpath mismatch, so I copied the Hadoop jar into hbase/lib, but I am still getting the error below.
FATAL master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy10.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:344)
at java.lang.Thread.run(Thread.java:722)
If it is a classpath error, try adding $HBASE_HOME/conf/ and $HBASE_HOME/lib/ to your $HADOOP_CLASSPATH. The latter can be set in $HADOOP_HOME/conf/hadoop-env.sh. For me it works that way.
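A minimal sketch of that setting in $HADOOP_HOME/conf/hadoop-env.sh, assuming HBASE_HOME points at the HBase install directory (the path below is an assumption):
# hadoop-env.sh - make the HBase config and jars visible on Hadoop's classpath
export HBASE_HOME=/opt/hbase
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/conf/:$HBASE_HOME/lib/*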