Configuring hadoop-2.0.2-alpha with hbase-0.94.2 - hadoop

When I try to start HBase, the HMaster does not run and I get the error below. From googling, it appears to be a classpath mismatch, so I copied the Hadoop jar into hbase/lib, but I still get the same error.
FATAL master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy10.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:344)
at java.lang.Thread.run(Thread.java:722)

If it is a classpath error, try adding $HBASE_HOME/conf/ and $HBASE_HOME/lib/ to your $HADOOP_CLASSPATH. The latter can be set in $HADOOP_HOME/conf/hadoop-env.sh. It works for me that way.
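A minimal sketch of that hadoop-env.sh change (the $HBASE_HOME value below is an assumption; point it at your actual HBase installation):

# in $HADOOP_HOME/conf/hadoop-env.sh
export HBASE_HOME=/usr/local/hbase        # assumed install location
# append HBase's conf directory and jars to Hadoop's classpath
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/conf:$HBASE_HOME/lib/*

Restart the Hadoop and HBase daemons afterwards so the new classpath takes effect.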

Related

Talend Bigdata & Hortonworks sandbox

I am trying to write data from Talend to a Hortonworks Sandbox Hadoop setup.
While executing the Talend job to send the data, it generates the following exception and warning in the console. The Talend job creates the file in the Hortonworks environment but is not able to write any data to it.
[statistics] connecting to socket on port 3740
[statistics] connected
[WARN ]: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in component tHDFSOutput_1
java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:697)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
[statistics] disconnected
[ERROR]: org.apache.hadoop.hdfs.DFSClient - Failed to close inode 17106
java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:697)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
Job test ended at 16:50 18/03/2017. [exit code=1]

When spark-shell launches, it throws a RuntimeException for SimpleUserGroupsMapping

I installed HDFS and YARN through Ambari and am trying to deploy Spark on YARN.
But when I execute the script below, Spark fails with an error.
How do I deploy Spark on YARN? Would you mind explaining it step by step?
I set HADOOP_CONF_DIR and YARN_CONF_DIR in spark-env.sh and spark.master in spark-defaults.conf, roughly as sketched below.
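The relevant settings look roughly like this (the paths are placeholders, not my exact values):

# conf/spark-env.sh
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
# conf/spark-defaults.conf contains: spark.master yarn-client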
Executed script:
./bin/spark-shell --master yarn-client
Error:
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.security.SimpleUserGroupsMapping not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2106)
at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2136)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2136)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2136)
at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:214)
at org.apache.spark.repl.SparkIMain.<init>(SparkIMain.scala:118)
at org.apache.spark.repl.SparkILoop$SparkILoopInterpreter.<init>(SparkILoop.scala:187)
at org.apache.spark.repl.SparkILoop.createInterpreter(SparkILoop.scala:217)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:949)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.security.SimpleUserGroupsMapping not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2074)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2098)
... 33 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.security.SimpleUserGroupsMapping not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1980)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2072)
... 34 more
16/02/19 22:07:20 INFO util.ShutdownHookManager: Shutdown hook called
16/02/19 22:07:20 INFO util.ShutdownHookManager: Deleting directory
Check whether the class is present on your Hadoop classpath:
find $HADOOP_HOME -name "*.jar" -print0 | xargs -0 grep -l "org.apache.hadoop.security.SimpleUserGroupsMapping"
If it is present, check whether the class is also present in the Spark distribution:
grep "org.apache.hadoop.security.SimpleUserGroupsMapping" $SPARK_HOME/lib/*
If the jar is present in the Hadoop distribution, try copying it to $SPARK_HOME/lib/.
If none of the above works, try changing
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</value>
</property>
in core-site.xml and restarting Hadoop and Spark.
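Once Hadoop is back up, you can confirm which group mapping class is actually in effect (assuming the hdfs command is on your PATH):

# print the effective value of the group-mapping setting
hdfs getconf -confKey hadoop.security.group.mapping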

HDFS IO error org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4

I am using Flume 1.6.0 in one virtual machine and Hadoop 2.7.1 in another virtual machine.
When I send Avro events to Flume 1.6.0 and it tries to write to the Hadoop 2.7.1 HDFS, the following exception occurs:
(SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)] HDFS IO error
org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1113)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:243)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Thread
I tried adding these jars to the Flume lib folder:
hadoop-common-2.7.1.jar
avro-1.7.7.jar instead of avro-1.7.4.jar
avro-ipc-1.7.7.jar instead of avro-ipc-1.7.4.jar
guava-18.0.jar instead of guava-11.0.2.jar
But the problem is still unsolved.
The Flume NG HDFS sink depends on the following jar files:
hadoop-auth-2.4.0.jar
hadoop-common-2.4.0.jar
hadoop-hdfs-2.4.0.jar
commons-configuration-1.10.jar
These are not included in the Flume lib folder.
Adding these jars resolved the exception.
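A rough sketch of pulling those jars from an existing Hadoop installation into Flume's lib folder ($HADOOP_HOME, $FLUME_HOME and the share/hadoop layout are assumptions; match the versions to your cluster):

# copy the HDFS sink's Hadoop dependencies into Flume's lib folder
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-*.jar             $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-*.jar           $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-*.jar                 $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/commons-configuration-*.jar $FLUME_HOME/lib/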

Squirrel access to Phoenix/HBase

I got Phoenix 4.0 running on HBase 0.98/Hadoop 2.3.0 and was impressed by the command-line tools.
As a second step I followed the description on the web page for connecting to Phoenix using its bundled JDBC driver.
When I try to connect, I get this exception message (on the Squirrel side):
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
at java.util.concurrent.FutureTask.report(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:171)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$000(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$1.run(OpenConnectionCommand.java:104)
... 5 more
Caused by: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:309)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:133)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:254)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1446)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:131)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:112)
at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:167)
... 7 more
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:416)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:309)
at org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:252)
... 12 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:414)
... 15 more
Caused by: java.lang.RuntimeException: Socket Factory class not found: java.lang.ClassNotFoundException: Class org.apache.hadoop.net.StandardSocketFactory not found
at org.apache.hadoop.net.NetUtils.getSocketFactoryFromProperty(NetUtils.java:142)
at org.apache.hadoop.net.NetUtils.getDefaultSocketFactory(NetUtils.java:122)
at org.apache.hadoop.hbase.ipc.RpcClient.<init>(RpcClient.java:1293)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:664)
... 20 more
I double-checked the jar files with ClassFinder to be sure that the class org.apache.hadoop.net.StandardSocketFactory IS on the classpath.
What can I do to get Squirrel connected to Phoenix?
Update:
I saw in the ZooKeeper log on the server side that the network communication started:
2014-05-28 06:24:29,411 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /192.168.1.106:58172
2014-05-28 06:24:29,412 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /192.168.1.106:58172
2014-05-28 06:24:29,518 INFO [SyncThread:0] server.ZooKeeperServer: Established session 0x146413f6c3a000c with negotiated timeout 90000 for client /192.168.1.106:58172
I solved the problem by replacing the downloaded binary version 4.0 of Phoenix with the 4.1 snapshot version, which I built myself from the source cloned via git from
http://git.apache.org/incubator-phoenix.git/
After the successful build I extracted the tarball from the assembly subdirectory and copied the following jars to HBase 0.98's lib dir:
phoenix-core-4.1.0-incubating-SNAPSHOT.jar
phoenix-flume-4.1.0-incubating-SNAPSHOT.jar
phoenix-pig-4.1.0-incubating-SNAPSHOT.jar
In Squirrel I used just phoenix-4.1.0-incubating-SNAPSHOT-client.jar as an extra classpath entry to get the driver running.
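Roughly, the build-and-copy steps were as follows (the mvn flags, tarball name and extracted directory layout are assumptions based on the description above):

# build Phoenix from source and install the server-side jars into HBase
git clone http://git.apache.org/incubator-phoenix.git/
cd incubator-phoenix
mvn clean package -DskipTests
# extract the tarball produced under the assembly subdirectory
tar xzf phoenix-assembly/target/phoenix-*-incubating-SNAPSHOT.tar.gz
cp phoenix-*-incubating-SNAPSHOT/phoenix-core-*.jar \
   phoenix-*-incubating-SNAPSHOT/phoenix-flume-*.jar \
   phoenix-*-incubating-SNAPSHOT/phoenix-pig-*.jar \
   $HBASE_HOME/lib/
# restart HBase afterwards so it loads the new jars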

Hadoop YARN setup authentication issue

I followed the instructions on this page to install a single-machine YARN cluster: http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-project-dist/hadoop-common/SingleCluster.html
But when I run the example jar, the job hangs. I checked the logs and found the following errors (the first is the client-side log, the second is the resource manager log):
(Client side)
13/10/18 17:30:36 ERROR security.UserGroupInformation: PriviledgedActionException as:zhangj82 (auth:SIMPLE) cause:java.io.IOException
java.io.IOException
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:326)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:385)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:526)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:313)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:310)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:310)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:594)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1277)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1239)
at org.apache.hadoop.examples.RandomWriter.run(RandomWriter.java:283)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.RandomWriter.main(RandomWriter.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Resource Manager
2013-10-18 17:35:26,128 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8040: readAndProcess threw exception javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier] from client 127.0.0.1. Count of bytes read: 0
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier]
at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:594)
at com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:244)
at org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:1173)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1350)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:726)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:525)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:500)
Caused by: org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't de-serialize tokenIdentifier
at org.apache.hadoop.security.SaslRpcServer.getIdentifier(SaslRpcServer.java:112)
at org.apache.hadoop.security.SaslRpcServer$SaslDigestCallbackHandler.handle(SaslRpcServer.java:217)
at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:585)
... 6 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at org.apache.hadoop.io.Text.readFields(Text.java:306)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.readFields(AbstractDelegationTokenIdentifier.java:186)
at org.apache.hadoop.security.SaslRpcServer.getIdentifier(SaslRpcServer.java:109)
... 8 more
2013-10-18 17:35:26,308 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1382088798449_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING
This bug has been raised in the Hadoop issue tracker. Basically, to overcome it you can apply a source-level patch as described for BlockTokenSecretManager, or update to a newer version of Hadoop.
