I am using Windows 10, and the NodeManager is also not starting correctly. I see the following errors:
The ResourceManager is not connecting and is failing due to:
2021-07-07 11:01:52,473 ERROR delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2021-07-07 11:01:52,493 INFO handler.ContextHandler: Stopped o.e.j.w.WebAppContext#756b58a7{/,null,UNAVAILABLE}{/cluster}
2021-07-07 11:01:52,504 INFO server.AbstractConnector: Stopped ServerConnector#633a2e99{HTTP/1.1,[http/1.1]}{0.0.0.0:8088}
2021-07-07 11:01:52,504 INFO handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#7b420819{/static,jar:file:/F:/hadoop_new/share/hadoop/yarn/hadoop-yarn-common-3.2.1.jar!/webapps/static,UNAVAILABLE}
2021-07-07 11:01:52,507 INFO handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#c9d0d6{/logs,file:///F:/hadoop_new/logs/,UNAVAILABLE}
2021-07-07 11:01:52,541 INFO ipc.Server: Stopping server on 8033
2021-07-07 11:01:52,543 INFO ipc.Server: Stopping IPC Server listener on 8033
2021-07-07 11:01:52,544 INFO resourcemanager.ResourceManager: Transitioning to standby state
2021-07-07 11:01:52,544 INFO ipc.Server: Stopping IPC Server Responder
2021-07-07 11:01:52,550 INFO resourcemanager.ResourceManager: Transitioned to standby state
2021-07-07 11:01:52,554 FATAL resourcemanager.ResourceManager: Error starting ResourceManager
org.apache.hadoop.service.ServiceStateException: 5: Access is denied.
and
2021-07-07 11:01:51,625 INFO recovery.RMStateStore: Storing RMDTMasterKey.
2021-07-07 11:01:52,158 INFO store.AbstractFSNodeStore: Created store directory :file:/tmp/hadoop-yarn-Abby/node-attribute
2021-07-07 11:01:52,186 INFO service.AbstractService: Service NodeAttributesManagerImpl failed in state STARTED
5: Access is denied.
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileOutputStreamWithMode(NativeIO.java:595)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:246)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:232)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:331)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:320)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:305)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
at org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore.recoverFromStore(AbstractFSNodeStore.java:160)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.FileSystemNodeAttributeStore.recover(FileSystemNodeAttributeStore.java:95)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl.initNodeAttributeStore(NodeAttributesManagerImpl.java:140)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl.serviceStart(NodeAttributesManagerImpl.java:123)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:895)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1262)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1303)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1299)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1299)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1350)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1535)
2021-07-07 11:01:52,212 INFO service.AbstractService: Service RMActiveServices failed in state STARTED
org.apache.hadoop.service.ServiceStateException: 5: Access is denied.
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:895)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1262)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1303)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1299)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1299)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1350)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1535)
Caused by: 5: Access is denied.
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileOutputStreamWithMode(NativeIO.java:595)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:246)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:232)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:331)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:320)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:305)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
at org.apache.hadoop.yarn.nodelabels.store.AbstractFSNodeStore.recoverFromStore(AbstractFSNodeStore.java:160)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.FileSystemNodeAttributeStore.recover(FileSystemNodeAttributeStore.java:95)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl.initNodeAttributeStore(NodeAttributesManagerImpl.java:140)
at org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NodeAttributesManagerImpl.serviceStart(NodeAttributesManagerImpl.java:123)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
... 13 more
You are getting an access-denied error, so you may need to run the services as a different user. Try starting the services from an elevated prompt or as a user with more access, such as Administrator on Windows.
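The failing call in the trace is NodeAttributesManagerImpl recovering its store under file:/tmp/hadoop-yarn-Abby/node-attribute, so an alternative to elevating the whole process is to relocate that store to a directory your user can write. A minimal sketch, assuming Hadoop 3.2's yarn.node-attribute.fs-store.root-dir property and a writable path on F: (the exact directory is only an example), added to etc/hadoop/yarn-site.xml:
<property>
  <!-- point the node-attribute FS store at a directory the YARN user owns -->
  <name>yarn.node-attribute.fs-store.root-dir</name>
  <value>file:///F:/hadoop_new/tmp/node-attribute</value>
</property>
Restart the ResourceManager afterwards so the store is recreated in the new location.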
Summary
Installing Hadoop following this guide, everything goes fine until Step 7 (starting NameNode and DataNode), but when I try Step 8 (starting NodeManager and ResourceManager), the two cmd windows open up but each fails with the following exceptions.
nodemanager cmd:
2022-11-18 18:29:44,278 ERROR nodemanager.NodeManager: Error starting NodeManager
java.lang.ExceptionInInitializerError
at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:64)
at com.google.inject.internal.BytecodeGen.newFastClass(BytecodeGen.java:204)
at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.<init>(ProviderMethod.java:256)
at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:71)
at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:275)
at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:144)
at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:123)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:349)
at com.google.inject.AbstractModule.install(AbstractModule.java:122)
at com.google.inject.servlet.ServletModule.configure(ServletModule.java:52)
at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements.getElements(Elements.java:110)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:387)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:432)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:428)
at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:112)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:975)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:1054)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module #7c0c77c7
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:200)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:194)
at com.google.inject.internal.cglib.core.$ReflectUtils$2.run(ReflectUtils.java:56)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:318)
at com.google.inject.internal.cglib.core.$ReflectUtils.<clinit>(ReflectUtils.java:46)
... 32 more
2022-11-18 18:29:44,286 INFO ipc.Server: Stopping server on 57727
2022-11-18 18:29:44,287 INFO ipc.Server: Stopping IPC Server listener on 0
2022-11-18 18:29:44,287 INFO ipc.Server: Stopping IPC Server Responder
2022-11-18 18:29:44,288 WARN monitor.ContainersMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is interrupted. Exiting.
2022-11-18 18:29:44,297 INFO ipc.Server: Stopping server on 8040
2022-11-18 18:29:44,298 INFO ipc.Server: Stopping IPC Server listener on 8040
2022-11-18 18:29:44,298 INFO ipc.Server: Stopping IPC Server Responder
2022-11-18 18:29:44,299 WARN nodemanager.NodeResourceMonitorImpl: org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl is interrupted. Exiting.
2022-11-18 18:29:44,299 INFO localizer.ResourceLocalizationService: Public cache exiting
2022-11-18 18:29:44,299 INFO impl.MetricsSystemImpl: Stopping NodeManager metrics system...
2022-11-18 18:29:44,300 INFO impl.MetricsSystemImpl: NodeManager metrics system stopped.
2022-11-18 18:29:44,301 INFO impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
2022-11-18 18:29:44,301 INFO nodemanager.NodeManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at my-computer-name/xxx.xxx.xxx.xxx
************************************************************/
resourcemanager cmd:
2022-11-18 18:29:43,321 FATAL resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.ExceptionInInitializerError
at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:64)
at com.google.inject.internal.BytecodeGen.newFastClass(BytecodeGen.java:204)
at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.<init>(ProviderMethod.java:256)
at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:71)
at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:275)
at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:144)
at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:123)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:349)
at com.google.inject.AbstractModule.install(AbstractModule.java:122)
at com.google.inject.servlet.ServletModule.configure(ServletModule.java:52)
at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements.getElements(Elements.java:110)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:387)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:432)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1231)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1340)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1535)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module #222545dc
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:200)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:194)
at com.google.inject.internal.cglib.core.$ReflectUtils$2.run(ReflectUtils.java:56)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:318)
at com.google.inject.internal.cglib.core.$ReflectUtils.<clinit>(ReflectUtils.java:46)
... 29 more
2022-11-18 18:29:43,329 INFO resourcemanager.ResourceManager: Transitioning to standby state
2022-11-18 18:29:43,329 INFO resourcemanager.ResourceManager: Transitioned to standby state
2022-11-18 18:29:43,330 INFO resourcemanager.ResourceManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down ResourceManager at my-computer-name/xxx.xxx.xxx.xxx
************************************************************/
Details of my Attempt
Using JDK 18.0.2
Environment variable JAVA_HOME is C:\PROGRA~1\Java\jdk-18.0.2 (because "Program Files" had some issues earlier)
I do not have yarn package manager installed
Reader's Note
If any important details are missing, let me know and I will add them.
The issue was actually the JDK version; I used the one specified in the guide and it worked just fine.
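For context, the stack trace above fails inside Guice's bundled cglib, which reflects into java.lang.ClassLoader; on JDK 16 and later that package is no longer open to unnamed modules, hence the InaccessibleObjectException. If downgrading is impossible, a hedged, untested sketch is to open java.lang before starting the daemons (whether the start scripts honor HADOOP_OPTS for this is an assumption, and the variable may differ per daemon):
:: Windows cmd, run before start-yarn.cmd (HADOOP_OPTS pickup is an assumption)
set HADOOP_OPTS=--add-opens java.base/java.lang=ALL-UNNAMED %HADOOP_OPTS%
Using the JDK version the guide specifies remains the cleaner fix, as noted above.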
Recently I have been trying to run a wordaccount job through MapReduce on Hadoop 2.7.1, but the job always gets stuck at map 0% reduce 0%. Here is all the information:
No configs found; falling back on auto-configuration
No configs specified for hadoop runner
Looking for hadoop binary in /usr/local/hadoop/bin...
Found hadoop binary: /usr/local/hadoop/bin/hadoop
Using Hadoop version 2.7.1
Looking for Hadoop streaming jar in /usr/local/hadoop...
Found Hadoop streaming jar: /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.1.jar
Creating temp directory /tmp/wordaccount.xjj.20220524.013439.681080
uploading working dir files to hdfs:///user/xjj/tmp/mrjob/wordaccount.xjj.20220524.013439.681080/files/wd...
Copying other local files to hdfs:///user/xjj/tmp/mrjob/wordaccount.xjj.20220524.013439.681080/files/
Running step 1 of 1...
packageJobJar: [/tmp/hadoop-unjar3955585943094314924/] [] /tmp/streamjob2959762167969354976.jar tmpDir=null
Connecting to ResourceManager at /0.0.0.0:8032
Connecting to ResourceManager at /0.0.0.0:8032
Total input paths to process : 1
number of splits:2
Submitting tokens for job: job_1653356019342_0001
Submitted application application_1653356019342_0001
The url to track the job: http://master:8088/proxy/application_1653356019342_0001/
Running job: job_1653356019342_0001
Job job_1653356019342_0001 running in uber mode : false
map 0% reduce 0%
I entered the URL and checked the job (screenshot of the tracking page omitted).
Then I checked the resourcemanager-master.log:
2022-05-24 09:47:09,400 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error cleaning master
java.net.ConnectException: Call From master/192.168.70.128 to master:36309 failed on connection exception: java.net.ConnectException: 拒绝连接 (Connection refused); For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
at org.apache.hadoop.ipc.Client.call(Client.java:1407)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy32.stopContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.stopContainers(ContainerManagementProtocolPBClientImpl.java:110)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.cleanup(AMLauncher.java:139)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: 拒绝连接 (Connection refused)
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1446)
... 9 more
2022-05-24 09:49:03,136 INFO logs: Aliases are enabled
and the nodemanager-master.log:
2022-05-24 09:35:00,684 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1653356019342_0001
2022-05-24 09:35:00,684 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1653356019342_0001
2022-05-24 09:35:00,684 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce_shuffle
2022-05-24 09:35:00,694 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1653356019342_0001
2022-05-24 09:35:00,697 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1653356019342_0001
2022-05-24 09:35:00,697 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1653356019342_0001
2022-05-24 09:35:00,697 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce_shuffle
2022-05-24 09:35:00,697 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1653356019342_0001
2022-05-24 09:35:00,698 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1653356019342_0001_01_000003 transitioned from LOCALIZING to LOCALIZED
2022-05-24 09:35:00,698 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1653356019342_0001_01_000002 transitioned from LOCALIZING to LOCALIZED
2022-05-24 09:35:00,735 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1653356019342_0001_01_000002 transitioned from LOCALIZED to RUNNING
2022-05-24 09:35:00,735 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Neither virutal-memory nor physical-memory monitoring is needed. Not running the monitor-thread
2022-05-24 09:35:00,737 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1653356019342_0001_01_000003 transitioned from LOCALIZED to RUNNING
2022-05-24 09:35:00,737 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Neither virutal-memory nor physical-memory monitoring is needed. Not running the monitor-thread
2022-05-24 09:35:00,743 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /usr/local/hadoop/tmp/nm-local-dir/usercache/xjj/appcache/application_1653356019342_0001/container_1653356019342_0001_01_000002/default_container_executor.sh]
2022-05-24 09:35:00,744 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /usr/local/hadoop/tmp/nm-local-dir/usercache/xjj/appcache/application_1653356019342_0001/container_1653356019342_0001_01_000003/default_container_executor.sh]
So what could be the problem? Connection refused or not enough memory? Thanks for your help.
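Not an answer, but two things worth checking given that log. First, confirm the NodeManagers are registered and the nodes have memory left for containers; a small diagnostic sketch using standard commands:
jps                     # NameNode, DataNode, ResourceManager, NodeManager should all be listed
yarn node -list -all    # every NodeManager should be in the RUNNING state
free -m                 # check how much memory is actually available for containers
Second, the "Call From master/192.168.70.128 to master:36309 failed" line is the ResourceManager trying to stop containers on a NodeManager port (an ephemeral one by default), and a refusal there often means the NodeManager restarted or died, which would fit a node running out of memory.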
When I tried to connect to the NiFi UI using http://localhost:8080/nifi, I got the error below:
org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.net.BindException: Address already in use: bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:331)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:299)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:398)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:935)
at org.apache.nifi.NiFi.<init>(NiFi.java:158)
at org.apache.nifi.NiFi.<init>(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:297)
2020-02-27 11:51:11,834 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2020-02-27 11:51:11,836 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector#355ee205{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
2020-02-27 11:51:11,837 INFO [Thread-1] org.eclipse.jetty.server.session node 0 Stopped scavenging
Can anyone suggest what the cause of this issue is?
NiFi version: 1.9.2, installed on a Windows machine.
Here are the NiFi status logs:
12:33:16.886 [main] DEBUG org.apache.nifi.bootstrap.NotificationServiceManager - Found 0 service elements
12:33:16.896 [main] INFO org.apache.nifi.bootstrap.NotificationServiceManager - Successfully loaded the following 0 services: []
12:33:16.897 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_STARTED
12:33:16.897 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_STOPPED
12:33:16.898 [main] INFO org.apache.nifi.bootstrap.RunNiFi - Registered no Notification Services for Notification Type NIFI_DIED
12:33:16.899 [main] DEBUG org.apache.nifi.bootstrap.Command - Status File:
12:33:16.900 [main] DEBUG org.apache.nifi.bootstrap.Command - Properties: {pid=9724}
Failed to determine if Process 9724 is running; assuming that it is not
12:33:16.902 [main] INFO org.apache.nifi.bootstrap.Command - Apache NiFi is not running
The port used by NiFi is already in use by another process.
You can change the web server port in conf/nifi.properties.
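A concrete way to act on that, sketched for Windows (8081 is only an example of a free port, and <pid> is a placeholder for whatever the first command prints):
netstat -ano | findstr :8080
tasklist /FI "PID eq <pid>"
Once you know which process is holding 8080, either stop it or edit conf/nifi.properties, setting nifi.web.http.port=8081 (the standard HTTP-port property in NiFi 1.x), and restart NiFi.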
We have created a high-availability Hadoop cluster with Azure Blob Storage as the default file system instead of HDFS, following https://hadoop.apache.org/docs/stable/hadoop-azure/index.html.
The Hive Thrift service started successfully, but the Spark Thrift service did not.
I am able to start spark-shell and connect to Blob Storage by referencing the hadoop-azure.jar file, but I cannot start the Thrift service.
Command used to start the Spark Thrift server:
spark-submit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --master yarn
Following are the error details.
17/04/26 10:19:32 INFO metastore: Connected to metastore.
Exception in thread "main" java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981)
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:878)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:47)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:81)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:978)
... 22 more
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:169)
at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:101)
at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:100)
at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:157)
at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:32)
... 27 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:166)
... 35 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:366)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:270)
at org.apache.spark.sql.hive.HiveExternalCatalog.<init>(HiveExternalCatalog.scala:65)
... 40 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.fs.azure.AzureException: java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:192)
... 48 more
Caused by: org.apache.hadoop.fs.azure.AzureException: java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:1930)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1592)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:596)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
... 49 more
Caused by: java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:113)
at org.apache.hadoop.fs.azure.StorageInterfaceImpl$WrappingIterator.hasNext(StorageInterfaceImpl.java:128)
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:1909)
... 54 more
Caused by: com.microsoft.azure.storage.StorageException: The server encountered an unknown failure: OK
at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:178)
at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:273)
at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:109)
... 56 more
Caused by: java.lang.ClassCastException: org.apache.xerces.parsers.XIncludeAwareParserConfiguration cannot be cast to org.apache.xerces.xni.parser.XMLParserConfiguration
at org.apache.xerces.parsers.SAXParser.<init>(Unknown Source)
at org.apache.xerces.parsers.SAXParser.<init>(Unknown Source)
at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.<init>(Unknown Source)
at org.apache.xerces.jaxp.SAXParserImpl.<init>(Unknown Source)
at org.apache.xerces.jaxp.SAXParserFactoryImpl.newSAXParser(Unknown Source)
at com.microsoft.azure.storage.core.Utility.getSAXParser(Utility.java:546)
at com.microsoft.azure.storage.blob.BlobListHandler.getBlobList(BlobListHandler.java:72)
at com.microsoft.azure.storage.blob.CloudBlobContainer$6.postProcessResponse(CloudBlobContainer.java:1253)
at com.microsoft.azure.storage.blob.CloudBlobContainer$6.postProcessResponse(CloudBlobContainer.java:1217)
at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:148)
... 57 more
17/04/26 10:19:33 INFO SparkContext: Invoking stop() from shutdown hook
17/04/26 10:19:33 INFO SparkUI: Stopped Spark web UI at http://10.0.0.4:4040
17/04/26 10:19:33 INFO YarnClientSchedulerBackend: Interrupting monitor thread
17/04/26 10:19:33 INFO YarnClientSchedulerBackend: Shutting down all executors
17/04/26 10:19:33 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
17/04/26 10:19:33 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices (serviceOption=None, services=List(), started=false)
17/04/26 10:19:33 INFO YarnClientSchedulerBackend: Stopped
17/04/26 10:19:33 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/04/26 10:19:33 INFO MemoryStore: MemoryStore cleared
17/04/26 10:19:33 INFO BlockManager: BlockManager stopped
17/04/26 10:19:33 INFO BlockManagerMaster: BlockManagerMaster stopped
17/04/26 10:19:33 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/04/26 10:19:33 INFO SparkContext: Successfully stopped SparkContext
17/04/26 10:19:33 INFO ShutdownHookManager: Shutdown hook called
17/04/26 10:19:33 INFO ShutdownHookManager: Deleting directory C:\Users\labuser\AppData\Local\Temp\2\spark-11c406ec-2c53-4042-b336-9d1164c3c6f9
17/04/26 10:19:33 INFO MetricsSystemImpl: Stopping azure-file-system metrics system...
17/04/26 10:19:33 INFO MetricsSystemImpl: azure-file-system metrics system stopped.
17/04/26 10:19:33 INFO MetricsSystemImpl: azure-file-system metrics system shutdown complete.
Please help me resolve this issue; any help would be greatly appreciated.
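One observation: the innermost "Caused by" above is a Xerces ClassCastException, which usually indicates two different Xerces implementations colliding on the classpath rather than a problem with Azure storage itself. A hedged way to hunt for the duplicate on a Unix-like node (adjust the jar directory for your layout; $SPARK_HOME/jars is the Spark 2.x convention):
# list every jar that bundles its own Xerces SAXParser
for j in "$SPARK_HOME"/jars/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'org/apache/xerces/parsers/SAXParser' && echo "$j"
done
If a stray xercesImpl jar turns up alongside another provider, removing or excluding it is the usual remedy.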
I wrote my own scheduler in Hadoop 2.6.0 by inheriting from AbstractYarnScheduler.
It compiles successfully, but when I submit a job in Hadoop, the RM breaks down.
Here is the log on the master node:
2015-07-19 13:31:59,931 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1437327062733_0001 State change from NEW to NEW_SAVING
2015-07-19 13:31:59,932 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/proto/YarnServerResourceManagerServiceProtos$ApplicationStateDataProtoOrBuilder
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2013)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1978)
at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:56)
at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.records.ApplicationStateData.newInstance(ApplicationStateData.java:43)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.records.ApplicationStateData.newInstance(ApplicationStateData.java:56)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:131)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:1)
at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:787)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:839)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.proto.YarnServerResourceManagerServiceProtos$ApplicationStateDataProtoOrBuilder
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 20 more
2015-07-19 13:31:59,934 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
2015-07-19 13:31:59,937 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2015-07-19 13:31:59,938 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup#0.0.0.0:8088
2015-07-19 13:31:59,938 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2015-07-19 13:31:59,938 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2015-07-19 13:31:59,944 WARN org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8032, call org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplicationReport from 129.10.58.155:50992 Call#42 Retry#0
java.lang.NoSuchMethodError: org.apache.hadoop.yarn.server.utils.BuilderUtils.newApplicationResourceUsageReport(IILorg/apache/hadoop/yarn/api/records/Resource;Lorg/apache/hadoop/yarn/api/records/Resource;Lorg/apache/hadoop/yarn/api/records/Resource;)Lorg/apache/hadoop/yarn/api/records/ApplicationResourceUsageReport;
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.<clinit>(RMServerUtils.java:237)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:520)
at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:296)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:170)
at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:401)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
2015-07-19 13:32:00,039 INFO org.apache.hadoop.ipc.Server: Stopping server on 8032
2015-07-19 13:32:00,040 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8032
2015-07-19 13:32:00,040 INFO org.apache.hadoop.ipc.Server: Stopping server on 8033
2015-07-19 13:32:00,041 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8033
2015-07-19 13:32:00,041 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2015-07-19 13:32:00,041 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
2015-07-19 13:32:00,041 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ResourceManager metrics system...
2015-07-19 13:32:00,042 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2015-07-19 13:32:00,042 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system stopped.
2015-07-19 13:32:00,042 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system shutdown complete.
2015-07-19 13:32:00,042 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, igonring any new events.
2015-07-19 13:32:01,042 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain.
2015-07-19 13:32:02,043 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain.
2015-07-19 13:32:03,043 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain.
2015-07-19 13:32:04,043 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain.
2015-07-19 13:32:05,044 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain.
2015-07-19 13:32:06,044 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain.
2015-07-19 13:32:07,044 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain.
2015-07-19 13:32:08,044 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain.
According to the error information, the system cannot find the class org.apache.hadoop.yarn.proto.YarnServerResourceManagerServiceProtos$ApplicationStateDataProtoOrBuilder. Did you add the class to the classpath?
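If the class genuinely is not on the daemon classpath, a minimal sketch for checking and extending it (the custom-scheduler jar path below is hypothetical):
hadoop classpath | tr ':' '\n' | grep yarn          # list the YARN jars the daemons will load
export HADOOP_CLASSPATH=/path/to/my-scheduler.jar:$HADOOP_CLASSPATH   # hypothetical jar path
That said, YarnServerResourceManagerServiceProtos ships with the stock YARN server jars, so a NoClassDefFoundError for it, together with the NoSuchMethodError on BuilderUtils later in the log, more likely means the rebuilt scheduler jar was compiled against, or bundled with, a different Hadoop version than the cluster is running.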
I think it could be some user session/permission issue; I got the same error on my standalone instance (Ubuntu Desktop 16 LTS, jdk1.8.92, and Hadoop 2.7.2). It works normally again if I restart my machine and start over, but the same error keeps popping up if I just re-login, restart the daemons, and resubmit the job in the same terminal session.
Were you able to fix this issue?
Steps to reproduce on my machine are:
(1) Start a terminal and log in (on the terminal) to the dedicated Hadoop user hduser (a sudo user) using the command: su hduser
(2) Start the Hadoop daemons using the commands: start-dfs.sh and start-yarn.sh
(3) I can see all processes with the jps command.
(4) A few MR jobs complete successfully. I submit the same job again after about 10-15 minutes.
(5) The hduser session is thrown out and I land back in the regular desktop user session.