AWS EMR S3DistCp: The auxService:mapreduce_shuffle does not exist - hadoop

I am connected to an AWS EMR v5.4.0 instance over SSH and I want to call s3distcp. This link demonstrates how to set up an EMR step to call it, but when I run it I get the following error:
Container launch failed for container_1492469375740_0001_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:390)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I followed the instructions here but it still didn't work.
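For reference, the step was roughly equivalent to running something like the following on the master node over SSH (the bucket and paths are placeholders):
$ s3-dist-cp --src hdfs:///user/hadoop/input --dest s3://my-bucket/output/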

It turns out I needed to restart the YARN NodeManager service after configuring mapreduce_shuffle:
$ initctl list | grep yarn
hadoop-yarn-resourcemanager start/running, process 1256
hadoop-yarn-proxyserver start/running, process 702
hadoop-yarn-nodemanager start/running, process 896
$ sudo stop hadoop-yarn-nodemanager
$ sudo start hadoop-yarn-nodemanager
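To confirm the NodeManager came back up, checking the service status again should be enough (upstart commands, as on this EMR 5.x AMI):
$ sudo status hadoop-yarn-nodemanager
$ initctl list | grep yarn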
Also, in case it helps, the yarn-site.xml file was located at /etc/hadoop/conf/yarn-site.xml. It already had an entry for yarn.nodemanager.aux-services, but mapreduce_shuffle wasn't configured:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>spark_shuffle,</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
<value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
So I added it like this:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>spark_shuffle,mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
<value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
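Note that yarn.nodemanager.aux-services is a NodeManager-side setting, so on a multi-node cluster the same edit (and restart) has to be made on every node that runs a NodeManager, not just the master. A rough sketch, where the core/task hostnames are placeholders:
$ for host in core-node-1 core-node-2; do ssh hadoop@"$host" 'sudo stop hadoop-yarn-nodemanager; sudo start hadoop-yarn-nodemanager'; done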

Related

InvalidAuxServiceException in MapReduce Job

I am getting the following exception while running a MapReduce job on a recently created open-source Hadoop cluster. I am running the latest Hadoop version, 3.3.0.
2020-09-03 00:58:30,068 INFO mapreduce.Job: Task Id : attempt_1599094453872_0001_m_000000_2, Status : FAILED
Container launch failed for container_1599094453872_0001_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:83)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:57)
at java.lang.reflect.Constructor.newInstance(Constructor.java:437)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:163)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:820)
As per some of the online suggestions, I have added the following two properties to yarn-site.xml and restarted both YARN and DFS. However, it is still throwing the same exception as above. Sometimes the job succeeds despite the exception.
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
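One way to double-check that the restarted NodeManagers actually picked up these properties is to query a NodeManager's /conf endpoint and look for mapreduce_shuffle (the hostname below is a placeholder; 8042 is the default NodeManager web port):
$ curl -s http://worker-node:8042/conf | grep mapreduce_shuffle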

Hbase configuration with Hdfs HA

I am trying to set up HBase HA with Hadoop HA. I have completed the Hadoop HA setup and tested it.
But in the HBase setup, while starting, I am getting the following error:
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2426)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:231)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:137)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2436)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdfs-nameservice
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:258)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:153)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1003)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:570)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:381)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2419)
... 5 more
Caused by: java.net.UnknownHostException: hdfs-nameservice
... 25 more
I think my HBase setup doesn't recognize my nameservice hdfs-nameservice.
I am using HBase 1.2.4 and Hadoop 2.7.3.
My hbase-site.xml has
<property>
<name>hbase.rootdir</name>
<value>hdfs://hdfs-nameservice/hbase</value>
</property>
core-site.xml has
<property>
<name>fs.defaultFS</name>
<value>hdfs://hdfs-nameservice:8020</value>
</property>
and hdfs-site.xml has
<property>
<name>dfs.nameservices</name>
<value>hdfs-nameservice</value>
</property>
<property>
<name>dfs.ha.namenodes.hdfs-nameservice</name>
<value>namenode1,namenode2</value>
</property>
//with rpc, http address confs for both namenode1&2
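(For illustration only, the elided per-NameNode entries typically look like this; the hostnames and default ports are placeholders:)
<property>
<name>dfs.namenode.rpc-address.hdfs-nameservice.namenode1</name>
<value>nn1-host:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.hdfs-nameservice.namenode1</name>
<value>nn1-host:50070</value>
</property>
<!-- plus the same two entries for namenode2 -->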
Tried:
1. Copied both core-site.xml and hdfs-site.xml into the HBase conf directory.
2. Added the Hadoop conf path to both the environment variables and hbase-env.sh.
Still couldn't figure out how to make HBase recognize hdfs-nameservice.
Any help would be much appreciated.

SecondaryNameNode on master and DataNode not starting on slave, Hadoop 2.6.0

When I start Hadoop using start-all.sh, the DataNode and SecondaryNameNode do not come up on the server, and on the slave the DataNode does not start.
When I troubleshoot by running hdfs datanode I get this error:
15/06/29 11:06:34 INFO datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/06/29 11:06:35 WARN common.Util: Path /var/lib/hadoop/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
15/06/29 11:06:35 FATAL datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2152)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
... 9 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)
at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:39)
... 14 more
15/06/29 11:06:35 INFO util.ExitUtil: Exiting with status 1
15/06/29 11:06:35 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localserver39/10.200.208.28
What is the issue with my DataNode on the slave and the SecondaryNameNode on the master?
Running start-dfs.sh on the master gives this output:
hadoop@10.200.208.29's password: 10.200.208.28: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-localserver39.out
10.200.208.28: nice: /usr/libexec/../bin/hdfs: No such file or directory
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-MC-RND-1.out
After running jps I get this:
bash-3.2$ jps
8103 Jps
7437 DataNode
7309 NameNode
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://10.200.208.29:9000/</value>
</property>
</configuration>
hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/Backup-HDD/hadoop/datanode</value>
</property>
<property>
<name>dfs.namenode.data.dir</name>
<value>/Backup-HDD/hadoop/namenode</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/Backup-HDD/hadoop/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/Backup-HDD/hadoop/datanode</value>
</property>
Remove the below properties from hdfs-site.xml:
<property>
<name>dfs.datanode.data.dir</name>
<value>/Backup-HDD/hadoop/datanode</value>
</property>
<property>
<name>dfs.namenode.data.dir</name>
<value>/Backup-HDD/hadoop/namenode</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/Backup-HDD/hadoop/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/Backup-HDD/hadoop/datanode</value>
</property>
Add the below two properties to hdfs-site.xml:
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/user/Backup-HDD/hadoop/datanode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/user/Backup-HDD/hadoop/namenode</value>
</property>
Make sure the paths specified in the name and data dirs exist on your system.
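For example, creating the directories and handing them to the account that runs the daemons might look like this (the paths mirror the values above; the hadoop user/group is an assumption):
$ mkdir -p /home/user/Backup-HDD/hadoop/datanode /home/user/Backup-HDD/hadoop/namenode
$ sudo chown -R hadoop:hadoop /home/user/Backup-HDD/hadoop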
Problem solved after searching on Google: update .bashrc and .bash_profile.
cat .bashrc
#!/bin/bash
# unset all HADOOP environment variables
env | grep HADOOP | sed 's/.*\(HADOOP[^=]*\)=.*/\1/' > un_var
while read line; do unset "$line"; done < un_var
rm un_var
export JAVA_HOME="/usr/java/latest/"
export HADOOP_PREFIX="/home/hadoop/hadoop"
export HADOOP_YARN_USER="hadoop"
export HADOOP_HOME="$HADOOP_PREFIX"
export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
export HADOOP_PID_DIR="$HADOOP_PREFIX"
export HADOOP_LOG_DIR="$HADOOP_PREFIX/logs"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.io.tmpdir=$HADOOP_PREFIX/tmp"
export YARN_HOME="$HADOOP_PREFIX"
export YARN_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
export YARN_PID_DIR="$HADOOP_PREFIX"
export YARN_LOG_DIR="$HADOOP_PREFIX/logs"
export YARN_OPTS="$YARN_OPTS -Djava.io.tmpdir=$HADOOP_PREFIX/tmp"
cat .bash_profile
#!/bin/bash
if [ -f ~/.bashrc ]; then
source ~/.bashrc
fi
The issue was with the Bash profile.
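After updating the files, log in again (or source them) and confirm the variables took effect, for example:
$ source ~/.bash_profile
$ echo "$HADOOP_HOME" "$HADOOP_CONF_DIR"   # both should point under /home/hadoop/hadoop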

The node /hbase is not in ZooKeeper

I am a newbie in Hadoop trying to install HBase in pseudo-distributed mode, version hbase-0.98.10.1-hadoop1-bin, with Hadoop 2.5.2. I am not able to add a table.
The following error keeps appearing when I try to create a table:
client.HConnectionManager$HConnectionImplementation: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
After displaying the error many times (about 50 times), it finally gives the final error:
ERROR: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
The latest entry in the log file is:
2015-02-23 16:38:39,456 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3017)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:186)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3031)
Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1113)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy8.getProtocolVersion(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy8.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:942)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:533)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3012)
... 5 more
The hbase-site.xml configuration file:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:54310/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/Hbase/zookeeper</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
Output of jps is:
7584 Main
8532 HQuorumPeer
4435 SecondaryNameNode
4880 NodeManager
4269 DataNode
7735 FsShell
4592 ResourceManager
4141 NameNode
9128 Jps
3147 ZKServerTool
3651 HRegionServer
2992 HMaster
What could be the possible error? Any help is appreciated.
It just worked after using a different HBase version. I was using hbase-0.98.10.1-hadoop1-bin, which was not compatible with my Hadoop 2.5.2, so I changed the HBase version to hbase-X.XX.XX.X-hadoop2-bin (which is compatible with Hadoop 2.x) and followed Apache's installation steps.
Thank you all.
It means ZooKeeper does not have the node '/hbase', so create a node named '/hbase' in ZooKeeper: go to zkCli and run the 'create /hbase "" ' command.
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:54310/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/Hbase/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase-unsecure</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.master</name>
<value>hadoop-master:60000</value>
</property>
</configuration>
Make sure ZooKeeper is started and the dataDir exists.
I use it in Talend: when I add an HBase connection in the Hadoop cluster, at step 2/2 I choose the repository in the Hadoop cluster; when I enter the server and port (default 2181) and click the 'Check' button, the system prompts:
Connection failure. You must change the Database Settings.
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master
Then you need to add a Hadoop property: click the button and add
zookeeper.znode.parent=/hbase-unsecure
Click OK, then click 'Check' again; the system prompts success.
The above answer is correct but a bit lengthy. I was able to solve this problem by just adding the following property in hbase-site.xml (I used hbase-1.2.1):
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/mnt/data/zookeeper</value>
</property>
I didn't have to create /mnt/data/zookeeper either. Since I was using HBase in standalone mode I didn't have to run ZooKeeper; in fact, it gave an error when I did.
The complete hbase-site.xml configuration file looks like this:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///mnt/data/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/mnt/data/zookeeper</value>
</property>
</configuration>
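If you are not sure which parent znode the master actually wrote, you can ask ZooKeeper directly; a quick check might look like this (the host and port are assumptions about your quorum):
$ zkCli.sh -server localhost:2181 ls /
Look for /hbase or /hbase-unsecure in the listing and make zookeeper.znode.parent match.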

YARN ResourceTrackerService failed in state STARTED

I am trying to set up a Hadoop cluster on a few machines with the Hadoop directory on a shared disk. HDFS worked well. But when I try to start YARN, the ResourceTracker throws a BindException. The node (ahti.d.umn.edu, 131.212.41.9) on which the ResourceTracker is configured to run is reachable (I can SSH into it) and the port (28025) is also open.
org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [ahti.d.umn.edu:28025] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [ahti.d.umn.edu:28025] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
at org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:139)
at org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService.serviceStart(ResourceTrackerService.java:159)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:503)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:898)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:938)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:935)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:935)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:979)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1104)
Following is my yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>131.212.41.9</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>131.212.41.9:28025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>131.212.41.9:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>131.212.41.9:8050</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>131.212.41.9:8041</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/scratch/dfs/yarn</value>
</property>
<property>
<name>yarn.log.dir</name>
<value>/scratch/hadoop/yarn/logs</value>
</property>
</configuration>
If it matters, I am running Java 8.
Any clues on how to fix it?
Looks like it could be because of one of two reasons:
1. Maybe some other instance of the ResourceManager is already running and using the port. Kill that ResourceManager instance and start it again: find its process ID using the command ps aux | grep -i resourcemanager, then kill it using kill -9 <RESOURCE_MANAGER_PID> (see the commands sketched below).
2. Hadoop doesn't fully support JDK 8. See the link for Hadoop-supported Java versions. If option 1 doesn't work, try downgrading your Java version to JDK 7.
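To check option 1 concretely (and whether the configured address is even assignable on that host, which is what "Cannot assign requested address" usually means), something like this might help; the commands assume a Linux box and use the port from the config above:
$ ip addr | grep 131.212.41.9    # is the IP actually assigned to a local interface?
$ sudo netstat -tlnp | grep 28025    # is something already listening on the ResourceTracker port? (or: ss -tlnp)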
