Hadoop YARN job is getting stuck at map 0% and reduce 0% - hadoop

I am trying to run a very simple job to test my Hadoop setup, so I tried the word count example, which gets stuck at 0%. I then tried some other simple jobs, and every one of them got stuck.
14/07/14 23:55:51 INFO mapreduce.Job: Running job: job_1405376352191_0003
14/07/14 23:55:57 INFO mapreduce.Job: Job job_1405376352191_0003 running in uber mode : false
14/07/14 23:55:57 INFO mapreduce.Job: map 0% reduce 0%
I am using Hadoop version 2.3.0-cdh5.0.2.
I did some quick research on Google, which suggested increasing
yarn.scheduler.minimum-allocation-mb
yarn.nodemanager.resource.memory-mb
I have a single-node cluster running on my MacBook, which has a dual-core CPU and 8 GB of RAM.
My yarn-site.xml file:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>resourcemanager.company.com</value>
</property>
<property>
<description>Classpath for typical applications.</description>
<name>yarn.application.classpath</name>
<value>
$HADOOP_CONF_DIR,
$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>file:///data/1/yarn/local,file:///data/2/yarn/local,file:///data/3/yarn/local</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>file:///data/1/yarn/logs,file:///data/2/yarn/logs,file:///data/3/yarn/logs</value>
</property>
<property>
</property>
<name>yarn.log.aggregation.enable</name>
<value>true</value>
<property>
<description>Where to aggregate logs</description>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>hdfs://var/log/hadoop-yarn/apps</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run </description>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>8092</value>
</property>
<property>
<name>yarn.app.mapreduce.am.command-opts</name>
<value>-Xmx768m</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>Execution framework.</description>
</property>
<property>
<name>mapreduce.map.cpu.vcores</name>
<value>4</value>
<description>The number of virtual cores required for each map task.</description>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>8092</value>
<description>Larger resource limit for maps.</description>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx768m</value>
<description>Heap-size for child jvms of maps.</description>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>jobtracker.alexjf.net:8021</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>2048</value>
<description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8092</value>
<description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>2</value>
<description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>10</value>
<description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
<description>Physical memory, in MB, to be made available to running containers</description>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>4</value>
<description>Number of CPU cores that can be allocated for containers.</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run </description>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
My mapred-site.xml has only one property:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
I have tried several permutations and combinations but couldn't get past this.
Log of the job:
2014-07-14 23:55:55,694 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-07-14 23:55:55,697 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-07-14 23:55:55,699 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
2014-07-14 23:55:55,769 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: 8092
2014-07-14 23:55:55,769 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.abhishekchoudhary
2014-07-14 23:55:55,775 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2014-07-14 23:55:55,777 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: yarn.client.max-nodemanagers-proxies : 500
2014-07-14 23:55:55,787 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1405376352191_0003Job Transitioned from INITED to SETUP
2014-07-14 23:55:55,789 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2014-07-14 23:55:55,800 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1405376352191_0003Job Transitioned from SETUP to RUNNING
2014-07-14 23:55:55,823 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1405376352191_0003_m_000000 Task Transitioned from NEW to SCHEDULED
2014-07-14 23:55:55,824 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1405376352191_0003_m_000001 Task Transitioned from NEW to SCHEDULED
2014-07-14 23:55:55,824 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1405376352191_0003_m_000002 Task Transitioned from NEW to SCHEDULED
2014-07-14 23:55:55,825 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1405376352191_0003_m_000003 Task Transitioned from NEW to SCHEDULED
2014-07-14 23:55:55,826 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1405376352191_0003_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2014-07-14 23:55:55,827 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1405376352191_0003_m_000001_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2014-07-14 23:55:55,827 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1405376352191_0003_m_000002_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2014-07-14 23:55:55,827 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1405376352191_0003_m_000003_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2014-07-14 23:55:55,828 INFO [Thread-49] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceReqt:8092
2014-07-14 23:55:55,858 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1405376352191_0003, File: hdfs://localhost/tmp/hadoop-yarn/staging/abhishekchoudhary/.staging/job_1405376352191_0003/job_1405376352191_0003_1.jhist
2014-07-14 23:55:56,773 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2014-07-14 23:55:56,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1405376352191_0003: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:0, vCores:0> knownNMs=1

Based on the message Connecting to ResourceManager at /0.0.0.0:8030,
are you sure your ResourceManager is supposed to be at 0.0.0.0:8030 (the default)?
If not, you should add the following to your yarn-site.xml:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>MASTER ADDRESS</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8040</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
Replace MASTER ADDRESS with the address of the master node. You can individually change the addresses of the resource manager's webapp, admin interface, and so on.
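Since this is a single-node cluster, a minimal hedged sketch (assuming everything runs on the same machine, so localhost stands in for the master address) would be:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>localhost</value>
</property>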

Your settings appear to be inconsistent.
The setting yarn.nodemanager.resource.memory-mb is set to 2 GB. This is the "amount of physical memory, in MB, that can be allocated for containers." But your mapreduce.map.memory.mb is 8092 MB (roughly 8 GB), and that is what each map container actually requests. A request larger than what any NodeManager can offer can never be scheduled, which is why the log shows resourcelimit=&lt;memory:0, vCores:0&gt; and the job sits at 0%.
Additionally, you have set yarn.app.mapreduce.am.resource.mb to 8092 MB. So you're trying to allocate an ApplicationMaster, which controls the job, at 8 GB, plus several mappers at 8 GB each, on a node that only offers 2 GB.
Solution
To solve the issue, you can drop the AM size to 1 GB and the mapper size to 512 MB, which are more reasonable sizes for playing around, especially for word count.
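As a hedged sketch, the overrides could look like the following (the values are illustrative, not the only workable choice; note that YARN rounds container requests up to multiples of yarn.scheduler.minimum-allocation-mb, which is 2048 in your file, so lowering that as well keeps the smaller requests effective):
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>512</value>
</property>
<property>
<!-- JVM heap kept below the 512 MB container limit -->
<name>mapreduce.map.java.opts</name>
<value>-Xmx384m</value>
</property>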
Additional resources
You can refer to these instructions provided by Cloudera to understand these properties in more detail.

I don't know if you simply made a copy/paste error when creating this question, but looking at your yarn-site.xml, it starts with two consecutive <property> tags. I'm not sure Hadoop's XML parser will actually apply those nested <property> tags, so it's worth fixing the file's structure first.
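For reference, a well-formed file wraps each name/value pair in exactly one <property> element; a minimal sketch using a value from the question:
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>resourcemanager.company.com</value>
</property>
</configuration>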

I am using Apache Hadoop version 2.7.2, so this might be an apples-to-oranges comparison; however, I ran into the same silent stuck state the other day. In most cases, this "silence" for an extended period of time indicates that the scheduler is not able to allocate enough resources to the application.
In my specific case, with a similar configuration, increasing the value of the property yarn.nodemanager.resource.memory-mb in yarn-site.xml did the trick.
You can also check the other resource-allocation properties here.
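As a hedged illustration, on an 8 GB machine like the asker's, one might give the NodeManager about half the RAM (the right value depends on what else runs on the box):
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>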

Related

Why does my MapReduce job get stuck at Map 0% Reduce 0%?

I'm testing a filesystem on Hadoop 3.3.1, and every time I run a MapReduce job it gets stuck at "map 0% reduce 0%", while everything (containers, applications, etc.) is stuck at "RUNNING". Does anybody know why, or what I should do to find out what the real problem is?
Here's the console output:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount wordcount/input wordcount/output
2022-02-27 15:17:21,136 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at master/192.168.1.1:8032
2022-02-27 15:17:21,731 INFO input.FileInputFormat: Total input files to process : 5
2022-02-27 15:17:21,761 INFO mapreduce.JobSubmitter: number of splits:5
2022-02-27 15:17:22,064 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1645945958451_0002
2022-02-27 15:17:22,064 INFO mapreduce.JobSubmitter: Executing with tokens: []
2022-02-27 15:17:22,241 INFO conf.Configuration: resource-types.xml not found
2022-02-27 15:17:22,242 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-02-27 15:17:22,498 INFO impl.YarnClientImpl: Submitted application application_1645945958451_0002
2022-02-27 15:17:22,539 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1645945958451_0002/
2022-02-27 15:17:22,540 INFO mapreduce.Job: Running job: job_1645945958451_0002
2022-02-27 15:17:27,644 INFO mapreduce.Job: Job job_1645945958451_0002 running in uber mode : false
2022-02-27 15:17:27,646 INFO mapreduce.Job: map 0% reduce 0%
Here is my yarn-site.xml configuration:
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>8</value>
</property>
<property>
<name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
<value>99.0</value>
</property>
<property>
<name>fs.AbstractFileSystem.madfs.impl</name>
<value>madfs.AbsXxxFS</value>
</property>
</configuration>
My mapred-site.xml configuration:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/home/xyz/hadoop-3.3.1</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/home/xyz/hadoop-3.3.1</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/home/xyz/hadoop-3.3.1</value>
</property>
</configuration>
P.S. There is no ERROR or WARNING in the logs dumped by the ResourceManager and NodeManager, and the states of RMAppAttemptImpl, RMAppImpl, RMContainerImpl, ContainerMonitorImpl and ApplicationImpl are all stuck at RUNNING.
Here's the ResourceManager log hadoop-resourcemanager.out, and here's the NodeManager log hadoop-nodemanager.out.
Update:
The problem seems to be that the application failed to preempt the required resources...
The app status I got using yarn app -status <appId>:
Application Report :
Application-Id : application_1645945958451_0003
Application-Name : QuasiMonteCarlo
Application-Type : MAPREDUCE
User : xyz
Queue : default
Application Priority : 0
Start-Time : 1645976932440
Finish-Time : 0
Progress : 0%
State : RUNNING
Final-State : UNDEFINED
Tracking-URL : http://master:45299
RPC Port : 46709
AM Host : master
Aggregate Resource Allocation : 997310 MB-seconds, 486 vcore-seconds
Aggregate Resource Preempted : 0 MB-seconds, 0 vcore-seconds
Log Aggregation Status : DISABLED
Diagnostics :
Unmanaged Application : false
Application Node Label Expression : <Not set>
AM container Node Label Expression : <DEFAULT_PARTITION>
TimeoutType : LIFETIME ExpiryTime : UNLIMITED RemainingTime : -1seconds
Also, only 1 container is allocated to this job; does that have anything to do with the problem?

Hadoop 3.3.1 start-yarn.sh failed

I am trying to use Hadoop 3.3.1. HDFS starts successfully, but when starting the YARN services I run into the issue below. My configuration file and the error message are as follows:
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<!-- Enable RM high availability -->
<!--property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property-->
<!-- Specify the RM cluster id -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>tobias-yarn-test</value>
</property>
<!-- Specify the RM names -->
<!--
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
Specify the address of each RM
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>192.168.7.166</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>192.168.7.116</value>
</property>
Specify the ZK cluster address -->
<!--property>
<name>yarn.resourcemanager.zk-address</name>
<value>192.168.8.118:2181</value>
</property-->
<property>
<description>Comma separated list of Host:Port pairs. Each corresponds to a ZooKeeper server
(e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002") to be used by the RM for storing RM state.
This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
as the value for yarn.resourcemanager.store.class</description>
<name>hadoop.zk.address</name>
<value>192.168.8.118:2181</value>
</property>
<!-- Auxiliary service that must be configured to run MapReduce programs -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Enable log aggregation for the YARN cluster -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Maximum retention time for the YARN cluster's aggregated logs -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>86400</value>
</property>
<!-- Enable automatic recovery -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Store the ResourceManager state on the ZooKeeper cluster -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- Disable YARN memory checks -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.scheduler.maximun-allocation-vcores</name>
<value>4</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>6144</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8192</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
The error output:
tail -100 ../../logs/hadoop-root-resourcemanager-CentOS7.log
defaultAppPriorityPerQueue = 0
priority = 0
maxLifetime = -1 seconds
defaultLifetime = -1 seconds
2022-01-20 09:38:58,105 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: root.default
2022-01-20 09:38:58,105 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: root
2022-01-20 09:38:58,107 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
2022-01-20 09:38:58,107 INFO org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule: Initialized queue mappings, override: false
2022-01-20 09:38:58,107 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.WorkflowPriorityMappingsManager: Initialized workflow priority mappings, override: false
2022-01-20 09:38:58,107 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.MultiNodeSortingManager: MultiNode scheduling is 'false', and configured policies are
2022-01-20 09:38:58,107 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:1024, vCores:1>>, maximumAllocation=<<memory:8192, vCores:4>>, asynchronousScheduling=false, asyncScheduleInterval=5ms,multiNodePlacementEnabled=false
2022-01-20 09:38:58,109 INFO org.apache.hadoop.conf.Configuration: dynamic-resources.xml not found
2022-01-20 09:38:58,113 INFO org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain: Initializing AMS Processing chain. Root Processor=[org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor].
2022-01-20 09:38:58,113 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: disabled placement handler will be used, all scheduling requests will be rejected.
2022-01-20 09:38:58,113 INFO org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain: Adding [org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.DisabledPlacementProcessor] tp top of AMS Processing chain.
2022-01-20 09:38:58,118 INFO org.apache.hadoop.service.AbstractService: Service ResourceManager failed in state STARTED
org.apache.hadoop.service.ServiceStateException: org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /rmstore/ZKRMStateRoot/RMAppRoot/HIERARCHIES
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:935)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1420)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1461)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1457)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1457)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1508)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1699)
Caused by: org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /rmstore/ZKRMStateRoot/RMAppRoot/HIERARCHIES
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1637)
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1180)
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1156)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1153)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:607)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:597)
at org.apache.curator.framework.imps.CreateBuilderImpl$3.forPath(CreateBuilderImpl.java:362)
at org.apache.curator.framework.imps.CreateBuilderImpl$3.forPath(CreateBuilderImpl.java:310)
at org.apache.hadoop.util.curator.ZKCuratorManager.create(ZKCuratorManager.java:286)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.create(ZKRMStateStore.java:1436)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.startInternal(ZKRMStateStore.java:409)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.serviceStart(RMStateStore.java:825)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
... 12 more
2022-01-20 09:38:58,130 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2022-01-20 09:38:58,137 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.w.WebAppContext#3337d04c{cluster,/,null,STOPPED}{jar:file:/fsmeeting/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-common-3.3.1.jar!/webapps/cluster}
2022-01-20 09:38:58,159 INFO org.eclipse.jetty.server.AbstractConnector: Stopped ServerConnector#ec0c838{HTTP/1.1, (http/1.1)}{0.0.0.0:8088}
2022-01-20 09:38:58,159 INFO org.eclipse.jetty.server.session: node0 Stopped scavenging
2022-01-20 09:38:58,160 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#7c22d4f{static,/static,jar:file:/fsmeeting/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-common-3.3.1.jar!/webapps/static,STOPPED}
2022-01-20 09:38:58,163 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#2484f433{logs,/logs,file:///fsmeeting/hadoop-3.3.1/logs/,STOPPED}
2022-01-20 09:38:58,187 INFO org.apache.hadoop.ipc.Server: Stopping server on 8033
2022-01-20 09:38:58,205 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
2022-01-20 09:38:58,206 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state
2022-01-20 09:38:58,208 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
org.apache.hadoop.service.ServiceStateException: org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /rmstore/ZKRMStateRoot/RMAppRoot/HIERARCHIES
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:935)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1420)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1461)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1457)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1457)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1508)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1699)
Caused by: org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /rmstore/ZKRMStateRoot/RMAppRoot/HIERARCHIES
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1637)
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1180)
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1156)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1153)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:607)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:597)
at org.apache.curator.framework.imps.CreateBuilderImpl$3.forPath(CreateBuilderImpl.java:362)
at org.apache.curator.framework.imps.CreateBuilderImpl$3.forPath(CreateBuilderImpl.java:310)
at org.apache.hadoop.util.curator.ZKCuratorManager.create(ZKCuratorManager.java:286)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.create(ZKRMStateStore.java:1436)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.startInternal(ZKRMStateStore.java:409)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.serviceStart(RMStateStore.java:825)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
... 12 more
2022-01-20 09:38:58,207 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2022-01-20 09:38:58,206 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8033
2022-01-20 09:38:58,235 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: SHUTDOWN_MSG:
This is on a Hadoop 3.3.1 single-node pseudo-distributed setup, and my ZooKeeper version is 3.4.9. Is there a problem with this yarn-site.xml configuration? Please help me, thank you.

hadoop cluster is not running map reduce jobs - issue with scheduler

(this is a follow-up on a discussion I had regarding an earlier question I had on this matter)
I set up a small Hadoop cluster following these instructions but using Hadoop version 2.7.4. The cluster seems to work OK, but I cannot run MapReduce jobs. In particular, when trying the following:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar randomwriter out
the job prints
17/11/27 16:35:21 INFO client.RMProxy: Connecting to ResourceManager at ec2-yyy.eu-central-1.compute.amazonaws.com/xxx:8032
Running 0 maps.
Job started: Mon Nov 27 16:35:22 UTC 2017
17/11/27 16:35:22 INFO client.RMProxy: Connecting to ResourceManager at ec2-yyy.eu-central-1.compute.amazonaws.com/xxx:8032
17/11/27 16:35:22 INFO mapreduce.JobSubmitter: number of splits:0
17/11/27 16:35:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1511799491035_0006
17/11/27 16:35:22 INFO impl.YarnClientImpl: Submitted application application_1511799491035_0006
17/11/27 16:35:22 INFO mapreduce.Job: The url to track the job: http://ec2-yyy.eu-central-1.compute.amazonaws.com:8088/proxy/application_1511799491035_0006/
17/11/27 16:35:22 INFO mapreduce.Job: Running job: job_1511799491035_0006
and never gets past this state.
In the job tracker, it says
ACCEPTED: waiting for AM container to be allocated, launched and register with RM.
I then looked into the log files where I found
2017-11-27 13:50:29,202 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/usr/local/hadoop/etc/hadoop/capacity-scheduler.xml
2017-11-27 13:50:29,252 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined
2017-11-27 13:50:29,252 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined
2017-11-27 13:50:29,256 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=ADMINISTER_QUEUE:*SUBMIT_APP:*, labels=*, reservationsContinueLooking=true
2017-11-27 13:50:29,256 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root
2017-11-27 13:50:29,265 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined
2017-11-27 13:50:29,265 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined
which suggest that there is a problem with the capacity scheduler. The file capacity-scheduler.xml looks as follows:
<configuration>
<property>
<name>yarn.scheduler.capacity.maximum-applications</name>
<value>10000</value>
<description>
Maximum number of applications that can be pending and running.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>0.1</value>
<description>
Maximum percent of resources in the cluster which can be used to run
application masters i.e. controls number of concurrent running
applications.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.resource-calculator</name>
<value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
<description>
The ResourceCalculator implementation to be used to compare
Resources in the scheduler.
The default i.e. DefaultResourceCalculator only uses Memory while
DominantResourceCalculator uses dominant-resource to compare
multi-dimensional resources such as Memory, CPU etc.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.queues</name>
<value>default</value>
<description>
The queues at the this level (root is the root queue).
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.capacity</name>
<value>100</value>
<description>Default queue target capacity.</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
<value>1</value>
<description>
Default queue user limit a percentage from 0.0 to 1.0.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
<value>100</value>
<description>
The maximum capacity of the default queue.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.state</name>
<value>RUNNING</value>
<description>
The state of the default queue. State can be one of RUNNING or STOPPED.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
<value>*</value>
<description>
The ACL of who can submit jobs to the default queue.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
<value>*</value>
<description>
The ACL of who can administer jobs on the default queue.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.node-locality-delay</name>
<value>40</value>
<description>
Number of missed scheduling opportunities after which the CapacityScheduler
attempts to schedule rack-local containers.
Typically this should be set to number of nodes in the cluster, By default is setting
approximately number of nodes in one rack which is 40.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.queue-mappings</name>
<value></value>
<description>
A list of mappings that will be used to assign jobs to queues
The syntax for this list is [u|g]:[name]:[queue_name][,next mapping]*
Typically this list will be used to map users to queues,
for example, u:%user:%user maps all users to queues with the same name
as the user.
</description>
</property>
<property>
<name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
<value>false</value>
<description>
If a queue mapping is present, will it override the value specified
by the user? This can be used by administrators to place jobs in queues
that are different than the one specified by the user.
The default is false.
</description>
</property>
</configuration>
I would be grateful for any hints on how to approach this.
Thanks
c14
Everything is fine with the cluster configuration, but when it comes to job execution, the RAM provided by a t2.micro instance is not enough to run the MapReduce jobs, so it is better to use bigger instances for cluster creation and job execution.
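If bigger instances are not an option, a hedged alternative (values illustrative, sized for a t2.micro's roughly 1 GB of RAM) is to shrink YARN's container sizes in yarn-site.xml and the MapReduce AM request in mapred-site.xml so that an ApplicationMaster and at least one task can fit:
<property>
<!-- total memory YARN may hand out on this ~1 GB node -->
<name>yarn.nodemanager.resource.memory-mb</name>
<value>768</value>
</property>
<property>
<!-- allow small container requests instead of the 1024 MB default minimum -->
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>128</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>384</value>
</property>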

hadoop cluster is not running mapreduce jobs

I set up a small Hadoop cluster following these instructions but using Hadoop version 2.7.4. The cluster seems to work OK, but I cannot run MapReduce jobs. In particular, when trying the following:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar randomwriter out
the job prints
17/11/27 16:35:21 INFO client.RMProxy: Connecting to ResourceManager at ec2-yyy.eu-central-1.compute.amazonaws.com/xxx:8032
Running 0 maps.
Job started: Mon Nov 27 16:35:22 UTC 2017
17/11/27 16:35:22 INFO client.RMProxy: Connecting to ResourceManager at ec2-yyy.eu-central-1.compute.amazonaws.com/xxx:8032
17/11/27 16:35:22 INFO mapreduce.JobSubmitter: number of splits:0
17/11/27 16:35:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1511799491035_0006
17/11/27 16:35:22 INFO impl.YarnClientImpl: Submitted application application_1511799491035_0006
17/11/27 16:35:22 INFO mapreduce.Job: The url to track the job: http://ec2-yyy.eu-central-1.compute.amazonaws.com:8088/proxy/application_1511799491035_0006/
17/11/27 16:35:22 INFO mapreduce.Job: Running job: job_1511799491035_0006
and never gets past this state.
My yarn-site.xml looks as follows
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ec2-yyy.eu-central-1.compute.amazonaws.com</value>
</property>
</configuration>
My mapred-site.xml looks as follows
<configuration>
<property>
<name>mapreduce.jobtracker.address</name>
<value>ec2-yyy.eu-central-1.compute.amazonaws.com:54311</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Do you have an idea how I could approach this issue?
Thanks
c14

DataNode is not starting on hadoop multinode cluster

I removed the contents of the Hadoop tmp directory, dropped the current folder from the NameNode directory, and formatted the NameNode, but got an exception: org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException java.net.BindException: Port in use: localhost:0.
My configuration is as follows:
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000/</value>
<description>NameNode URI</description>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hduser/hdfs/namenode</value>
<description>NameNode directory for namespace and transaction logs storage.</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hduser/hdfs/datanode</value>
<description>DataNode directory</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8050</value>
</property>
</configuration>
DataNode log:
2017-01-20 16:27:21,927 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-20 16:27:23,346 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-01-20 16:27:23,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-01-20 16:27:23,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-01-20 16:27:23,448 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2017-01-20 16:27:23,450 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2017-01-20 16:27:23,461 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2017-01-20 16:27:23,491 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2017-01-20 16:27:23,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2017-01-20 16:27:23,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2017-01-20 16:27:23,650 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-01-20 16:27:23,663 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-01-20 16:27:23,673 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2017-01-20 16:27:23,677 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-01-20 16:27:23,689 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-01-20 16:27:23,690 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-01-20 16:27:23,690 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-01-20 16:27:23,716 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: localhost:0
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:104)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:760)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1112)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2374)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2261)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2308)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2485)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2509)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
... 10 more
2017-01-20 16:27:23,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Shutdown complete.
2017-01-20 16:27:23,728 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.net.BindException: Port in use: localhost:0
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:104)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:760)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1112)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2374)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2261)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2308)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2485)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2509)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
... 10 more
2017-01-20 16:27:23,730 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-01-20 16:27:23,735 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/10.0.1.1
************************************************************/
Any help would be highly appreciated.
