I have a very small 2-node Hadoop/HBase cluster on which I am executing MapReduce jobs. I use Hadoop 2.5.2. Each node has 64 GB of memory, of which 32 GB is free for MapReduce, with the following configuration in yarn-site.xml:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>32768</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>15</value>
</property>
My resource requirement is 2 GB for each mapper/reducer that gets executed, and I have configured this in mapred-site.xml. Given these configurations, with a total of about 64 GB of memory and 30 vcores, I see about 31 mappers or 31 reducers executing in parallel.
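For reference, the 2 GB-per-task requirement corresponds to mapred-site.xml settings along these lines (a sketch using the standard property names, with the values implied by the description above):
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>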
While all this is fine, there is one part that I am trying to figure out. The number of mappers or reducers executing in parallel is not the same on both nodes; one of the nodes has a higher number of tasks than the other. Why does this happen? Can this be controlled, and if so, how?
I suppose YARN does not see these as resources of a node but rather as resources of the cluster, and spawns the tasks wherever it can in the cluster. Is this understanding correct? If not, what is the correct explanation for this behaviour during an MR execution?
Related
I am running a compute-intensive, Hadoop-based, map-reduce application. I have configured Hadoop to use as few threads as possible, but multiple concurrent deployments lead to an increase in the application's execution time.
I cannot find the cause of this increase in execution time, so there must be a bottleneck that I have not discovered and/or a configuration parameter that I have missed.
Testbed
My testbed consists of 3 Dell PowerEdge R630 machines, each with an Intel Xeon E5-2630 v3: 8 cores, 2 threads/core. These machines are on the same 10 Gbps network, interconnected by the same switch. They will be referred to as M1, M2, and M3.
Hadoop Configuration
I am running hadoop-1.2.1 on java-1.6.0-openjdk-amd64. I have configured hadoop to use the smallest possible number of threads. Here is my mapred-site.xml configuration:
<configuration>
<property>
<name>mapred.map.tasks</name>
<value>1</value>
</property>
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>1</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>1</value>
</property>
<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>1</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>10.0.0.1:9001</value>
</property>
<property>
<name>mapred.map.tasks.speculative.execution</name>
<value>false</value>
</property>
<property>
<name>mapred.reduce.tasks.speculative.execution</name>
<value>false</value>
</property>
<property>
<name>tasktracker.http.threads</name>
<value>2</value>
</property>
<property>
<name>mapred.reduce.parallel.copies</name>
<value>2</value>
</property>
</configuration>
Deployment
The actual deployment takes place on containers, spawned via nova-docker. In each deployment I am spawning 3 containers, C1, C2 and C3, with 1 container per physical machine. Let's assume that C1 is spawned on M1, C2 on M2, C3 on M3.
In particular:
One container, C1, acts as the "Master"; it runs the Namenode and the Jobtracker services.
The other two containers, C2 and C3, act as the "Slaves", they run the Datanode and the Tasktracker services.
I have run this experiment twice:
One concurrent deployment
Two concurrent deployments
"Two concurrent" deployments means that there are two identical deployments, running concurrently. To further clarify, when two deployments are running, there are six containers present:
- C1a and C1b on M1
- C2a and C2b on M2
- C3a and C3b on M3
C1a, C2a and C3a belong to the same map-reduce execution and communicate with each other, as expected. Same goes for the containers C1b, C2b and C3b, respectively.
Execution time
Both cases (1 concurrent deployment, 2 concurrent deployments) were run 10 times to get a good sample. Comparing the execution times for 1 and 2 concurrent deployments, it is evident that with 2 concurrent deployments the execution time rises by 6.72%.
Issue
My question is: why is the execution time longer when running two concurrent deployments, even though I have configured hadoop to use as few threads as possible? In particular:
Could I be PCIe-bottlenecked or CPU-bottlenecked? (see below)
Have I missed something else in configuring hadoop to use as few threads as possible?
Is hadoop using more threads than the ones I am aware of, that could be congesting the CPU or another resource?
I have already investigated the following:
Bandwidth consumption: we are definitely not network-bottlenecked. The network can sustain up to 10 Gbps, the application consumes no more than 400-500 Mbps on average, and nobody else is using the cluster.
PCIe: I have already measured the PCIe bandwidth to investigate whether I am bottlenecked there. I have opened a related question on Superuser to ask whether my readings indicate a congested PCIe link or not.
CPU utilization: please see the next section.
CPU metrics
I installed the PCM tools to measure the CPU utilization during the executions. These tools were installed on one of the physical machines that hosts the slave containers (Datanode, Tasktracker).
I measured the utilization for cores in active state for the following cases:
Idle (labeled "0 tenants")
1 concurrent deployment (labeled "1 tenant")
2 concurrent deployments (labeled "2 tenants")
As is evident, the CPU utilization for 1 or 2 concurrent deployments is similar, although it is slightly higher on average for 1 deployment. Therefore, CPU utilization does not seem to be the issue; what could I be missing?
Please let me know in the comments whether I could provide any additional information.
To answer my own question: the eventual bottleneck is I/O bandwidth when writing to disk. With the help of iotop I measured the writing speed during execution, and with dd I measured the maximum writing speed:
# dd if=/dev/zero of=diskbench bs=1G count=1 conv=fdatasync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 7.38756 s, 145 MB/s
The writing speed (as reported by iotop) hovers around 10 MB/sec, quite often spiking up to 120-160 MB/sec. A natural question would be: "why is there continuous writing to the disk?" That is how Hadoop works: the mappers write their intermediate output to the local disk, not to HDFS, as has been discussed here.
Therefore, since the mappers write continuously to the local hard disk, a bottleneck is expected there when multiple Hadoop executions run at once, even if there is CPU processing power to spare.
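As a side note, one possible mitigation (not something verified in the tests above, just a sketch) is to compress the intermediate map output so that less data hits the local disk. For Hadoop 1.x the relevant mapred-site.xml properties would look roughly like this:
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
</property>
<property>
<name>mapred.map.output.compression.codec</name>
<value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>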
I have deployed a 6-node (1 master and 5 slaves) Hadoop cluster on Amazon EC2 using t2.medium instances (each instance has 4 GB of RAM and two virtual cores). In yarn-site.xml, I have added these two properties:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>3072</value>
</property>
This allows the NodeManager to use a maximum of 3 GB of RAM on each node, leaving 1 GB for the system.
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1536</value>
</property>
And this sets the minimum RAM to be allocated to each container. I have used these properties so that a maximum of two containers will run in parallel on each node.
But when I run the program, on the slave node where the MapReduce Application Master (MRAppMaster) is running there is no other task (container) running, even though the RAM used by the Application Master is only about 10% of the total (i.e. 400-500 MB).
On the other slave nodes, meanwhile, each node has two containers running. This degrades performance, because the program is compute-intensive and all the processing power of the node where the Application Master runs goes unused (the Application Master uses only about 0-1% of the processing power).
Ideally, there should also be one container running on the node where MRAppMaster is running.
Can anyone please help me with this?
I have a simple Oozie workflow which executes a MapReduce job as a shell action. After submitting the job, its status becomes Running and stays there, never ending. The cluster shows two jobs running: one for the shell application launcher and one for the actual MapReduce job. However, the one for the MapReduce job is shown as UNASSIGNED and its progress is zero (which means it has not been started yet).
Interestingly, when I kill the Oozie job, the MapReduce job actually starts running and completes successfully. It looks like the shell launcher is blocking it.
P.S. It is a simple workflow, and there is no start or end date that might cause it to wait.
Please consider the case below, based on your memory resources.
The number of containers depends on the number of blocks: if you have 2 GB of data with a 512 MB block size, YARN creates 4 map tasks and 1 reduce task. When running MapReduce, we should follow some rules when submitting the job (this applies to small clusters).
You should configure the properties below according to your RAM, disk, and cores.
<property>
<description>The minimum allocation for every container request at the RM,
in MBs. Memory requests lower than this won't take effect,
and the specified value will get allocated at minimum.</description>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<description>The maximum allocation for every container request at the RM,
in MBs. Memory requests higher than this won't take effect,
and will get capped to this value.</description>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
Also set the Java heap size according to your memory resources. Once the above properties are set in yarn-site.xml accordingly, the MapReduce job should run efficiently.
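For example, a minimal mapred-site.xml sketch consistent with the 2048 MB container cap above might look like this (the exact heap values are assumptions; a common rule of thumb is around 80% of the container size):
<property>
<name>mapreduce.map.memory.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx820m</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx1640m</value>
</property>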
When a job is stuck in the "UNASSIGNED" state, it usually means the ResourceManager (RM) can't allocate a container to the job.
Check the capacity configured for the user and the queue. Giving them more capacity should help.
With Hadoop 2.7 and capacity scheduler, specifically, the following properties need to be examined:
yarn.scheduler.capacity.<queue-path>.capacity
yarn.scheduler.capacity.<queue-path>.user-limit-factor
yarn.scheduler.capacity.maximum-applications / yarn.scheduler.capacity.<queue-path>.maximum-applications
yarn.scheduler.capacity.maximum-am-resource-percent / yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent
See more details on those properties at Hadoop: Capacity Scheduler - Queue Properties.
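A minimal capacity-scheduler.xml sketch touching these properties might look like the following (the queue name and values are illustrative, assuming the default queue):
<property>
<name>yarn.scheduler.capacity.root.default.capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
<value>2</value>
</property>
<property>
<name>yarn.scheduler.capacity.maximum-applications</name>
<value>10000</value>
</property>
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>0.5</value>
</property>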
I have a Hadoop 2.2 cluster deployed on a small number of powerful machines. I am constrained to use YARN as the framework, which I am not very familiar with.
How do I control the number of actual map and reduce tasks that will run in parallel? Each machine has many CPU cores (12-32) and enough RAM. I want to utilize them maximally.
How can I monitor that my settings actually led to a better utilization of the machine? Where can I check how many cores (threads, processes) were used during a given job?
Thanks in advance for helping me melt these machines :)
1.
In MR1, the mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum properties dictated how many map and reduce slots each TaskTracker had.
These properties no longer exist in YARN. Instead, YARN uses yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores, which control the amount of memory and CPU on each node, available to both maps and reduces.
Essentially:
YARN has no TaskTrackers, just generic NodeManagers. Hence, there is no longer a separation between map slots and reduce slots; everything depends on the amount of memory in use/demanded.
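As a rough sketch (the numbers are illustrative assumptions, not a recommendation), per-node parallelism works out to roughly yarn.nodemanager.resource.memory-mb divided by the per-task memory. The first two properties below belong in yarn-site.xml and the last two in mapred-site.xml; together they would allow about 16 concurrent 2 GB tasks per node:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>32768</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>16</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>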
2.
Using the web UIs you can get a lot of monitoring/admin-type info:
NameNode - http://<namenode-host>:50070/
ResourceManager - http://<resourcemanager-host>:8088/
In addition Apache Ambari is meant for this:
http://ambari.apache.org/
And Hue for interfacing with the Hadoop/YARN cluster in many ways:
http://gethue.com/
There is a good guide on YARN configuration from Hortonworks
You can analyze your job in the JobHistory server, which can usually be found on port 19888. Ambari and Ganglia are also very good for measuring cluster utilization.
I have the same problem.
In order to increase the number of mappers, it is recommended to reduce the size of the input split (each input split is processed by a mapper, and hence by a container), but I don't know how to do that.
Indeed, Hadoop 2.2/YARN does not take into account any of the following settings:
<property>
<name>mapreduce.input.fileinputformat.split.minsize</name>
<value>1</value>
</property>
<property>
<name>mapreduce.input.fileinputformat.split.maxsize</name>
<value>16777216</value>
</property>
<property>
<name>mapred.min.split.size</name>
<value>1</value>
</property>
<property>
<name>mapred.max.split.size</name>
<value>16777216</value>
</property>
best
I am running some map-reduce tasks on Hadoop. The mapper is used to generate data and hence does not depend upon HDFS block placement. To test my system I am using 2 nodes and one master node. I am doing my testing on Hadoop 2.0 with YARN.
There is something I find very uncomfortable with Hadoop. I have configured it to run 8 map tasks, but unfortunately Hadoop launches all 8 map tasks on one node while the other node sits almost idle. There are 4 reducers, and it does not balance these reducers either. This really results in poor performance when it happens.
I have these properties set in mapred-site.xml on both the jobtracker and tasktracker nodes:
<property>
<name>mapreduce.tasktracker.map.tasks.maximum</name>
<value>2</value>
</property>
<property>
<name>mapreduce.tasktracker.reduce.tasks.maximum</name>
<value>2</value>
</property>
Can someone explain whether this problem can be solved, or why such a problem exists with Hadoop?
Don't think of mappers/reducers as mapping one-to-one with servers. What sounds like is happening is that your system knows the load is so low that there is no need to launch reducers across the cluster; it is trying to avoid the network overhead of transferring files from the master to the slave nodes.
Think of the number of mappers and reducers as how many concurrent threads you will allow your cluster to run. This is important when determining how much memory to allocate for each mapper/reducer.
To force an even distribution you could try allocating enough memory to each mapper/reducer that it effectively requires a whole node. For example, with 4 nodes and 8 mappers, force each mapper to take 50% of the RAM on a node. I'm not sure whether this will work as expected; Hadoop's own load balancing is good in theory, but it might not seem that way in small-data situations.
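As a minimal sketch of that suggestion, assuming 64 GB nodes (a figure chosen purely for illustration, not taken from the question), each task could be sized at half a node's memory in mapred-site.xml:
<property>
<name>mapreduce.map.memory.mb</name>
<value>32768</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>32768</value>
</property>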