Recently I was trying to understand how Mumak works (see, e.g., MAPREDUCE-728).
It basically takes a job trace and a topology trace and simulates a Hadoop cluster.
I couldn't understand how it assigns splits across nodes.
What does Mumak mean by a local map task and a non-local task?
In MapReduce there is a notion of "locality", which signifies how "far away" a task runs from the data it works on. The best locality is running the task on a node that holds the data it needs; the next best is a node in the same rack as a node holding the data, and so on.
Mumak has the ability to slow down tasks scheduled on non-local nodes via the following settings in your configuration file:
<property>
<name>mumak.scale.racklocal</name>
<value>1.5</value>
<description>Scaling factor for task attempt runtime of rack-local over
node-local</description>
</property>
<property>
<name>mumak.scale.rackremote</name>
<value>1.8</value>
<description>Scaling factor for task attempt runtime of rack-remote over
node-local</description>
</property>
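For example, with the values above, a map attempt that the trace records as taking 60 seconds node-locally would be simulated as taking about 90 seconds (60 × 1.5) if scheduled rack-local and about 108 seconds (60 × 1.8) if scheduled rack-remote.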
I have a very small 2-node Hadoop-HBase cluster and I am executing MapReduce jobs on it. I use Hadoop 2.5.2. I have 32 GB free for MapReduce on each node (the nodes have 64 GB of memory each), with the following configuration in yarn-site.xml:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>32768</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>15</value>
</property>
My resource requirement is 2 GB for each mapper/reducer that gets executed; I have configured this in mapred-site.xml. Given these configurations, with a total of about 64 GB of memory and 30 vcores, I see about 31 mappers or 31 reducers executing in parallel.
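(For context, such a per-task request is typically configured in mapred-site.xml roughly as follows; this is a sketch, not my exact file:)
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>
With 2 × 32768 MB available to YARN and 2048 MB per container, up to 32 containers fit in the cluster; one is normally taken by the MapReduce ApplicationMaster, which presumably accounts for the ~31 parallel tasks.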
While all this is fine, there is one part that I am trying to figure out: the number of mappers or reducers executing in parallel is not the same on both nodes; one of the nodes runs more tasks than the other. Why does this happen? Can it be controlled? If so, how?
I suppose YARN does not see these as the resources of individual nodes but rather as resources of the cluster, and spawns tasks wherever it can. Is this understanding correct? If not, what is the correct explanation for this behaviour during an MR execution?
I'm planning a data processing pipeline. My scenario is this:
A user uploads data to a server
This data should be distributed to one (and only one) node in my cluster. There is no distributed computing, just picking the node that currently has the least to do.
The data processing pipeline gets data from some kind of distributed job engine. And here is (finally) my question: many job engines rely on HDFS to work on the data, but since this data is processed on one node only, I'd rather avoid distributing it. My understanding is that HDFS keeps the data redundant, though I could not find any information on whether that means all data on HDFS is available on all nodes, or whether the data mostly stays on the node where it is processed (locality).
For I/O reasons, it would be a concern in my usage scenario if data on HDFS were fully replicated.
You can go with Hadoop (MapReduce + HDFS) to solve your problem.
You can tell HDFS to store as many copies as you want; see the dfs.replication property below. Set this value to 1 if you want only one copy.
conf/hdfs-site.xml - on the master and all slave machines:
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
It is not necessary for HDFS to keep a copy of the data on each and every node.
Hadoop works on the principle of "move code to data". Since moving code (usually a few MB) demands far less network bandwidth than moving data in GBs or TBs, you do not need to worry about data locality or network bandwidth; Hadoop takes care of it.
I have a simple Oozie workflow which executes a MapReduce job as a shell action. After submitting the job, its status becomes RUNNING and it stays there and never ends. The cluster shows two jobs running: one for the shell application launcher and one for the actual MapReduce job. However, the MapReduce job is shown as UNASSIGNED and its progress is zero (which means it has not been started yet).
Interestingly, when I kill the Oozie job, the MapReduce job actually starts running and completes successfully. It looks like the shell launcher is blocking it.
P.S. It is a simple workflow and there is no start or end date that could cause it to wait.
Please consider the case below, based on your memory resources.
The number of containers depends on the number of blocks: if you have 2 GB of data with a 512 MB block size, YARN creates 4 map tasks (2048 MB / 512 MB = 4 blocks, one map per block) and 1 reduce task. When running MapReduce on a small cluster, we should follow some rules when submitting the job.
You should configure the properties below according to your RAM, disk, and cores.
<property>
<description>The minimum allocation for every container request at the RM,
in MBs. Memory requests lower than this won't take effect,
and the specified value will get allocated at minimum.</description>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<description>The maximum allocation for every container request at the RM,
in MBs. Memory requests higher than this won't take effect,
and will get capped to this value.</description>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
Also set the Java heap size according to the memory resources. Once the above properties are set accordingly in yarn-site.xml, the MapReduce job should run efficiently.
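For example, a common rule of thumb is to keep the task heap at roughly 80% of the container size; the values below are only an illustration consistent with the 512-2048 MB allocations above, set in mapred-site.xml:
<property>
<name>mapreduce.map.memory.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx820m</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx1638m</value>
</property>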
When a job is stuck in the "UNASSIGNED" state, it usually means the ResourceManager (RM) cannot allocate a container to the job.
Check the capacity configured for the user and the queue. Giving them more capacity should help.
With Hadoop 2.7 and the Capacity Scheduler specifically, the following properties need to be examined:
yarn.scheduler.capacity.<queue-path>.capacity
yarn.scheduler.capacity.<queue-path>.user-limit-factor
yarn.scheduler.capacity.maximum-applications / yarn.scheduler.capacity.<queue-path>.maximum-applications
yarn.scheduler.capacity.maximum-am-resource-percent / yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent
See more details on those properties at
Hadoop: Capacity Scheduler - Queue Properties
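As a sketch only (the root.default queue path and the values are assumptions for a default setup), raising the ApplicationMaster share and the user limit in capacity-scheduler.xml would look something like this:
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>0.5</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
<value>2</value>
</property>
This is particularly relevant to the Oozie case above: the shell launcher and the child MapReduce job each need their own ApplicationMaster, so a low maximum-am-resource-percent (the default is 0.1) can leave the child job stuck in UNASSIGNED while the launcher holds the available AM capacity.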
I have a Hadoop 2.2 cluster deployed on a small number of powerful machines. I have a constraint to use YARN as the framework, which I am not very familiar with.
How do I control the number of actual map and reduce tasks that will run in parallel? Each machine has many CPU cores (12-32) and enough RAM. I want to utilize them maximally.
How can I monitor that my settings actually led to a better utilization of the machine? Where can I check how many cores (threads, processes) were used during a given job?
Thanks in advance for helping me melt these machines :)
1.
In MR1, the mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum properties dictated how many map and reduce slots each TaskTracker had.
These properties no longer exist in YARN. Instead, YARN uses yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores, which control the amount of memory and CPU on each node; both are available to both maps and reduces.
Essentially:
YARN has no TaskTrackers, just generic NodeManagers. Hence, there is no longer a separation between map slots and reduce slots. Everything depends on the amount of memory in use/demanded.
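As an illustration only (the values below are assumptions, not recommendations), the parallelism comes from dividing what each NodeManager offers by what each task requests:
<!-- yarn-site.xml: what each NodeManager offers (illustrative values) -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>49152</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>12</value>
</property>
<!-- mapred-site.xml: what each map task requests -->
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.map.cpu.vcores</name>
<value>1</value>
</property>
With these numbers, each node could run at most min(49152 / 2048, 12 / 1) = 12 such map containers concurrently (whether vcores are actually enforced depends on the scheduler's resource calculator).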
2.
Using the web UIs you can get a lot of monitoring/admin kind of info:
NameNode - http://<namenode-host>:50070/
Resource Manager - http://<resourcemanager-host>:8088/
In addition, Apache Ambari is meant for this:
http://ambari.apache.org/
And Hue for interfacing with the Hadoop/YARN cluster in many ways:
http://gethue.com/
There is a good guide on YARN configuration from Hortonworks.
You may analyze your job in the Job History Server, which can usually be found on port 19888. Ambari and Ganglia are also very good for measuring cluster utilization.
I have the same problem.
In order to increase the number of mappers, it is recommended to reduce the size of the input split (each input split is processed by a mapper, and therefore by a container). I don't know how to do it; indeed, Hadoop 2.2 / YARN does not take any of the following settings into account:
<property>
<name>mapreduce.input.fileinputformat.split.minsize</name>
<value>1</value>
</property>
<property>
<name>mapreduce.input.fileinputformat.split.maxsize</name>
<value>16777216</value>
</property>
<property>
<name>mapred.min.split.size</name>
<value>1</value>
</property>
<property>
<name>mapred.max.split.size</name>
<value>16777216</value>
</property>
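(For a sense of scale: if the 16777216-byte (16 MB) maximum split size were honoured, a single 1 GB input file would be divided into about 64 splits and would therefore get about 64 mappers.)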
Best
I am running some MapReduce tasks on Hadoop. The mapper is used to generate data and hence does not depend upon HDFS block placement. To test my system I am using 2 nodes and one master node. I am doing my testing on Hadoop 2.0 with YARN.
There is something very uncomfortable that I find with Hadoop. I have configured it to run 8 map tasks. Unfortunately Hadoop is launching all 8 map tasks on one node, and the other node is almost idle. There are 4 reducers, and it does not balance these reducers either. It really results in poor performance when that happens.
I have these properties set in mapred-site.xml on both the job tracker and the task tracker nodes:
<property>
<name>mapreduce.tasktracker.map.tasks.maximum</name>
<value>2</value>
</property>
<property>
<name>mapreduce.tasktracker.reduce.tasks.maximum</name>
<value>2</value>
</property>
Can someone explain whether this problem can be solved, or why such a problem exists with Hadoop?
Don't think of mappers/reducers as mapping one-to-one with servers. What it sounds like is happening is that your system knows the load is so low that there is no need to launch reducers across the cluster. It is trying to avoid the network overhead of transferring files from the master to the slave nodes.
Think of the number of mappers and reducers as how many concurrent threads you will allow your cluster to run. This is important when determining how much memory to allocate for each mapper/reducer.
To force an even distribution you could try allocating enough memory for each mapper/reducer to make it require a whole node, as in the sketch below. For example, 4 nodes, 8 mappers: force each mapper to take 50% of the RAM on each node. Not sure if this will work as expected, but really Hadoop load balancing is something that is good in theory, yet might not seem that way for small-data situations.
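A rough sketch of that idea for the 2-node setup above (the values are assumptions, not tested): give YARN 16 GB per node and request half of it per map task, so no node can run more than two of the eight maps at once and the scheduler has to use both nodes.
<!-- yarn-site.xml on each slave: memory offered by the NodeManager (assumed 16 GB) -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>16384</value>
</property>
<!-- mapred-site.xml: each map task requests half a node -->
<property>
<name>mapreduce.map.memory.mb</name>
<value>8192</value>
</property>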