Problems with memory kill limits for YARN - hadoop

I have a problem understanding the YARN configuration.
I have the following lines in my yarn/mapreduce configs:
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
Here it is written:
By default ("yarn.nodemanager.vmem-pmem-ratio") is set to 2.1. This means that a map or reduce container can allocate up to 2.1 times the ("mapreduce.reduce.memory.mb") or ("mapreduce.map.memory.mb") of virtual memory before the NM will kill the container.
When will the NodeManager kill my container?
When the whole container reaches 2048 MB * 2.1 = 4300.8 MB? Or 1024 MB * 2.1 = 2150.4 MB?
Can I get a better explanation?

Each Mapper and Reducer runs in its own separate container (containers are not shared between Mappers and Reducers, unless it is an Uber job; check about Uber mode here: What is the purpose of "uber mode" in hadoop?).
Typically, memory requirements for a Mapper and a Reducer differ.
Hence, there are different configuration parameters for Mapper (mapreduce.map.memory.mb) and Reducer (mapreduce.reduce.memory.mb).
So, as per the settings in your configuration, the virtual memory limits for the Mapper and Reducer are:
Mapper limit: 2048 * 2.1 = 4300.8 MB
Reducer limit: 1024 * 2.1 = 2150.4 MB
In short, Mappers and Reducers have different memory settings and limits.
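For completeness, the same per-job requests can also be made from the driver code; a minimal sketch, using the property names from the question and purely illustrative values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Minimal sketch: request container sizes per job (values are illustrative).
Configuration conf = new Configuration();
conf.setInt("mapreduce.map.memory.mb", 2048);     // physical memory per map container
conf.setInt("mapreduce.reduce.memory.mb", 1024);  // physical memory per reduce container
// With yarn.nodemanager.vmem-pmem-ratio = 2.1, the NodeManager enforces a virtual
// memory ceiling of 2048 * 2.1 = 4300.8 MB per map container and
// 1024 * 2.1 = 2150.4 MB per reduce container.
Job job = Job.getInstance(conf, "memory-limits-example");  // job name is arbitrary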

Related

Apache Nutch 2.3.1, increase reducer memory

I have set up a small Hadoop cluster with HBase for Nutch 2.3.1. The Hadoop version is 2.7.7 and HBase is 0.98. I have customized a Hadoop job and now I have to set the memory for the reducer task in the driver class. I have come to know that in plain Hadoop MR jobs you can use the JobConf method setMemoryForReducer, but there isn't any such option available in Nutch. In my case, currently, the reducer memory is set to 4 GB via mapred-site.xml (the Hadoop configuration), but for Nutch I have to double it.
Is it possible without changing the Hadoop conf files, either via the driver class or nutch-site.xml?
Finally, I was able to find the solution. NutchJob achieves the objective. The following is the code snippet:
NutchJob job = NutchJob.getInstance(getConf(), "rankDomain-update");
int reducerMemMb = 8192;
// Leave roughly 20% of the container for non-heap memory.
String reducerHeap = "-Xmx" + (int) (reducerMemMb * 0.8) + "m";
job.getConfiguration().setInt("mapreduce.reduce.memory.mb", reducerMemMb);
job.getConfiguration().set("mapreduce.reduce.java.opts", reducerHeap);
// rest of the code below

How to make Hadoop/EMR use more containers per node

I'm in the process of moving our application from Hadoop 1.0.3 to 2.7, on EMR v5.1.0. I got it running, but I'm still having problems getting my head around the resource-allocation system in Yarn. With the default settings provided by EMR, Hadoop only allocates one container per node, even if I select a larger instance type for the nodes. This is a problem, since we'll now be using twice as many nodes to do the same amount of work.
I want to squeeze more containers into one node, and ensure that we're using all the available resources. I assume that I shouldn't touch yarn.nodemanager.resource.memory-mb or yarn.nodemanager.resource.cpu-vcores, since those are set by EMR to reflect the actual available resources. Which settings do I have to change?
Your container sizes are defined by the memory (the default criterion for a container) and vcores settings. The following can be configured:
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
yarn.scheduler.increment-allocation-mb
yarn.scheduler.minimum-allocation-vcores
yarn.scheduler.maximum-allocation-vcores
yarn.scheduler.increment-allocation-vcores
All the following criteria must be satisfied (they are per container, except for yarn.nodemanager.resource.cpu-vcores and yarn.nodemanager.resource.memory-mb, which are per NodeManager and hence per DataNode):
1 <= yarn.scheduler.minimum-allocation-vcores <= yarn.scheduler.maximum-allocation-vcores
yarn.scheduler.maximum-allocation-vcores <= yarn.nodemanager.resource.cpu-vcores
yarn.scheduler.increment-allocation-vcores = 1
1024 <= yarn.scheduler.minimum-allocation-mb <= yarn.scheduler.maximum-allocation-mb
yarn.scheduler.maximum-allocation-mb <= yarn.nodemanager.resource.memory-mb
yarn.scheduler.increment-allocation-mb = 512
You can also see this helpful link https://www.cloudera.com/documentation/enterprise/5-4-x/topics/cdh_ig_yarn_tuning.html
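For intuition, a back-of-the-envelope sketch of how the per-node capacity and the per-container request interact; every number below is an assumption for illustration, not an EMR default:

// Rough sketch: containers per node is roughly NodeManager memory divided by the
// per-container request, subject to the scheduler min/max/increment settings above.
long nodeMemoryMb   = 24576;  // yarn.nodemanager.resource.memory-mb (assumed value)
long mapContainerMb = 2048;   // mapreduce.map.memory.mb requested per map task (assumed)
long containersByMemory = nodeMemoryMb / mapContainerMb;  // = 12 containers
System.out.println("Approx. map containers per node: " + containersByMemory);

So the practical lever is usually the per-container request (e.g. mapreduce.map.memory.mb and the scheduler minimum allocation), rather than the NodeManager totals that EMR already sets.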

Hadoop2.4.0 creating 39063 Map tasks to process 10MB file in Local mode with invalid Inputsplit configuration

I am using hadoop-2.4.0 with all default configuration except the below:
FileInputFormat.setInputPaths(job, new Path("in")); //10mb file; just one file.
FileOutputFormat.setOutputPath(job, new Path("out"));
job.getConfiguration().set("mapred.max.split.size", "64");
job.getConfiguration().set("mapred.min.split.size", "128");
PS: I set the max split size to be less than the min (I set it by mistake initially and realized it later).
And, as per the input split calculation logic
max(minimumSize, min(maximumSize, blockSize))
max(128, min(64, 128)) --> 128 MB, and it is greater than the file size, so it should create only one input split (one mapper).
I am just curious how the framework calculates 39063 mappers each time when I run this program in Eclipse?
Logs:
2015-07-15 12:02:37 DEBUG LocalJobRunner Starting mapper thread pool executor.
2015-07-15 12:02:37 DEBUG LocalJobRunner Max local threads: 1
2015-07-15 12:02:37 DEBUG LocalJobRunner Map tasks to process: 39063
2015-07-15 12:02:38 INFO LocalJobRunner Starting task:
attempt_local192734774_0001_m_000000_0
Thanks,
In your code you have specified:
job.getConfiguration().set("mapred.max.split.size", "64");
job.getConfiguration().set("mapred.min.split.size", "128");
These values are interpreted in bytes, hence you are getting a high number of Mappers.
I think you should use something like this:
job.getConfiguration().set("mapred.min.split.size", 67108864);
67108864 is value in bytes of 64MB
Calculation: 64*1024*1024 = 67108864
mapred.max.split.size is basically used to combine small files to define the split size when you are dealing with a large number of small files, and mapred.min.split.size is used to define the split size when you are dealing with large files.
If you are using YARN or MR2, then you should use mapreduce.input.fileinputformat.split.minsize.
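If you are on the new MapReduce API, a minimal sketch of setting split sizes through the FileInputFormat helpers instead of raw property strings (the sizes below are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Minimal sketch (new API): split sizes are given in bytes.
Job job = Job.getInstance(new Configuration(), "split-size-example");
FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // 64 MB minimum split
FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);  // 128 MB maximum split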

Hadoop performance modeling

I am working on Hadoop performance modeling. Hadoop has 200+ parameters, so setting them manually is not possible. So we often run our Hadoop jobs with the default parameter values (like the defaults for io.sort.mb, io.sort.record.percent, mapred.output.compress, etc.). But using the default parameter values gives us sub-optimal performance. There is some work done in this area by Herodotos Herodotou (http://www.cs.duke.edu/starfish/files/vldb11-job-optimization.pdf) to improve performance. But I have the following doubts about their work:
They fix the values of the parameters at job start time (according to the proportionality assumption of the data) for all the phases (read, map, collect, etc.) of a MapReduce job. Can we set different values of these parameters for each phase at run time according to the run-time environment (like cluster configuration, underlying file system, etc.), by changing the Hadoop configuration files of a particular node, to get optimal performance from that node?
They are using a white-box model for the Hadoop core; is it still applicable to current Hadoop (http://arxiv.org/pdf/1106.0940.pdf)?
No, you cannot dynamically change MapReduce parameters per job per node.
Configuring a set of nodes
Rather, what you can do is change the configuration parameters per node statically in the configuration files (generally located in /etc/hadoop/conf), so that you can get the most out of a cluster with different h/w configurations.
Example: Assume you have 20 worker nodes with different hardware configurations like:
10 with configuration of 128GB RAM, 24 Cores
10 with configuration of 64GB RAM, 12 Cores
In that case you would want to configure each set of identical servers to get the most out of the hardware; for example, you would want to run more child tasks (mappers & reducers) on the worker nodes with more RAM and cores:
Nodes with 128GB RAM, 24 Cores => 36 worker tasks (mappers + reducers), JVM heap for each worker task would be around 3GB.
Nodes with 64GB RAM, 12 Cores => 18 worker tasks (mappers + reducers), JVM heap for each worker task would be around 3GB.
So, you would want to configure the set of nodes respectively with appropriate parameters.
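For intuition, a rough back-of-the-envelope for the 128 GB / 24-core nodes above; the amount reserved for the OS and Hadoop daemons is an assumption, not a figure from this answer:

// Rough sketch: derive the per-task heap from node capacity (all figures approximate).
int nodeRamGb     = 128;  // physical RAM on the larger nodes
int reservedGb    = 20;   // assumed reserve for OS, DataNode, NodeManager and overhead
int workerTasks   = 36;   // concurrent mappers + reducers targeted on this node class
int heapPerTaskGb = (nodeRamGb - reservedGb) / workerTasks;  // = 3 GB per task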
Using ToolRunner to pass configuration parameters dynamically to a Job:
Also, you can dynamically change the MapReduce parameters per job, but these parameters are applied to the job as a whole across the cluster, not just to a particular set of nodes, provided your MapReduce job driver implements Tool and is run via ToolRunner.
ToolRunner allows you to parse generic hadoop command line arguments. You'll be able to pass MapReduce configuration parameters using -D property.name=property.value.
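A minimal sketch of such a driver; the class name and job name are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Minimal ToolRunner-based driver sketch; names are illustrative.
public class MyJobDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -D key=value pairs parsed by ToolRunner.
        Job job = Job.getInstance(getConf(), "my-job");
        job.setJarByClass(MyJobDriver.class);
        // ... set mapper, reducer, input and output paths here ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
    }
}

It could then be invoked, for example, as: hadoop jar my-job.jar MyJobDriver -Dmapreduce.job.reduces=10 -Dmapreduce.map.memory.mb=2048 /input /output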
You can pass almost all Hadoop parameters dynamically to a job, but the MapReduce configuration parameters most commonly passed dynamically to a job are:
mapreduce.task.io.sort.mb
mapreduce.map.speculative
mapreduce.job.reduces
mapreduce.task.io.sort.factor
mapreduce.map.output.compress
mapreduce.map.output.compress.codec
mapreduce.reduce.memory.mb
mapreduce.map.memory.mb
Here is an example terasort job passing lots of parameters dynamically per job:
hadoop jar hadoop-mapreduce-examples.jar terasort \
  -Ddfs.replication=1 -Dmapreduce.task.io.sort.mb=500 \
  -Dmapreduce.map.sort.spill.percent=0.9 \
  -Dmapreduce.reduce.shuffle.parallelcopies=10 \
  -Dmapreduce.reduce.shuffle.memory.limit.percent=0.1 \
  -Dmapreduce.reduce.shuffle.input.buffer.percent=0.95 \
  -Dmapreduce.reduce.input.buffer.percent=0.95 \
  -Dmapreduce.reduce.shuffle.merge.percent=0.95 \
  -Dmapreduce.reduce.merge.inmem.threshold=0 \
  -Dmapreduce.job.speculative.speculativecap=0.05 \
  -Dmapreduce.map.speculative=false \
  -Dmapreduce.reduce.speculative=false \
  -Dmapreduce.job.jvm.numtasks=-1 \
  -Dmapreduce.job.reduces=84 \
  -Dmapreduce.task.io.sort.factor=100 \
  -Dmapreduce.map.output.compress=true \
  -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  -Dmapreduce.job.reduce.slowstart.completedmaps=0.4 \
  -Dmapreduce.reduce.merge.memtomem.enabled=false \
  -Dmapreduce.reduce.memory.totalbytes=12348030976 \
  -Dmapreduce.reduce.memory.mb=12288 \
  -Dmapreduce.reduce.java.opts="-Xms11776m -Xmx11776m -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:ParallelGCThreads=4" \
  -Dmapreduce.map.memory.mb=4096 \
  -Dmapreduce.map.java.opts="-Xmx1356m" \
  /terasort-input /terasort-output

yarn is not honouring yarn.nodemanager.resource.cpu-vcores

I am using Hadoop-2.4.0 and my system configs are 24 cores, 96 GB RAM.
I am using the following configs:
mapreduce.map.cpu.vcores=1
yarn.nodemanager.resource.cpu-vcores=10
yarn.scheduler.minimum-allocation-vcores=1
yarn.scheduler.maximum-allocation-vcores=4
yarn.app.mapreduce.am.resource.cpu-vcores=1
yarn.nodemanager.resource.memory-mb=88064
mapreduce.map.memory.mb=3072
mapreduce.map.java.opts=-Xmx2048m
Capacity Scheduler configs
queue.default.capacity=50
queue.default.maximum_capacity=100
yarn.scheduler.capacity.root.default.user-limit-factor=2
With the above configs, I expected YARN would not launch more than 10 mappers per node, but it is launching 28 mappers per node.
Am I doing something wrong?
YARN is running more containers than there are allocated cores because, by default, the DefaultResourceCalculator is used. It considers only memory:
public int computeAvailableContainers(Resource available, Resource required) {
  // Only consider memory
  return available.getMemory() / required.getMemory();
}
Use the DominantResourceCalculator instead; it uses both CPU and memory.
Set the below config in capacity-scheduler.xml:
yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
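For intuition, a simplified sketch of how a dominant-resource check caps the container count on both dimensions (an approximation, not the exact Hadoop source):

// Simplified approximation: a container must fit on both dimensions, so take the stricter one.
public int computeAvailableContainers(Resource available, Resource required) {
  return Math.min(available.getMemory() / required.getMemory(),
                  available.getVirtualCores() / required.getVirtualCores());
}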
More about DominantResourceCalculator
