I am a newbie to Hadoop. I have successfully configured a Hadoop setup in pseudo-distributed mode. Now I would like to know the logic behind choosing the number of map and reduce tasks. What do we refer to?
Thanks
You cannot generalize how the number of mappers/reducers should be set.
Number of Mappers:
You cannot set the number of mappers explicitly to a certain number (there are parameters for this, but they do not take effect). It is decided by the number of input splits created by Hadoop for your given set of input. You can control this by setting the mapred.min.split.size parameter. For more, read the InputSplit section here. If a lot of mappers are being generated due to a huge number of small files and you want to reduce the number of mappers, then you will need to combine data from more than one file. Read this: How to combine input files to get to a single mapper and control number of mappers.
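As a rough illustration of that knob, a driver using the old mapred API could raise the lower bound on split size so that fewer, larger splits (and hence fewer mappers) are produced. The 256 MB figure and the MyDriver class name are only placeholders, not recommendations:

    // Larger splits mean fewer map tasks for the same input.
    JobConf conf = new JobConf(MyDriver.class);                 // MyDriver is a placeholder
    conf.setLong("mapred.min.split.size", 256L * 1024 * 1024);  // raise lower bound to 256 MB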
To quote from the wiki page:
The number of maps is usually driven by the number of DFS blocks in
the input files. Although that causes people to adjust their DFS block
size to adjust the number of maps. The right level of parallelism for
maps seems to be around 10-100 maps/node, although we have taken it up
to 300 or so for very cpu-light map tasks. Task setup takes awhile, so
it is best if the maps take at least a minute to execute.
Actually controlling the number of maps is subtle. The
mapred.map.tasks parameter is just a hint to the InputFormat for the
number of maps. The default InputFormat behavior is to split the total
number of bytes into the right number of fragments. However, in the
default case the DFS block size of the input files is treated as an
upper bound for input splits. A lower bound on the split size can be
set via mapred.min.split.size. Thus, if you expect 10TB of input data
and have 128MB DFS blocks, you'll end up with 82k maps, unless your
mapred.map.tasks is even larger. Ultimately the InputFormat determines
the number of maps.
The number of map tasks can also be increased manually using the
JobConf's conf.setNumMapTasks(int num). This can be used to increase
the number of map tasks, but will not set the number below that which
Hadoop determines via splitting the input data.
Number of Reducers:
You can explicitly set the number of reducers. Just set the parameter mapred.reduce.tasks. There are guidelines for setting this number, but usually the default number of reducers should be good enough. At times a single report file is required; in those cases you might want to set the number of reducers to 1.
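For example, a driver that needs a single output file could look roughly like the sketch below (old mapred API; the class name and argument handling are placeholders, and the mapper/reducer classes are omitted):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SingleReducerDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(SingleReducerDriver.class);
            // Equivalent to mapred.reduce.tasks=1: all map output is sent to a
            // single reducer, producing one part file in the output directory.
            conf.setNumReduceTasks(1);
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            // ... conf.setMapperClass(...), conf.setReducerClass(...), output key/value classes ...
            JobClient.runJob(conf);
        }
    }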
Again, to quote from the wiki:
The right number of reduces seems to be 0.95 or 1.75 * (nodes *
mapred.tasktracker.tasks.maximum). At 0.95 all of the reduces can
launch immediately and start transferring map outputs as the maps
finish. At 1.75 the faster nodes will finish their first round of
reduces and launch a second round of reduces doing a much better job
of load balancing.
Currently the number of reduces is limited to roughly 1000 by the
buffer size for the output files (io.buffer.size * 2 * numReduces <<
heapSize). This will be fixed at some point, but until it is it
provides a pretty firm upper bound.
The number of reduces also controls the number of output files in the
output directory, but usually that is not important because the next
map/reduce step will split them into even smaller splits for the maps.
The number of reduce tasks can also be increased in the same way as
the map tasks, via JobConf's conf.setNumReduceTasks(int num).
Actually, the number of mappers is primarily governed by the number of InputSplits created by the InputFormat you are using, and the number of reducers by the number of partitions you get after the map phase. Having said that, you should also keep in mind the number of slots available per slave, along with the available memory. As a rule of thumb, you could use this approach:
Take the number of virtual CPUs * 0.75 and that's the number of slots you can configure. For example, if you have 12 physical cores (or 24 virtual cores), you would have 24 * 0.75 = 18 slots. Now, based on your requirements, you can choose how many mappers and reducers you want to use. With 18 MR slots, you could have 9 mappers and 9 reducers, or 12 mappers and 6 reducers, or whatever split works for you.
HTH
The default value for the number of reducers is 1. The Partitioner makes sure that the same keys from multiple mappers go to the same reducer, but that does not mean the number of reducers will be equal to the number of partitions. From the driver, one can specify the number of reducers using JobConf's conf.setNumReduceTasks(int num) or as mapred.reduce.tasks on the command line. If only the mappers are required, then we can set this to 0.
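For illustration, the two equivalent forms look roughly like this (MyDriver and myjob.jar are placeholder names; the command-line form assumes the driver uses ToolRunner/GenericOptionsParser so that -D properties are picked up):

    // In the driver (old API): a map-only job, mapper output is written straight to HDFS.
    conf.setNumReduceTasks(0);

    // Or on the command line:
    // hadoop jar myjob.jar MyDriver -Dmapred.reduce.tasks=0 <input> <output>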
I have read the following regarding setting the number of reducers:
The number of reducers can be between 0.95 and 1.75 multiplied by (no. of nodes) * (no. of maximum containers per node). I also read in the link below that increasing the number of reducers can have overhead:
Number of reducers in hadoop
Also, I read in the link below that the number of reducers is best set to the number of reduce slots in the cluster (minus a few to allow for failures):
What determines the number of mappers/reducers to use given a specified set of data
Based on the range specified in 1 and the advice in 2, how do I decide on the optimal number for the fastest processing?
Thanks.
I want to know the approach for this in general.
This question can only have an empirical answer. Quoting from an answer in this Q&A:
By default, on 1 GB of data, one reducer would be used.
[...]
Similarly, if your data is 10 GB, 10 reducers would be used.
The defaults are already the rule of thumb. You can further tweak the default number by running empirical tests and seeing how performance changes. That is, currently, all there is to it.
I was learning Hadoop, and I found the number of reducers very confusing:
1) The number of reducers is the same as the number of partitions.
2) The number of reducers is 0.95 or 1.75 multiplied by (no. of nodes) * (no. of maximum containers per node).
3) The number of reducers is set by mapred.reduce.tasks.
4) The number of reducers is closest to: a multiple of the block size, a task time between 5 and 15 minutes, and whatever creates the fewest files possible.
I am very confused. Do we explicitly set the number of reducers, or is it done by the MapReduce program itself?
How is the number of reducers calculated? Please tell me how to calculate the number of reducers.
1 - The number of reducers is the same as the number of partitions - False. A single reducer might work on one or more partitions, but a given partition will be processed entirely by the reducer it is started on.
2 - That is just a theoretical maximum number of reducers you can configure for a Hadoop cluster. It is also very much dependent on the kind of data you are processing (which decides how much heavy lifting the reducers are burdened with).
3 - The mapred-site.xml configuration is just a suggestion to YARN. Internally, the ResourceManager has its own algorithm running, optimizing things on the go, so that value is not really the number of reducer tasks running every time.
4 - This one seems a bit unrealistic. My block size might be 128 MB, and I can't always have 128*5 as the minimum number of reducers. That, again, is false, I believe.
There is no fixed number of reducer tasks that can be configured or calculated. It depends on how many resources are actually available to allocate at that moment.
The number of reducers is internally calculated from the size of the data we are processing, if you don't explicitly specify it using the below API in the driver program:
job.setNumReduceTasks(x)
By default, on 1 GB of data, one reducer would be used.
So if you are working with less than 1 GB of data and you are not specifically setting the number of reducers, 1 reducer would be used.
Similarly, if your data is 10 GB, 10 reducers would be used.
You can change this configuration as well, so that instead of 1 GB you specify a bigger or smaller size.
The property in Hive for setting the size of data per reducer is:
hive.exec.reducers.bytes.per.reducer
You can view this property by firing the set command in the Hive CLI.
The Partitioner only decides which data goes to which reducer.
Your job may or may not need reducers; it depends on what you are trying to do. When there are multiple reducers, the map tasks partition their output, each creating one partition for each reduce task. There can be many keys (and their associated values) in each partition, but the records for any given key are all in a single partition. One rule of thumb is to aim for reducers that each run for five minutes or so and produce at least one HDFS block's worth of output. With too many reducers you end up with lots of small files.
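To make the partitioning step concrete, here is a minimal sketch of a custom Partitioner against the new mapreduce API; the key/value types and class name are illustrative assumptions, and the hashing scheme simply mirrors what the default HashPartitioner does:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Routes each map output record to one of numPartitions reduce tasks;
    // all records that share a key land in the same partition, and hence
    // are processed by the same reducer.
    public class KeyHashPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            // Same idea as the default HashPartitioner: hash the key and map it
            // into the range [0, numPartitions).
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

The number of reduce tasks set in the driver determines numPartitions; the Partitioner only decides which of those partitions each record goes to.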
The Partitioner makes sure that the same keys from multiple mappers go to the same reducer. This doesn't mean that the number of partitions is equal to the number of reducers. However, you can specify the number of reduce tasks in the driver program using the job instance, like job.setNumReduceTasks(2). If you don't specify the number of reduce tasks in the driver program, then it is picked up from mapred.reduce.tasks, which has a default value of 1 (https://hadoop.apache.org/docs/r1.0.4/mapred-default.html), i.e. all mapper output will go to the same reducer.
Also, note that the programmer does not have control over the number of mappers, as it depends on the input splits, whereas the programmer can control the number of reducers for any job.
In Hadoop, if we have not set the number of reducers, then how many reducers will be created?
The number of mappers, for example, is dependent on (total data size)/(input split size).
E.g., if the data size is 1 TB and the input split size is 100 MB, then the number of mappers will be (1000*1000)/100 = 10000 (ten thousand).
On which factors is the number of reducers dependent? How many reducers are created for a job?
How Many Reduces? (from the official documentation)
The right number of reduces seems to be 0.95 or 1.75 multiplied by
(no. of nodes) * (no. of maximum containers per node).
With 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75 the faster nodes will finish their first round of reduces and launch a second wave of reduces doing a much better job of load balancing.
Increasing the number of reduces increases the framework overhead, but increases load balancing and lowers the cost of failures.
The scaling factors above are slightly less than whole numbers to reserve a few reduce slots in the framework for speculative-tasks and failed tasks.
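As a hypothetical worked example of that formula: on a cluster with 10 nodes and 8 reduce containers per node, the 0.95 factor gives 0.95 * 10 * 8 = 76 reducers (a single wave that starts as soon as map output is available), while the 1.75 factor gives 1.75 * 10 * 8 = 140 reducers (roughly two waves, which balances load better across fast and slow nodes).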
This article covers the mapper count too.
How Many Maps?
The number of maps is usually driven by the total size of the inputs, that is, the total number of blocks of the input files.
The right level of parallelism for maps seems to be around 10-100 maps per-node, although it has been set up to 300 maps for very cpu-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.
Thus, if you expect 10TB of input data and have a blocksize of 128MB, you’ll end up with 82,000 maps, unless Configuration.set(MRJobConfig.NUM_MAPS, int) (which only provides a hint to the framework) is used to set it even higher.
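To show what that hint looks like in code, here is a minimal sketch using the new API; the value 100000 and the job name are placeholders, and remember this is only a hint, not a hard setting:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.MRJobConfig;

    public class MapCountHint {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Only a hint: the InputFormat's splits still decide the real map count,
            // and the hint cannot push the count below the number of splits.
            conf.setInt(MRJobConfig.NUM_MAPS, 100000);
            Job job = Job.getInstance(conf, "map-count-hint");
            // ... set input/output formats, paths, mapper class, etc. ...
        }
    }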
If you want to change the default value of 1 for the number of reducers, you can set the below property (from Hadoop 2.x onwards) as a command-line parameter:
mapreduce.job.reduces
OR
you can set it programmatically with
job.setNumReduceTasks(integer_number);
Have a look at one more related SE question: What is Ideal number of reducers on Hadoop?
By default, the number of reducers is set to 1.
You can change it by adding the parameter
mapred.reduce.tasks on the command line, in the driver code, or in the conf file that you pass.
e.g., command-line argument: bin/hadoop jar ... -Dmapred.reduce.tasks=<num reduce tasks>
or, in the driver code: conf.setNumReduceTasks(int num);
Recommended read:
https://wiki.apache.org/hadoop/HowManyMapsAndReduces
I explicitly specify the number of mappers within my Java program using conf.setNumMapTasks(), but when the job ends, the counter shows that the number of launched map tasks was more than the specified value. How can I limit the number of mappers to the specified value?
According to the Hadoop API, JobConf.setNumMapTasks is just a hint to the Hadoop runtime. The total number of map tasks equals the number of blocks in the input data to be processed.
It should, however, be possible to configure the number of map/reduce slots per node by using mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in mapred-site.xml. This way it's possible to configure the total number of mappers/reducers executing in parallel across the entire cluster.
Using conf.setNumMapTasks(int num), the number of mappers can be increased but cannot be reduced.
You cannot explicitly set the number of mappers to a number that is less than the number of mappers calculated by Hadoop. It is decided by the number of input splits created by Hadoop for your given set of input. You may control this by setting the mapred.min.split.size parameter.
To quote from the wiki page:
The number of maps is usually driven by the number of DFS blocks in
the input files. Although that causes people to adjust their DFS block
size to adjust the number of maps. The right level of parallelism for
maps seems to be around 10-100 maps/node, although we have taken it up
to 300 or so for very cpu-light map tasks. Task setup takes awhile, so
it is best if the maps take at least a minute to execute.
Actually controlling the number of maps is subtle. The
mapred.map.tasks parameter is just a hint to the InputFormat for the
number of maps. The default InputFormat behavior is to split the total
number of bytes into the right number of fragments. However, in the
default case the DFS block size of the input files is treated as an
upper bound for input splits. A lower bound on the split size can be
set via mapred.min.split.size. Thus, if you expect 10TB of input data
and have 128MB DFS blocks, you'll end up with 82k maps, unless your
mapred.map.tasks is even larger. Ultimately the InputFormat determines
the number of maps.
The number of map tasks can also be increased manually using the
JobConf's conf.setNumMapTasks(int num). This can be used to increase
the number of map tasks, but will not set the number below that which
Hadoop determines via splitting the input data.
Quoting the javadoc of JobConf#setNumMapTasks():
Note: This is only a hint to the framework. The actual number of
spawned map tasks depends on the number of InputSplits generated by
the job's InputFormat.getSplits(JobConf, int). A custom InputFormat is
typically used to accurately control the number of map tasks for the
job.
Hadoop also relaunches failed or long-running map tasks in order to provide high availability.
You can limit the number of map tasks running concurrently on a single node. You could also limit the number of launched tasks, provided that you have big input files: write your own InputFormat class which is not splittable. Hadoop will then run one map task for every input file that you have.
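A minimal sketch of that idea, assuming text input and the new mapreduce API (the class name is illustrative):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    // One split, and therefore one map task, per input file,
    // regardless of how many HDFS blocks the file spans.
    public class NonSplittableTextInputFormat extends TextInputFormat {
        @Override
        protected boolean isSplitable(JobContext context, Path file) {
            return false;
        }
    }

You would then register it in the driver with job.setInputFormatClass(NonSplittableTextInputFormat.class).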
According to [Partitioning your job into maps and reduces], the following applies:
The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
You can also learn more about InputFormat.
If I don't specify the number of mappers, how would the number be determined? Is there a default setting read from a configuration file (such as mapred-site.xml)?
Adding to what Chris said above:
The number of maps is usually driven by the number of DFS blocks in the input files, although that causes people to adjust their DFS block size to adjust the number of maps.
The right level of parallelism for maps seems to be around 10-100 maps per node, although this can go up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.
You can increase the number of map tasks by using JobConf's conf.setNumMapTasks(int num). Note: this can increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
Finally, controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits.
A lower bound on the split size can be set via mapred.min.split.size.
Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
Read more: http://wiki.apache.org/hadoop/HowManyMapsAndReduces
It depends on a number of factors:
Input format and particular configuration properties for the format
For file-based input formats (TextInputFormat, SequenceFileInputFormat, etc.):
Number of input files / paths
Whether the files are splittable (typically compressed files are not; SequenceFiles are an exception to this)
Block size of the files
There are probably more, but hopefully you get the idea.
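For file-based formats, those factors combine into a split size roughly as follows. This is a standalone sketch that mirrors the logic of Hadoop's FileInputFormat.computeSplitSize in the new API; the sizes used are just example values:

    public class SplitSizeDemo {
        // splitSize = max(minSize, min(maxSize, blockSize))
        static long computeSplitSize(long blockSize, long minSize, long maxSize) {
            return Math.max(minSize, Math.min(maxSize, blockSize));
        }

        public static void main(String[] args) {
            long blockSize = 128L * 1024 * 1024; // 128 MB DFS block (example)
            long minSize = 1L;                   // e.g. default minimum split size
            long maxSize = Long.MAX_VALUE;       // e.g. no maximum configured
            long splitSize = computeSplitSize(blockSize, minSize, maxSize);
            // With these defaults splitSize equals the block size, so a splittable
            // 10 GB file would yield roughly 10 GB / 128 MB = 80 map tasks.
            System.out.println("split size = " + splitSize + " bytes");
        }
    }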