Hadoop Number of Reducers Configuration Options Priority

What are the priorities of the following 3 options for setting the number of reducers? In other words, if all three are set, which one will be taken into account?
Option1:
setNumReduceTasks(2) within the application code
Option2:
-D mapreduce.job.reduces=2 as a command-line argument
Option3:
through the $HADOOP_CONF_DIR/mapred-site.xml file:
<property>
<name>mapreduce.job.reduces</name>
<value>2</value>
</property>

According to Hadoop: The Definitive Guide:
The -D option is used to set the configuration property with key color to the value
yellow. Options specified with -D take priority over properties from the configuration
files. This is very useful because you can put defaults into configuration files and then
override them with the -D option as needed. A common example of this is setting the
number of reducers for a MapReduce job via -D mapred.reduce.tasks=n. This will
override the number of reducers set on the cluster or set in any client-side configuration
files.

You have them ranked in priority order: option 1 will override option 2, and option 2 will override option 3. In other words, Option 1 will be the one used by your job in this scenario.

First Priority: Passing configuration parameters on the command line (while submitting the MR application)
Second Priority: Setting configuration parameters in the application code
Third Priority: The default parameters read from configuration files such as core-site.xml, hdfs-site.xml and mapred-site.xml (along with non-XML files such as hadoop-env.sh and log4j.properties)
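
Since the two answers above rank the first two mechanisms differently, the simplest way to settle it on your own cluster is to print the effective value just before submission. Below is a minimal ToolRunner-based driver sketch (the class name is made up, and it uses the identity Mapper/Reducer so it is self-contained); run it with and without -D mapreduce.job.reduces, and with the setNumReduceTasks call commented in or out, to see which source wins:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ReducerPriorityDemo extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects mapred-site.xml plus any -D overrides,
        // because ToolRunner ran GenericOptionsParser before calling run().
        Job job = Job.getInstance(getConf(), "reducer-priority-demo");
        job.setJarByClass(ReducerPriorityDemo.class);

        // Option 1: set in application code. Comment this line out to see what
        // -D mapreduce.job.reduces or mapred-site.xml alone would give you.
        job.setNumReduceTasks(2);

        System.out.println("Effective number of reducers: " + job.getNumReduceTasks());

        // Identity map/reduce so the sketch runs as-is on any text input.
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new ReducerPriorityDemo(), args));
    }
}

For example, submit it as hadoop jar demo.jar ReducerPriorityDemo -D mapreduce.job.reduces=5 /in /out and compare the printed value with the number of reduce tasks the job actually ran.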

Related

Setting dfs.blocksize to 100Kb in Hadoop

I am trying to set dfs.blocksize in Hadoop to 100 KB, which is less than the default dfs.namenode.fs-limits.min-block-size of 1 MB.
When I copy the file like this:
hdfs dfs -Ddfs.namenode.fs-limits.min-block-size=0 -Ddfs.blocksize=102400 -copyFromLocal inp.txt /input/inp.txt
I still get:
copyFromLocal: Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 102400 < 1048576
I tried adding this property to hdfs-site.xml as well, but dfs.namenode.fs-limits.min-block-size does not seem to change.
How else would I change this property?
Try changing the value of the dfs.namenode.fs-limits.min-block-size property in the /etc/hadoop/conf/hdfs-site.xml file on the NameNode and restarting the NameNode. This limit is enforced on the NameNode side, and it may be a final property, so it cannot be overridden by a command-line setting on the client.
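
If you want to double-check what your client-side configuration resolves the limit to (the check itself happens on the NameNode, so the NameNode's own hdfs-site.xml is the one that matters), a small sketch like this can help; the config path is an assumption for a typical packaged install:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class MinBlockSizeCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // A plain Configuration only loads core-default.xml and core-site.xml,
        // so pull in the HDFS settings explicitly (path is an assumption).
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        long min = conf.getLong("dfs.namenode.fs-limits.min-block-size", 1048576L);
        System.out.println("dfs.namenode.fs-limits.min-block-size = " + min);
    }
}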

increasing the hadoop task JVM size

I want to set the task JVM size (for both map tasks and reduce tasks); this can be done using the property mapred.child.java.opts. But my concern is where I need to set it. Can I set it using the -D option while submitting the job, or do I need to set this property in each node's mapred-site.xml?
Thanks,
Priyaranjan
You can use
-Dmapred.child.java.opts='-Xmx1024m'
on the command line to set the task heap size to 1024 MiB.
Similarly in Java code of the job, you can set it as a configuration parameter:
conf.set("mapred.child.java.opts", "-Xmx1024m");
Like this:
hadoop jar your.jar package.MainClass -Dmapred.child.java.opts=blar some more args
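
One caveat: the -D form is only honoured if the job's main class passes its arguments through GenericOptionsParser, which ToolRunner does for you. Here is a small, hypothetical checker class (name made up) to confirm which value actually ends up in the configuration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ChildOptsCheck extends Configured implements Tool {
    @Override
    public int run(String[] args) {
        Configuration conf = getConf();
        // Programmatic fallback: only applied when the property was not set
        // by -D or by a loaded *-site.xml file.
        if (conf.get("mapred.child.java.opts") == null) {
            conf.set("mapred.child.java.opts", "-Xmx1024m");
        }
        System.out.println("mapred.child.java.opts = " + conf.get("mapred.child.java.opts"));
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new ChildOptsCheck(), args));
    }
}

Run it as hadoop jar my.jar ChildOptsCheck -Dmapred.child.java.opts='-Xmx2048m' and the printed value should reflect the override.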

Automatically set maximum number of map tasks per node to the number of cores?

I'm working on setting up a hadoop cluster where the nodes are all fairly heterogeneous, i.e. they each have a different number of cores. Currently I have to manually edit the mapred-site.xml on each node to fill in {cores}:
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>{cores}</value>
</property>
Is there an easier way to do this when I add new nodes? Most of the other values use some default, and the maximum number of map tasks is the only thing that changes from node to node.
If you're comfortable with some scripting, then the following will give you the number of 'processors' for each machine (which means different things on different architectures, but is more or less what you want):
cat /proc/cpuinfo | grep processor | wc -l
Then you can use sed or some equivalent to update your mapred-site.xml file according to the output of this.
So putting this all together:
CORES=`cat /proc/cpuinfo | grep processor | wc -l`
sed -i "s/{cores}/$CORES/g" mapred-site.xml
A footnote: you probably don't want to set both the maximum number of map tasks and the maximum number of reduce tasks to the number of cores; rather, split the cores between the two task types, and keep a core spare for the DataNode, TaskTracker, etc.
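
If your node setup is driven from Java rather than a shell script, the same substitution can be sketched there too; the template path and the two-core reserve below are assumptions, not anything prescribed by Hadoop:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FillCores {
    public static void main(String[] args) throws Exception {
        Path template = Paths.get("mapred-site.xml"); // assumed location of the template
        int cores = Runtime.getRuntime().availableProcessors();
        // Keep a couple of cores free for the DataNode/TaskTracker daemons,
        // as suggested in the footnote above.
        int mapSlots = Math.max(1, cores - 2);
        String xml = new String(Files.readAllBytes(template), StandardCharsets.UTF_8)
                .replace("{cores}", Integer.toString(mapSlots));
        Files.write(template, xml.getBytes(StandardCharsets.UTF_8));
    }
}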

Can we set the multiple generic arguments with -D option in GenericOptionsParser?

I want to pass multiple configuration parameters to my Hadoop job through GenericOptionsParser.
With "-D abc=xyz" I can pass one argument and able to retrieve the same from the configuration object but I am not able to pass the multiple argument.
Is it possible to pass multiple argument?If yes how?
I passed the parameters as -D color=yellow -D number=10
and had the following code in the run() method:
String color = getConf().get("color");
System.out.println("color = " + color);
String number = getConf().get("number");
System.out.println("number = " + number);
The following was the output on the console:
color = yellow
number = 10
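
For completeness, here is a self-contained sketch that uses GenericOptionsParser directly rather than going through ToolRunner (the class name is made up); it just echoes the two properties back so you can confirm that repeating -D works:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class GenericOptsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The parser strips the generic options (-D, -files, -libjars, ...) and
        // applies them to conf; everything else is returned as remaining args.
        String[] remaining = new GenericOptionsParser(conf, args).getRemainingArgs();
        System.out.println("color  = " + conf.get("color"));
        System.out.println("number = " + conf.get("number"));
        System.out.println("remaining args = " + remaining.length);
    }
}

Run it as, for example, hadoop jar my.jar GenericOptsDemo -D color=yellow -D number=10 and both values should be printed.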
I recently ran into this issue after upgrading from Hadoop 1.2.1 to Hadoop 2.4.1. The problem is that Hadoop's dependency on commons-cli 1.2 was being omitted due to a conflict with commons-cli 1.1 that was pulled in from Cassandra 2.0.5.
After a quick look through the source it looks like commons-cli options that have an uninitialized number of values (what Hadoop's GenericOptionsParser does) default to a limit of 1 in version 1.1 and no limit in 1.2.
I hope that helps!
I tested passing multiple parameters and I used the -D flag multiple times.
$HADOOP_HOME/bin/hadoop jar /path/to/my.jar -D mapred.heartbeats.in.second=80 -D mapred.map.max.attempts=2 ...
Doing this changed the values to what I specified in the Job's configuration.

Change block size of dfs file

My map phase is currently inefficient when parsing one particular set of files (2 TB in total). I'd like to change the block size of these files in the Hadoop DFS from 64 MB to 128 MB. I can't find how to do this in the documentation for only one set of files rather than the entire cluster.
Which command changes the block size when I upload (for example, when copying from local to the DFS)?
For me, I had to slightly change Bkkbrad's answer to get it to work with my setup, in case anyone else finds this question later on. I've got Hadoop 0.20 running on Ubuntu 10.10:
hadoop fs -D dfs.block.size=134217728 -put local_name remote_location
The setting for me is not fs.local.block.size but rather dfs.block.size
I've changed my answer! You just need to set the fs.local.block.size configuration setting appropriately when you use the command line.
hadoop fs -D fs.local.block.size=134217728 -put local_name remote_location
Original Answer
You can programmatically specify the block size when you create a file with the Hadoop API. Unfortunately, you can't do this on the command line with the hadoop fs -put command. To do what you want, you'll have to write your own code to copy the local file to a remote location: it's not hard, just open a FileInputStream for the local file, create the remote OutputStream with FileSystem.create, and then use something like IOUtils.copy from Apache Commons IO to copy between the two streams.
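
A sketch of that approach, here using Hadoop's own IOUtils.copyBytes in place of Commons IO (the paths, replication fallback, block size and buffer size are illustrative):

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class PutWithBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dst = new Path("/data/bigfile.txt");            // assumed destination
        long blockSize = 128L * 1024 * 1024;                 // 128 MB for this file only
        short replication = (short) conf.getInt("dfs.replication", 3);
        try (InputStream in = new FileInputStream("bigfile.txt");  // assumed local source
             FSDataOutputStream out = fs.create(dst, true, 4096, replication, blockSize)) {
            IOUtils.copyBytes(in, out, 4096, false);
        }
    }
}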
In the conf/ folder we can change the value of dfs.block.size in the hdfs-site.xml configuration file.
In Hadoop 1.x the default block size is 64 MB; in Hadoop 2.x it is 128 MB.
<property>
<name>dfs.block.size</name>
<value>134217728</value>
<description>Block size</description>
</property>
you can also modify your block size in your programs like this:
Configuration conf = new Configuration();
conf.setLong("dfs.block.size", 128 * 1024 * 1024);
We can change the block size using the property named dfs.block.size in the hdfs-site.xml file.
Note:
We should specify the size in bytes.
For example:
134217728 bytes = 128 * 1024 * 1024 bytes = 128 MB.
