Getting node utilization % in YARN (Hadoop 2.6.0) - hadoop

In a YARN 2.6.0 cluster, is there a way to get the CPU utilization of every connected node at the ResourceManager? Also, can the source code be modified so that we can choose the nodes for a MapReduce job based on that utilization? If yes, where would this change take place?

Please see the implementation of the container monitor (CPU utilization):
hadoop-2.6.0-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
This class has methods that check whether a container is over its limit; isProcessTreeOverLimit shows how YARN gets the memory usage of a given container (process).
hadoop-2.6.0-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
The above file shows how YARN obtains memory usage: by tracking process files in /proc.
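To make the mechanism concrete, here is a minimal standalone sketch (not Hadoop's code; the class name and sampling interval are invented for the example) that computes node-wide CPU utilization by diffing the aggregate counters in /proc/stat:
import java.nio.file.Files;
import java.nio.file.Paths;

public class NodeCpuSample {
    // Returns {idle, total} jiffies from the aggregate "cpu" line of /proc/stat
    private static long[] readCpuTimes() throws Exception {
        String line = Files.readAllLines(Paths.get("/proc/stat")).get(0);
        String[] f = line.trim().split("\\s+");
        long total = 0;
        for (int i = 1; i < f.length; i++) total += Long.parseLong(f[i]);
        long idle = Long.parseLong(f[4]);   // 4th counter is idle time
        return new long[]{idle, total};
    }

    public static void main(String[] args) throws Exception {
        long[] a = readCpuTimes();
        Thread.sleep(1000);                 // sampling interval (arbitrary)
        long[] b = readCpuTimes();
        double util = 1.0 - (double) (b[0] - a[0]) / (b[1] - a[1]);
        System.out.printf("Node CPU utilization: %.1f%%%n", util * 100);
    }
}
ProcfsBasedProcessTree applies roughly the same idea per container, reading the /proc/<pid>/stat entries of the container's process tree instead of the node-wide aggregate.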

Related

How to unzip large xml files into one HDFS directory

I have a requirement to load ZIP files from an HDFS directory, unzip them, and write them back to a single HDFS directory containing all the unzipped files. The files are XML, and their sizes run into the GBs.
My first approach was a MapReduce program with a custom InputFormat and custom RecordReader that unzip the files and feed their contents to the mappers; each mapper then processes its content and writes to HDFS using a MultiOutput format. The MapReduce job runs on YARN.
This approach works fine, and I am able to get the files in unzipped form in HDFS when the input size is in the MBs, but when the input size is in the GBs, the job fails to write and ends with the following error.
17/06/16 03:49:44 INFO mapreduce.Job:  map 94% reduce 0%
17/06/16 03:49:53 INFO mapreduce.Job:  map 100% reduce 0%
17/06/16 03:51:03 INFO mapreduce.Job: Task Id : attempt_1497463655394_61930_m_000001_2, Status : FAILED
Container [pid=28993,containerID=container_e50_1497463655394_61930_01_000048] is running beyond physical memory limits. Current usage: 2.6 GB of 2.5 GB physical memory used; 5.6 GB of 12.5 GB virtual memory used. Killing container.
It is apparent that each unzipped file is processed by one mapper, and the YARN child container running that mapper cannot hold the large file in memory.
On the other hand, I would like to try Spark on YARN to unzip the files and write them to a single HDFS directory, but I wonder whether in Spark, too, each executor has to process a single file.
I'm looking for a solution that processes the files in parallel but, in the end, writes them to a single directory.
Please let me know whether this is possible in Spark, and share some code snippets.
Any help appreciated.
Actually, the task itself is not failing! YARN is killing the container (inside which the map task is running) because that YARN child is using more memory than it requested from YARN. Even though you are planning to move to Spark, you can simply increase the memory given to the MapReduce tasks.
I would recommend that you:
Increase the YARN child memory, since you are handling GBs of data. Some key properties:
yarn.nodemanager.resource.memory-mb => Container Memory
yarn.scheduler.maximum-allocation-mb => Container Memory Maximum
mapreduce.map.memory.mb => Map Task Memory (must be less than yarn.scheduler.maximum-allocation-mb at any point during runtime)
Focus this job on data processing (unzipping) only, and invoke another job/command afterwards to merge the files; see the sketch below.
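For that separate merge step, one hedged option is FileUtil.copyMerge run from a small driver after the unzip job. Note that copyMerge concatenates everything under the source directory into a single HDFS file rather than keeping the files separate, and it is available in Hadoop 2.x but removed in 3.x; both paths below are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MergeStep {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Concatenate every file under the unzip job's output directory
        // into one HDFS file; both paths are hypothetical.
        FileUtil.copyMerge(fs, new Path("/data/unzipped-parts"),
                fs, new Path("/data/merged/all-parts.xml"),
                false /* keep the source files */, conf, null);
    }
}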

How to configure Hadoop parameters on Amazon EMR?

I run an MR job with one master and two slaves on Amazon EMR, but get lots of error messages like "running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 3.7 GB of 15 GB virtual memory used. Killing container" after map 100% reduce 35%.
I modified my code by adding the following lines to the Hadoop 2.6.0 MR configuration, but I still got the same error messages.
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "jobtest2");
//conf.set("mapreduce.input.fileinputformat.split.minsize","3073741824");
conf.set("mapreduce.map.memory.mb", "8192");
conf.set("mapreduce.map.java.opts", "-Xmx8192m");
conf.set("mapreduce.reduce.memory.mb", "8192");
conf.set("mapreduce.reduce.java.opts", "-Xmx8192m");
What is the correct way to configure these parameters (mapreduce.map.memory.mb, mapreduce.map.java.opts, mapreduce.reduce.memory.mb, mapreduce.reduce.java.opts) on Amazon EMR? Thank you!
Hadoop 2.x allows you to set the map and reduce settings per job, so you are setting them in the right place. The problem is that the Java -Xmx heap size must be less than mapreduce.map/reduce.memory.mb, which represents the total memory for heap plus off-heap usage. Take a look at the defaults as an example: http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-hadoop-task-config.html. If YARN was killing the containers for exceeding memory even with the default settings, then you need to give more memory to the off-heap portion, i.e. increase the gap between -Xmx and the total mapreduce.map/reduce.memory.mb.
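As a rough illustration of that relationship, here is a hedged sketch using the 8192 MB containers from the question and an assumed ~80% heap-to-container ratio (the exact ratio is a judgment call, not a fixed rule):
Configuration conf = new Configuration();
// Container sizes: the total memory YARN grants each task (values from the question)
conf.set("mapreduce.map.memory.mb", "8192");
conf.set("mapreduce.reduce.memory.mb", "8192");
// Heap sizes: keep -Xmx well below memory.mb so off-heap usage still fits in the
// container; ~80% of the container size is assumed here
conf.set("mapreduce.map.java.opts", "-Xmx6554m");
conf.set("mapreduce.reduce.java.opts", "-Xmx6554m");
// Set properties before creating the Job: Job.getInstance copies the Configuration,
// so changes made to conf afterwards do not reach the submitted job
Job job = Job.getInstance(conf, "jobtest2");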
Take a look at the documentation for the AWS CLI. There is a section on Hadoop and how to map to specific XML config files on EMR instance creation. I have found this to be the best approach available on EMR.

Hadoop EMR job runs out of memory before RecordReader initialized

I'm trying to figure out what could be causing my EMR job to run out of memory before it has even started processing my file inputs. I'm getting a
"java.lang.OutOfMemoryError cannot be cast to java.lang.Exception" error before my RecordReader is even initialized (i.e., before it has even tried to unzip the files and process them). I am running my job on a directory with a large number of inputs; I am able to run the same job just fine on a smaller input set. Does anyone have any ideas?
I realized that the answer is that there was too much metadata overhead on the master node. The master node must store roughly 150 KB of data for each file that will be processed. With millions of files, this adds up to gigabytes of data, which was too much and caused the master node to crash.
Here's a good source for more information: http://www.inquidia.com/news-and-info/working-small-files-hadoop-part-1#sthash.YOtxmQvh.dpuf

Adding new files to a running hadoop cluster

Consider that you have 10 GB of data that you want to process with a MapReduce program using Hadoop. Instead of copying all 10 GB to HDFS at the beginning and then running the program, I want to, for example, copy 1 GB, start the job, and gradually add the remaining 9 GB over time. I wonder whether this is possible in Hadoop.
Thanks,
Morteza
Unfortunately, this is not possible with MapReduce. When you initiate a MapReduce job, part of the setup process is determining the block locations of your input. If the input is only partially there, the setup process will only work on those blocks and won't dynamically add inputs.
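To make that concrete, here is a minimal driver sketch (the paths are hypothetical and the mapper/reducer are left as the defaults): the input splits are computed from whatever exists under the input path at submission time, so files copied in afterwards are never seen by the running job.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GradualInputDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "gradual-input");
        job.setJarByClass(GradualInputDriver.class);
        // Splits are computed once, from the files present under this path
        // when the job is submitted; later additions are ignored.
        FileInputFormat.addInputPath(job, new Path("/data/input"));
        FileOutputFormat.setOutputPath(job, new Path("/data/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}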
If you are looking for a stream processor, have a look at Apache Storm https://storm.apache.org/ or Apache Spark https://spark.apache.org/

Mesos & Hadoop: How to get the running job input data size?

I'm running Hadoop 1.2.1 on top of Mesos 0.14. My goal is to log the input data size, running time, CPU usage, memory usage, and so on for later optimization. All of these except the data size are obtained using Sigar.
Is there any way I can get the input data size of a running job?
For example, when I'm running the Hadoop terasort example, I need to get the size of the data generated by teragen before the job actually runs. If I'm running the wordcount example, I need to get the wordcount input file size. I need to obtain the data size automatically, since I won't know in advance which job will be run inside this framework.
I'm using Java to write some of the Mesos library code. Preferably, I want to get the data size inside the MesosExecutor class. For some reason, upgrading Hadoop/Mesos isn't an option.
Any suggestions or related API will be appreciated. Thank you.
Does hadoop fs -dus satisfy your requirement? Before submitting the job to Hadoop, calculate the input file size and pass it as a parameter to your executor.
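If you need that figure programmatically (for example, from the code that builds the executor's parameters), a minimal sketch could use FileSystem.getContentSummary, which reports the same aggregate length that hadoop fs -dus prints; the path handling here is illustrative:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class InputSize {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Aggregate size in bytes of everything under the given input path
        ContentSummary summary = fs.getContentSummary(new Path(args[0]));
        System.out.println("Input size (bytes): " + summary.getLength());
    }
}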
