Difference between local and yarn in Hadoop

I have been trying to install Hadoop on a single node following the instructions written here. There are two sets of instructions, one for running a MapReduce job locally, and another for YARN.
What is the difference between running a MapReduce job locally and running it on YARN?

If you use local, the map and reduce tasks run in the same JVM; this mode is usually used when you want to debug the code. If you use yarn, the ResourceManager introduced in MRv2 comes into play, and the mappers and reducers run on different nodes, or in different JVMs within the same node if you are in pseudo-distributed mode.
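To see the difference in practice, you can override mapreduce.framework.name on the command line and run the same example job both ways. This is only an illustrative sketch: the jar path and version below are taken from the example elsewhere on this page and should be adjusted to your installation, and the -D flag is only honored because the example program goes through ToolRunner/GenericOptionsParser.
# Run the pi example in-process via the LocalJobRunner
hadoop jar ${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi -D mapreduce.framework.name=local 2 100
# Submit the same job to the YARN ResourceManager instead
hadoop jar ${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi -D mapreduce.framework.name=yarn 2 100
In the local case the whole job runs inside the client JVM, so it never appears in the ResourceManager UI; in the yarn case it shows up there as an application.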

Related

Understanding mapreduce.framework.name wrt Hadoop

I am learning Hadoop and came to know that there are two versions of the framework, viz. Hadoop 1 and Hadoop 2.
If my understanding is correct, in Hadoop 1 the execution environment is based on two daemons, viz. the TaskTracker and the JobTracker, whereas in Hadoop 2 (aka YARN) the execution environment is based on "new daemons", viz. the ResourceManager, NodeManager, and ApplicationMaster.
Please correct me if this is not correct.
I came to know of the following configuration parameter:
mapreduce.framework.name
Possible values it can take: local, classic, yarn.
I don't understand what they actually mean; for example, if I install Hadoop 2, how can it have the old execution environment (with the TaskTracker and JobTracker)?
Can anyone explain what these values mean?
yarn stands for MR version 2.
classic is for MR version 1.
local is for local runs of MR jobs.
MRv1 and MRv2 differ only in how resources are managed and how a job is executed. The current Hadoop release is capable of both (and even of a lightweight local mode). When you set the value to yarn, you are simply instructing the framework to use the YARN way of executing the job. Similarly, when you set it to local, you are telling the framework that there is no cluster for execution and everything runs within a single JVM. It is not a different infrastructure for the MRv1 and MRv2 frameworks; only the way the job is executed changes.
JobTracker, TaskTracker, etc. are just daemon processes, which are spawned when needed and killed afterwards.
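As a quick check (a sketch, not part of the original answer), you can ask the client configuration which execution framework is currently in effect:
# Prints the value of mapreduce.framework.name from the loaded client configuration
hdfs getconf -confKey mapreduce.framework.name
If it prints yarn, jobs go to the ResourceManager; if it prints local, they run inside your client JVM.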
MRv1 uses the JobTracker to create and assign tasks to data nodes. This was found to be too inefficient when dealing with large clusters, which led to YARN.
MRv2 (aka YARN, "Yet Another Resource Negotiator") has a ResourceManager for each cluster, and each data node runs a NodeManager. For each job, one slave node will act as the ApplicationMaster, monitoring resources, tasks, etc.
Local mode is provided to simulate and debug MR applications within a single machine/JVM.
EDIT: Based on comments
jps (Java Virtual Machine Process Status) is a JVM tool which, according to the official page:
The jps tool lists the instrumented HotSpot Java Virtual Machines
(JVMs) on the target system. The tool is limited to reporting
information on JVMs for which it has the access permissions.
So:
jps is not a big data tool but a Java tool that reports on JVMs; it does not divulge any information about the processes running within a JVM.
It only lists the JVMs it has access to, which means there may still be JVMs that remain undetected.
Keeping the above points in mind, it is expected that the jps command emits different results depending on the Hadoop deployment mode:
Local (or Standalone) mode: there are no daemons and everything runs in a single JVM.
Pseudo-distributed mode: each daemon (NameNode, DataNode, etc.) runs in its own JVM on a single host.
Distributed mode: each daemon runs in its own JVM across a cluster of hosts.
Hence the processes may or may not run in the same JVM, and the jps output will differ accordingly.
In distributed mode the MRv2 framework runs in its default mode, i.e. YARN; hence you see the YARN-specific daemons running:
Namenode
Datanode
ResourceManager
NodeManager
Apache Hadoop 1.x (MRv1) consists of the following daemons:
Namenode
Datanode
Jobtracker
Tasktracker
Note that the NameNode and DataNode are common to both, because they are HDFS-specific daemons, while the other two in each list are MRv1- and YARN-specific respectively.
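For illustration only (the PIDs and the exact daemon set depend on your deployment), jps on a pseudo-distributed YARN node typically looks something like this:
jps
2101 NameNode
2245 DataNode
2480 SecondaryNameNode
2733 ResourceManager
2891 NodeManager
3050 Jps
In local (standalone) mode the same command would show only the Jps process itself plus whatever client JVM is currently running your job.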

One worker with two executor vs One worker per executor in spark

I am using spark-1.6 with the standalone resource manager in client mode. Now that running multiple executors per worker is supported in Spark, can anyone tell me the pros and cons of each approach, and which one should be preferred in a production environment?
Moreover, when Spark comes with pre-built binaries for hadoop-2.x, why do we need to set up another Hadoop cluster to run it in YARN mode? What is the point of packaging those jars with Spark? And what is the point of using YARN when the flexibility of multiple executors per worker is available in standalone mode?
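For context, in standalone mode the executor layout is controlled through spark-submit options; one common way to get several executors per worker is to set --executor-cores below the number of cores a worker offers. The master URL, class, and jar below are placeholders, so treat this as a sketch rather than a recommended production setup:
# Placeholder application; with 8-core workers this allows up to 4 executors per worker
spark-submit --master spark://master:7077 \
  --class com.example.MyApp \
  --executor-cores 2 \
  --executor-memory 2g \
  --total-executor-cores 16 \
  myapp.jar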

Determine whether slave nodes in a Hadoop cluster have been assigned tasks

I'm new to Hadoop and MapReduce. I just deployed a Hadoop cluster with one master machine and 32 slave machines. However, when I start to run an example program, it seems to run very slowly. How can I determine whether a map/reduce task has really been assigned to a slave node for execution?
The example program is executed like that:
hadoop jar ${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 32 100
Okay, there are lots of possibilities there. Hadoop exists to help with distributed tasks.
So if your code is written in such a way that everything is sequentially dependent, there is no use for 32 slaves; rather, you will just pay the overhead of managing the connections.
Check hadoopMasterIp:50070 to see whether all the datanodes (slaves) are running or not, assuming you did not change dfs.http.address in your configuration.
The easiest way is to take a look at the YARN Web UI. By default it uses port 8088 on your master node (replace master in the URI with your own IP address or hostname):
http://master:8088/cluster
There you can see total resources of your cluster and list of all applications. For every application you can find out how many mappers/reducers were used and where (on what machine) they were executed.
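If you prefer the command line over the web UI, the yarn client offers roughly the same information (the exact output format varies a little between versions):
# Nodes registered with the ResourceManager and the number of containers running on each
yarn node -list
# Applications known to the ResourceManager, including finished ones
yarn application -list -appStates ALL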

Hadoop on Mesos uses only one node?

I have successfully set up a Mesos 0.22.1 cluster on 5 nodes. I can run Marathon and Chronos tasks on all slave nodes. Now I'm trying to run Hadoop jobs using the Mesos scheduler. I followed a very good tutorial and could run the wordcount test job. But when I try to run a larger job (loading data from Kafka to HDFS using Camus), the job runs without errors but uses only one node with one TaskTracker, even though it has 30 map tasks in total and my nodes are configured to run 2 map tasks in parallel.
What am I missing? Shouldn't the JobTracker split the work to run in parallel on all available nodes, using 2 map slots on each node?
And what is strange: on the JobTracker web page the cluster summary reports only 1 available node. Is that correct behavior?
Any ideas are greatly appreciated!

Hadoop YARN - performance of LocalJobRunner vs. cluster deployed job

I'm doing some tests with M/R jobs running on a 2-node Hadoop 2.2.0 cluster. One thing I would like to understand is the performance considerations of running the job in local mode (not managed by the ResourceManager) versus running it on YARN. The tests I made show it runs much faster when the job is executed via the LocalJobRunner than when it is managed by YARN. When setting up the cluster I followed the steps described here: http://raseshmori.wordpress.com/2012/10/14/install-hadoop-nextgen-yarn-multi-node-cluster/ . Perhaps there is some configuration the guide forgot to mention?
Thanks!
You'd run the LocalJobRunner for tests and small examples. You'd use the cluster when you need to process amounts of data that justify using Hadoop in the first place (a.k.a. "Big Data").
When you run a small example, the overhead of running things distributed overwhelms the benefits of parallelization.
Arnon is right. I found in one of my use cases that running with the LocalJobRunner is much faster than using YARN. The LocalJobRunner runs the map processes in-process on the local machine; jobs are not submitted to the cluster, so map tasks are not scheduled across multiple machines. The LocalJobRunner should therefore be used for unit testing the code. That's it. For all other practical purposes, use YARN.
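A hedged sketch of what such a test run could look like; the jar, driver class, and paths below are placeholders, and the -D overrides are only picked up if the driver goes through ToolRunner/GenericOptionsParser:
# Force the in-process LocalJobRunner and the local filesystem for a quick test
hadoop jar myjob.jar com.example.MyDriver \
  -D mapreduce.framework.name=local \
  -D fs.defaultFS=file:/// \
  input/ output/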
