I run some batch jobs with data inputs that are constantly changing, and I'm having problems provisioning capacity. I am using Whirr to do the initial setup, but once I start, for example, 5 machines, I don't know how to add new machines to the cluster while it's running. I don't know in advance how complex or how large the data will be, so I was wondering if there is a way to add new machines to a cluster and have it take effect right away (or with some delay, but without having to bring down the cluster and bring it back up with the new nodes).
There is an exact explanation of how to add a node:
http://wiki.apache.org/hadoop/FAQ#I_have_a_new_node_I_want_to_add_to_a_running_Hadoop_cluster.3B_how_do_I_start_services_on_just_one_node.3F
At the same time, I am not sure that already-running jobs will take advantage of these nodes, since the planning of where to run each task happens at job start time (as far as I understand).
I also think that it is more practical to run Task Trackers only on these transient nodes.
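On such a transient node you would then start only the TaskTracker daemon, for example:
$ hadoop-daemon.sh start tasktracker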
Check the files referred to by the parameters below:
dfs.hosts => dfs.include
dfs.hosts.exclude
mapreduce.jobtracker.hosts.filename => mapred.include
mapreduce.jobtracker.hosts.exclude.filename
You can add the list of hosts to the files dfs.include and mapred.include and then run
hadoop mradmin -refreshNodes ;
hadoop dfsadmin -refreshNodes ;
That's all.
BTW, the 'mradmin -refreshNodes' facility was added in 0.21.
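Putting it together, a minimal sketch of adding a node this way (the hostname and conf paths are illustrative assumptions):
# On the master, whitelist the new node and tell both daemons to re-read the lists
$ echo "newnode.example.com" >> /etc/hadoop/conf/dfs.include
$ echo "newnode.example.com" >> /etc/hadoop/conf/mapred.include
$ hadoop dfsadmin -refreshNodes
$ hadoop mradmin -refreshNodes
# Then on the new node itself
$ hadoop-daemon.sh start datanode
$ hadoop-daemon.sh start tasktracker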
Nikhil
I am trying to find out how many MASTER, CORE, TASK instances are optimal to my jobs. I couldn't find any tutorial that explains how do I figure it out.
How do I know if I need more than 1 core instance? What are the "symptoms" I would see in the metrics in EMR's console that would hint I need more than one core? So far, when I tried the same job with 1 core + 7 task instances, it ran pretty much the same as on 8 core instances, which doesn't make much sense to me. Or is it possible that my job is so CPU bound that the IO is negligible? (I have a map-only job that parses Apache log files into a CSV file.)
Is there such a thing as having more than 1 master instance? If yes, when is it needed? I wonder, because my master node is pretty much just waiting for the other nodes to do the work (0% CPU) 95% of the time.
Can the master and the core node be identical? I can have a master-only cluster, where the one and only node does everything. It seems logical to be able to have a cluster with 1 node that is both the master and the core, and the rest are task nodes, but it seems to be impossible to set it up that way with EMR. Why is that?
The master instance acts as a manager and coordinates everything that goes in the whole cluster. As such, it has to exist in every job flow you run but just one instance is all you need. Unless you are deploying a single-node cluster (in which case the master instance is the only node running), it does not do any heavy lifting as far as actual MapReducing is concerned, so the instance does not have to be a powerful machine.
The number of core instances that you need really depends on the job and how fast you want to process it, so there is no single correct answer. A good thing is that you can resize the core/task instance group, so if you think your job is running slow, then you can add more instances to a running process.
One important difference between core and task instance groups is that core instances store actual data on HDFS whereas task instances do not. As a result, you can only increase the core instance group (removing running instances would lose the data stored on them). On the other hand, you can both increase and decrease the task instance group by adding or removing task instances.
So these two types of instances can be used to adjust the processing power of your job. Typically, you use on-demand instances for core instances because they must be running all the time and cannot be lost, and you use spot instances for task instances because losing task instances does not kill the entire job (e.g., tasks not finished by task instances will be rerun on core instances). This is one way to run a large cluster cost-effectively using spot instances.
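For example, with the AWS CLI you can look up the instance-group IDs and then grow a running task group (the cluster and group IDs here are placeholders):
$ aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX
$ aws emr modify-instance-groups --instance-groups InstanceGroupId=ig-XXXXXXXXXXXXX,InstanceCount=20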
The general description of each instance type is available here:
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/InstanceGroups.html
Also, this video may be useful for using EMR effectively:
https://www.youtube.com/watch?v=a5D_bs7E3uc
I'm running Hadoop 1.1.2 on a cluster with 10+ machines. I would like to scale up and down nicely, both for HDFS and MapReduce. By "nicely", I mean that data must not be lost (HDFS nodes must be allowed to decommission), and nodes running a task must finish before shutting down.
I've noticed that the datanode process dies once decommissioning is done, which is good. This is what I do to remove a node:
Add node to mapred.exclude
Add node to hdfs.exclude
$ hadoop mradmin -refreshNodes
$ hadoop dfsadmin -refreshNodes
$ hadoop-daemon.sh stop tasktracker
To add the node back in (assuming it was removed as above), this is what I'm doing:
Remove from mapred.exclude
Remove from hdfs.exclude
$ hadoop mradmin -refreshNodes
$ hadoop dfsadmin -refreshNodes
$ hadoop-daemon.sh start tasktracker
$ hadoop-daemon.sh start datanode
Is this the correct way to scale up and down "nicely"? When scaling down, I'm noticing that job duration rises sharply for certain unlucky jobs (since the tasks they had running on the removed node need to be rescheduled).
If you have not set up the dfs exclude file before, follow steps 1-3. Otherwise, start from step 4.
1. Shut down the NameNode.
2. Set dfs.hosts.exclude to point to an empty exclude file.
3. Restart the NameNode.
4. In the dfs exclude file, specify the nodes using the full hostname, IP, or IP:port format.
5. Do the same in mapred.exclude.
6. Execute bin/hadoop dfsadmin -refreshNodes. This forces the NameNode to reread the exclude file and start the decommissioning process.
7. Execute bin/hadoop mradmin -refreshNodes.
8. Monitor the NameNode and JobTracker web UIs and confirm the decommission process is in progress. It can take a few seconds to update. Messages like "Decommission complete for node XXXX.XXXX.X.XX:XXXXX" will appear in the NameNode log files when it finishes decommissioning, at which point you can remove the nodes from the cluster.
9. When the process has completed, the NameNode UI will list the datanode as decommissioned. The JobTracker page will show the updated number of active nodes. Run bin/hadoop dfsadmin -report to verify. Stop the datanode and tasktracker processes on the excluded node(s).
If you do not plan to reintroduce the machine to the cluster, remove it from the include and exclude files.
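Putting steps 4-9 together, the whole flow looks roughly like this (the hostname and file paths are illustrative assumptions):
# On the NameNode/JobTracker host
$ echo "node5.example.com" >> /etc/hadoop/conf/dfs.exclude
$ echo "node5.example.com" >> /etc/hadoop/conf/mapred.exclude
$ bin/hadoop dfsadmin -refreshNodes
$ bin/hadoop mradmin -refreshNodes
# Repeat until the node is reported as "Decommissioned"
$ bin/hadoop dfsadmin -report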
To add a node as a datanode and tasktracker, see the Hadoop FAQ page.
EDIT: When a live node is to be removed from the cluster, what happens to the job?
The jobs running on a node being decommissioned get affected, as the tasks of the job scheduled on that node are marked as KILLED_UNCLEAN (for map and reduce tasks) or KILLED (for job setup and cleanup tasks). See line 4633 in JobTracker.java for details. The job will be informed to fail that task. Most of the time, the JobTracker will reschedule execution. However, after many repeated failures it may instead decide to allow the entire job to fail or succeed. See line 2957 onwards in JobInProgress.java.
You should be aware that for Hadoop to perform well, it really wants to have the data available in multiple copies. By removing nodes, you reduce the chances of the data being optimally available, and you put extra stress on the cluster to ensure availability.
That is, by taking down a node, you force an extra copy of all its data to be made somewhere else. So you shouldn't really be doing this just for fun, unless you use a different data management paradigm than the default configuration (= keep 3 copies in the cluster).
And for a Hadoop cluster to perform well, you will want to actually store the data in the cluster. Otherwise, you can't really move the computation to the data, because the data isn't there yet either. Much about Hadoop is about having "smart drives" that can perform computation before sending the data across the network.
So in order to make this reasonable, you will likely need to somehow split your cluster: have one set of nodes keep the 3 master copies of the original data, and have some "add-on" nodes that are only used for storing intermediate data and performing computations on that part. Never change the master nodes, so they don't need to redistribute your data. Shut down add-on nodes only when they are empty? But that probably is not yet implemented.
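For context, the "3 copies" default is the dfs.replication setting; you can inspect or adjust replication per path on existing data, for example:
$ hadoop fs -setrep -w 3 /data   # -w waits until /data is actually back at 3 replicas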
While decommissioning is in progress, temporary or staging files get cleaned up automatically. These files are then missing, and Hadoop does not recognize how they went missing. So the decommissioning process keeps waiting until that is resolved, even though the actual decommissioning is done for all the other files.
In the Hadoop GUI, if you notice that the "Number of Under-Replicated Blocks" metric is not decreasing over time or is almost constant, then this is likely the reason.
So list the files using the command below:
hadoop fsck / -files -blocks -racks
If you see that those files are temporary and not required, then delete those files or folders.
Example: hadoop fs -rmr /var/local/hadoop/hadoop/.staging/* (give the correct path here)
This should solve the problem immediately. Decommissioned nodes will move to Dead Nodes within 5 minutes.
Does it have any measurable effect on resources whether I submit a bunch of hadoop jobs from different client servers or all from the same one? I would think not since all the work is done in the cluster. Is this correct?
The only thing that is resource-intensive on the client submitting to the Hadoop cluster is the calculation of the input splits. When the input data is huge, or when too many jobs are submitted from the same client, the job submission might become a bit slow because of the input split calculations.
I cannot recall the Hadoop release or the parameter, but a configurable parameter was introduced to move the calculation of the input splits from the client submitting the job into the Hadoop cluster.
It really shouldn't matter where you submit your jobs from. The client itself doesn't do much; it uses the RPC protocol to contact the services, and then just sits idle until the job is finished.
Also, what matters most is the kind of scheduler you use to allocate resources; it is probably going to make the most significant difference and decides which resources to allocate to which job. More on job scheduling here.
I don't think you can move the input split calculation into the JobTracker in the 'Classic' version. In YARN, you can move it using
"yarn.app.mapreduce.am.compute-splits-in-cluster"
I am guessing the Hadoop people didn't want to overload the JobTracker with input split creation, similar to the design decision of not assigning too much work to the NameNode in HDFS.
In YARN, every job gets its own Application Master, so no worries about overloading a SPOF/bottleneck master like job tracker.
In reference to the original question, the client job would have to reach out to the NameNode to get the block locations (I have seen parts of the code in the block storage class calling the data node for some metadata... not sure whether these calls happen during input split creation or on the task tracker node). This can become an issue if you are handling a lot of jobs on the same client node.
If you are using YARN, there would be a slight performance increase if all these communications happen inside the cluster.
Need to check how Oozie handles this issue.
Hopefully, this helps!
Arun
I have a fully-distributed Hadoop cluster with 4 nodes. When I submit my job to the JobTracker, which decides that 12 map tasks are appropriate for my job, something strange happens. The 12 map tasks always run on a single node instead of on the entire cluster. Before asking the question, I already did the things below:
Try a different job
Run start-balancer.sh to rebalance the cluster
But it does not work, so I hope someone can tell me why and how to fix it.
If all the blocks of the input data files are on that node, the scheduler will prioritize the same node.
Apparently the source data files are on one data node now. It couldn't be the balancer's fault. From what I can see, your HDFS must have a replication factor of one, or you are not on a fully-distributed Hadoop cluster.
Check how your input is being split. You may have only one input split, meaning that only one node will be used to process the data. You can test this by adding more input files to your system and placing them on different nodes, then checking which nodes are doing the work.
If that doesn't work, check to make sure that your cluster is configured correctly. Specifically, check that your name node has paths to your other nodes set in its slaves file, and that each slave node has your name node set in its masters file.
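A quick way to check both things (the conf paths assume a typical layout):
# Should list every worker node
$ cat /etc/hadoop/conf/slaves
# Shows how many blocks/splits the input has and on which datanodes they live
$ hadoop fsck /path/to/input -files -blocks -locations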
I'm running an Amazon EMR cluster that has M core instances and N task instances.
My jobs run multiple times per day and are time sensitive so I am keeping the M core instances up and running 24/7 so that I don't have data transfer overhead to/from S3.
The N task nodes are being dynamically launched and terminated as needed.
The M core nodes are c1.mediums and the N task nodes are m2.xlarge.
Is there a way to configure mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum per instance?
For the core nodes I want:
mapred.tasktracker.map.tasks.maximum=2
mapred.tasktracker.reduce.tasks.maximum=1
For the task nodes I want at least:
mapred.tasktracker.map.tasks.maximum=2
mapred.tasktracker.reduce.tasks.maximum=2
Note that task trackers run on the core nodes as well, so I think this configuration will need to be on a per-instance basis depending on the instance size.
Is this possible? And if so how can I set up this type of configuration?
There is a great blog post here that gives you the answer:
http://blog.earlh.com/index.php/2013/05/modifying-the-number-of-mappers-or-reducers-on-a-running-emr-cluster/
Note, though, that you might have to play around a bit with sshing into your task nodes; it will not work just like that.
I would get my pem file onto a local directory, chmod 400 that pem file, and then do "scp -l hadoop -i .pem" and the rest of it, as mentioned in the blog.
Mind you I have not tried this yet but I believe it will work.
Also - the .versions... stuff may not be needed. You will probably just need conf.
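An alternative that avoids sshing in after launch is a bootstrap action that picks slot counts by node role at startup. A rough sketch, assuming the classic EMR AMI layout (instance metadata in /mnt/var/lib/info/instance.json, Hadoop config in /home/hadoop/conf); the field names and paths are assumptions, so check them against your AMI:
#!/bin/bash
# Hypothetical bootstrap-action sketch: choose tasktracker slot counts by node role.
set -e
INFO=/mnt/var/lib/info/instance.json     # assumed metadata file on classic EMR AMIs
CONF=/home/hadoop/conf/mapred-site.xml   # assumed config location
# The "instanceRole" field name is an assumption; verify it in your instance.json.
if grep -q '"instanceRole":"Core"' "$INFO"; then
    MAPS=2; REDUCES=1    # c1.medium core nodes
else
    MAPS=2; REDUCES=2    # m2.xlarge task nodes (also hits the master, where it is moot)
fi
# Append both properties just before the closing tag of mapred-site.xml.
sed -i "s|</configuration>|<property><name>mapred.tasktracker.map.tasks.maximum</name><value>$MAPS</value></property><property><name>mapred.tasktracker.reduce.tasks.maximum</name><value>$REDUCES</value></property></configuration>|" "$CONF"
Since bootstrap actions run on every node before the daemons start, this gives you per-instance-group values without touching a running cluster.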
Thanks