I am planning for Spark Streaming on a multi-node cluster. What kind of health check scripts do I need on the Spark cluster? Can anyone provide a sample?
I'd like to check whether Spark is running well, whether any node goes down, etc.
Well, not all the things you want to do come in one piece.
For Spark jobs, you need to decide how you want to handle failures. This depends on your business requirements, for example: should the entire job fail for one bad row, or should the job continue and accumulate the bad records? This assumes that all worker nodes are healthy.
Depending on which distribution you use, you can manage node health through its tooling. The Cloudera distribution gives a lot of detail about node health and raises red signals regarding memory and other resources.
With Oozie or any workflow management tool, you can also configure email alerts for your jobs when they fail.
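As a concrete starting point, here is a minimal health-check sketch in Java. It assumes a standalone Spark master whose web UI is reachable on port 8080 and exposes the /json status endpoint; the host name, port, and alerting are placeholder assumptions you would adapt to your own cluster (a YARN-based cluster would instead be monitored through the ResourceManager and the distribution's tooling).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal Spark standalone health check: polls the master's /json endpoint and
// exits non-zero if the master is unreachable or any worker is reported as DEAD.
// Host and port are assumptions; wire the exit code into cron/Nagios/email alerts.
public class SparkHealthCheck {
    public static void main(String[] args) throws Exception {
        String masterUrl = args.length > 0 ? args[0] : "http://spark-master:8080/json";
        HttpURLConnection conn = (HttpURLConnection) new URL(masterUrl).openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);

        if (conn.getResponseCode() != 200) {
            System.err.println("Spark master not reachable: HTTP " + conn.getResponseCode());
            System.exit(2);
        }

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }

        // Rough check: the status payload lists each worker with its state. A real
        // script would parse the JSON (e.g. with Jackson) and also compare the
        // reported worker count against the expected number of nodes.
        if (body.toString().contains("DEAD")) {
            System.err.println("The master reports at least one DEAD worker");
            System.exit(1);
        }
        System.out.println("Spark master reachable; no DEAD workers reported");
    }
}

A check like this can run from cron every minute; anything that exits non-zero can trigger an email or pager alert.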
Related
I am running a wordcount topology in a Storm cluster composed of 2 nodes. One node is the Master node (with Nimbus, UI and Logviewer) and both of them are Supervisors with 1 Worker each. In other words, my Master node is also a Supervisor, and the second node is only a Supervisor. As I said, there is 1 Worker per Supervisor.
The topology I am using is configured to use these 2 Workers (setNumWorkers(2)). In detail, the topology has 1 Spout with 2 threads, 1 split Bolt and 1 count Bolt. When I deploy the topology with the default scheduler, the first Supervisor gets 1 Spout thread and the split Bolt, and the second Supervisor gets 1 Spout thread and the count Bolt.
Given this context, how can I control the placement of operators (Spout/Bolt) between these 2 Workers? For research purpose, I need to have some control over the placement of these operators between nodes. However, the mechanism seems to be transparent within Storm and such a control is not available for the end-user.
I hope my question is clear enough. Feel free to ask for additional details. I am aware that I may need to dig into Storm's source code and recompile; that's fine. I am looking for a starting point and advice on how to proceed.
The version of Storm I am using is 2.1.0.
Scheduling is handled by a pluggable scheduler in Storm. See the documentation at http://storm.apache.org/releases/2.1.0/Storm-Scheduler.html.
You may want to look at the DefaultScheduler for reference https://github.com/apache/storm/blob/v2.1.0/storm-server/src/main/java/org/apache/storm/scheduler/DefaultScheduler.java. This is the default scheduler used by Storm, and has a bit of handling for banning "bad" workers from the assignment, but otherwise largely just does round robin assignment.
If you don't want to implement a cluster-wide scheduler, you might be able to set your cluster to use the ResourceAwareScheduler and use a topology-level scheduling strategy instead. You would set this by calling config.setTopologyStrategy(YourStrategyHere.class) when you submit your topology. You will want to implement this interface https://github.com/apache/storm/blob/e909b3d604367e7c47c3bbf3ec8e7f6b672ff778/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/IStrategy.java#L43 and you can find an example implementation at https://github.com/apache/storm/blob/c427119f24bc0b14f81706ab4ad03404aa85aede/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/DefaultResourceAwareStrategy.java
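For illustration, a minimal submission sketch might look like the following. It uses the stock DefaultResourceAwareStrategy as the strategy class only so the example compiles; you would swap in your own IStrategy implementation, the cluster must already be running the ResourceAwareScheduler for the per-topology strategy to be consulted, and referencing the class here assumes storm-server is on the submitting classpath.

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy;
import org.apache.storm.topology.TopologyBuilder;

public class SubmitWithCustomStrategy {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... declare your spout, split bolt and count bolt here as usual ...

        Config conf = new Config();
        conf.setNumWorkers(2);
        // Replace DefaultResourceAwareStrategy with your own IStrategy implementation
        // to take control of operator placement for this topology.
        conf.setTopologyStrategy(DefaultResourceAwareStrategy.class);

        StormSubmitter.submitTopology("wordcount", conf, builder.createTopology());
    }
}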
Edit: If you implement either an IStrategy or IScheduler, they need to go in a jar that you put in storm/lib on the Nimbus machine. The strategy or scheduler needs to be on the classpath of the Nimbus process.
If I am using Oozie to run a MapReduce job, is there a specific number of mappers that will be started?
Is it:
one for Oozie and one for map-reduce job or
one for Oozie and one mapper for every 64 MB block (the default block size)
The above answers focus on how many maps and reduces a MapReduce job needs. However, as you specifically ask about Oozie, I will share my experience with MapReduce (via Pig) run through Oozie.
Explanation
When you kick off an Oozie workflow, you need 1 YARN application for the workflow itself. I am not sure what the exact logic is, but it appears that these applications usually require 1 map, and occasionally 2.
Besides the above, you still need the same number of mappers and reducers to do the actual work as if you did not use Oozie. (If you see a different number than you expected, this may be because you passed specific map or reduce properties when calling the script.)
Warning
The above means that if you have 100 available containers and kick off 100 workflows (for example, by starting a daily job with a start date 100 days in the past), the workflows are likely to take up all available containers, and the actual work is suspended indefinitely.
Short answer: Oozie launches a MapReduce job by submitting one map-only job to the cluster, called the Oozie launcher. I agree with @Dennis Jaheruddin.
Detailed answer, after my research: Oozie's execution model
Oozie’s execution model is different from the default approach users take to run Hadoop jobs. When a user invokes the Hadoop, Hive, or Pig CLI tool from a Hadoop edge node, the corresponding client executable runs on that node, which is configured to contact and submit jobs to the Hadoop cluster. When the same jobs are defined and submitted via an Oozie workflow action, things work differently.
Let’s say you are submitting a workflow job using the Oozie CLI on the edge node. The Oozie client actually submits the workflow to the Oozie server, which typically runs on a different node. Regardless of where it runs, it’s the Oozie server’s responsibility to submit and run the underlying MapReduce jobs on the Hadoop cluster. Oozie doesn’t do so by using the standard client tools installed locally on the Oozie server node. Instead, it first submits a MapReduce job called the “launcher job,” which in turn runs the Hadoop, Hive, or Pig job using the appropriate client APIs.
Important note: The Oozie launcher is basically a map-only job running a single mapper on the Hadoop cluster. This map job knows what to do for the specific action it is supposed to run and does the appropriate thing by using the libraries for Hadoop, Pig, etc. This will result in other MapReduce jobs being spun up as required. These Oozie jobs are called “asynchronous actions” in Oozie parlance. Oozie doesn’t run these actions in its own server, but kicks them off on the Hadoop cluster using a launcher job. The reason the Oozie server “outsources” the launcher to the Hadoop cluster is to protect itself from unexpected workloads and also to isolate user code from its own services. After all, Oozie has access to an awesome distributed system in the form of a Hadoop cluster.
Coming to MapReduce actions, you can set the number of map tasks, but there is no guarantee it will be honored; it depends on the input, as described below.
The number of maps is usually driven by the total size of the inputs, that is, the total number of blocks of the input files.
Setting the number of maps: a suggestion (actually based on input splits)
Setting the number of reducers: a demand
Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files, although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.
The number of mappers depends on the number of logical input splits; it does not depend directly on the number of blocks. You can control the number of input splits from your program.
Refer to https://hadoopi.wordpress.com/2013/05/27/understand-recordreader-inputsplit/ for more information about how input splits affect the number of mappers and how to create input splits.
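As a rough illustration of that point (a sketch of a plain MapReduce driver, not tied to Oozie specifically), the split size, and therefore the number of mappers, can be influenced like this, while the reducer count is honored exactly as requested:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SplitTuningDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-tuning-example");
        job.setJarByClass(SplitTuningDriver.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Splits (and therefore mappers) are derived from these bounds together with
        // the block size; this only influences the count, it does not fix it exactly.
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // 64 MB
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);  // 128 MB

        // Reducers, by contrast, run with exactly the number you request.
        job.setNumReduceTasks(4);

        // Mapper/reducer classes omitted; this sketch only shows the split-size knobs.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}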
I'm running a Hive query against a Hadoop cluster of 3 nodes, and I am getting an error that says "Too many fetch failures". My Hive query is:
insert overwrite table tablename1 partition(namep)
select id,name,substring(name,5,2) as namep from tablename2;
That's the query I'm trying to run. All I want to do is transfer data from tablename2 to tablename1. Any help is appreciated.
This can be caused by various Hadoop configuration issues. Here are a couple to look for in particular:
DNS issues: examine your /etc/hosts
Not enough HTTP threads on the mapper side to serve the reducers
Some suggested fixes (from Cloudera troubleshooting)
set mapred.reduce.slowstart.completed.maps = 0.80
tasktracker.http.threads = 80
mapred.reduce.parallel.copies = sqrt(node count), but in any case >= 10
Here is a link to the troubleshooting presentation, for more details:
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
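If you are tuning a hand-written MapReduce job rather than Hive, the per-job settings above can be applied on the job Configuration as sketched below (for the Hive query itself, the usual route is SET statements in the session or the cluster's mapred-site.xml). Note that tasktracker.http.threads is a TaskTracker daemon setting and belongs in mapred-site.xml on the worker nodes rather than in job code; the property names here are the older MRv1 names quoted above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class FetchFailureTuning {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Delay the reduce phase until 80% of the map tasks have completed.
        conf.setFloat("mapred.reduce.slowstart.completed.maps", 0.80f);
        // Parallel copies per reducer during the shuffle; the deck suggests
        // sqrt(node count), but at least 10.
        conf.setInt("mapred.reduce.parallel.copies", 10);

        Job job = Job.getInstance(conf, "fetch-failure-tuning-example");
        // ... configure input/output paths, mapper and reducer as usual, then submit ...
    }
}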
Update for 2020: Things have changed a lot and AWS mostly rules the roost. Here is some troubleshooting guidance for it:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-error-resource-1.html
Too many fetch-failures
The presence of "Too many fetch-failures" or "Error reading task output" error messages in step or task attempt logs indicates the running task is dependent on the output of another task. This often occurs when a reduce task is queued to execute and requires the output of one or more map tasks and the output is not yet available.
There are several reasons the output may not be available:
The prerequisite task is still processing. This is often a map task.
The data may be unavailable due to poor network connectivity if the data is located on a different instance.
If HDFS is used to retrieve the output, there may be an issue with HDFS.
The most common cause of this error is that the previous task is still processing. This is especially likely if the errors are occurring when the reduce tasks are first trying to run. You can check whether this is the case by reviewing the syslog log for the cluster step that is returning the error. If the syslog shows both map and reduce tasks making progress, this indicates that the reduce phase has started while there are map tasks that have not yet completed.
One thing to look for in the logs is a map progress percentage that goes to 100% and then drops back to a lower value. When the map percentage is at 100%, this does not mean that all map tasks are completed. It simply means that Hadoop is executing all the map tasks. If this value drops back below 100%, it means that a map task has failed and, depending on the configuration, Hadoop may try to reschedule the task. If the map percentage stays at 100% in the logs, look at the CloudWatch metrics, specifically RunningMapTasks, to check whether the map task is still processing. You can also find this information using the Hadoop web interface on the master node.
If you are seeing this issue, there are several things you can try:
Instruct the reduce phase to wait longer before starting. You can do this by altering the Hadoop configuration setting mapred.reduce.slowstart.completed.maps to a longer time. For more information, see Create Bootstrap Actions to Install Additional Software.
Match the reducer count to the total reducer capability of the cluster. You do this by adjusting the Hadoop configuration setting mapred.reduce.tasks for the job.
Use a combiner class to minimize the amount of output that needs to be fetched (see the combiner sketch after this list).
Check that there are no issues with the Amazon EC2 service that are affecting the network performance of the cluster. You can do this using the Service Health Dashboard.
Review the CPU and memory resources of the instances in your cluster to make sure that your data processing is not overwhelming the resources of your nodes. For more information, see Configure Cluster Hardware and Networking.
Check the version of the Amazon Machine Image (AMI) used in your Amazon EMR cluster. If the version is 2.3.0 through 2.4.4 inclusive, update to a later version. AMI versions in the specified range use a version of Jetty that may fail to deliver output from the map phase. The fetch error occurs when the reducers cannot obtain output from the map phase.
Jetty is an open-source HTTP server that is used for machine-to-machine communication within a Hadoop cluster.
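To illustrate the combiner suggestion in the list above (a generic word-count-style sketch, not something specific to the Hive query in this question, since Hive plans its own jobs): a sum reducer can double as the combiner because addition is associative and commutative, so map output is pre-aggregated locally and far less data has to be fetched by the reducers.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Pre-aggregates map output locally when registered as a combiner, so reducers
// have much less data to fetch across the network.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

In the driver this would be wired in with job.setCombinerClass(SumReducer.class) alongside job.setReducerClass(SumReducer.class); combiners are only safe when the reduce function is associative and commutative.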
I am trying to find out how many MASTER, CORE, and TASK instances are optimal for my jobs. I couldn't find any tutorial that explains how to figure this out.
How do I know if I need more than 1 core instance? What are the "symptoms" I would see in EMR's console metrics that would hint I need more than one core? So far, when I tried the same job with 1 core + 7 task instances, it ran pretty much like on 8 core instances, which doesn't make much sense to me. Or is it possible that my job is so CPU-bound that the I/O is negligible? (I have a map-only job that parses Apache log files into a CSV file.)
Is there such a thing as having more than 1 master instance? If yes, when is it needed? I wonder because my master node is pretty much just waiting for the other nodes to do the work (0% CPU) 95% of the time.
Can the master and a core node be identical? I can have a master-only cluster, where the one and only node does everything. It seems logical to be able to have a cluster with 1 node that is both master and core, with the rest as task nodes, but it seems impossible to set it up that way with EMR. Why is that?
The master instance acts as a manager and coordinates everything that goes in the whole cluster. As such, it has to exist in every job flow you run but just one instance is all you need. Unless you are deploying a single-node cluster (in which case the master instance is the only node running), it does not do any heavy lifting as far as actual MapReducing is concerned, so the instance does not have to be a powerful machine.
The number of core instances that you need really depends on the job and how fast you want to process it, so there is no single correct answer. A good thing is that you can resize the core/task instance group, so if you think your job is running slow, then you can add more instances to a running process.
One important difference between core and task instance groups is that the core instances store actual data on HDFS whereas task instances do not. In turn, you can only increase the core instance group (because removing running instances would lose the data on those instances). On the other hand, you can both increase and decrease the task instance group by adding or removing task instances.
So these two types of instances can be used to adjust the processing power of your job. Typically, you use on-demand instances for core instances because they must be running all the time and cannot be lost, and you use spot instances for task instances because losing task instances does not kill the entire job (e.g., the tasks not finished by task instances will be rerun on core instances). This is one way to run a large cluster cost-effectively by using spot instances.
The general description of each instance type is available here:
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/InstanceGroups.html
Also, this video may be useful for using EMR effectively:
https://www.youtube.com/watch?v=a5D_bs7E3uc
Does it have any measurable effect on resources whether I submit a bunch of hadoop jobs from different client servers or all from the same one? I would think not since all the work is done in the cluster. Is this correct?
The only thing that is resource-intensive on the client submitting to the Hadoop cluster is the calculation of the input splits. When the input data is huge, or when too many jobs are submitted from the same client, job submission might become a bit slow because of the input split calculations.
I am not able to recall the Hadoop release or the parameter, but a configurable parameter was included to move the calculation of the input splits from the client submitting a job to the Hadoop cluster.
It really shouldn't matter where you submit your jobs from. The client itself doesn't do much, it uses RPC protocol to contact the services, and then just sits idle until the job is finished.
Also, the most important factor is what kind of scheduler you use to allocate resources; that is probably going to make the most significant difference and decide which resources to allocate to which job. More on job scheduling here.
I don't think you can move the input split calculation into the JobTracker in the "classic" (MRv1) framework. In YARN, you can move it using
yarn.app.mapreduce.am.compute-splits-in-cluster
I am guessing the Hadoop developers didn't want to overload the JobTracker with input split creation, similar to the design decision of not assigning too much work to the NameNode in HDFS.
In YARN, every job gets its own Application Master, so no worries about overloading a SPOF/bottleneck master like job tracker.
In reference to the original question, the client job would have to reach out to the NameNode to get the block locations (I have seen parts of the code in the block storage class calling the DataNode for some metadata... not sure whether these calls happen during input split creation or on the TaskTracker node). This can become an issue if you are handling a lot of jobs on the same client node.
If you are using YARN, there would be a slight performance increase if all these communications happen inside the cluster.
Need to check how Oozie handles this issue.
Hopefully, this helps!
Arun