What is the right way to identify bottlenecks in map/reduce jobs?

In normal Java development, if I want to improve the performance of an application, my usual procedure is to run the program with a profiler attached, or alternatively to embed a collection of instrumentation marks within the application. In either case, the immediate goal is to identify the hot spots of the application, and subsequently to be able to measure the effect of the changes that I make.
What is the correct analog when the application is a map/reduce job running in a Hadoop cluster?
What options are available for collecting performance data when jobs appear to be running more slowly than you would predict from running equivalent logic in your development sandbox?

Map/Reduce Framework
Watch the job in the JobTracker web UI. There you will see how long the mappers and reducers take. A common example is doing too much work in the reducers: in that case you will notice that the mappers finish quite soon while the reducers take forever.
It might also be interesting to see whether all your mappers take a similar amount of time. Maybe the job is held up by a few slow tasks? This could indicate a hardware defect in the cluster (in which case speculative execution could be the answer), or a workload that is not distributed evenly enough.
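If you want to pull the same timings programmatically (say, to compare runs from a script), a rough sketch against the classic JobClient API could look like the following; the job ID passed on the command line is whatever the JobTracker shows for your job:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.TaskReport;

    public class TaskTimes {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        JobID id = JobID.forName(args[0]);   // e.g. "job_201301011234_0001"
        print("map", client.getMapTaskReports(id));
        print("reduce", client.getReduceTaskReports(id));
      }

      // Reports the slowest completed task in each phase.
      private static void print(String phase, TaskReport[] reports) {
        long max = 0;
        for (TaskReport r : reports) {
          max = Math.max(max, r.getFinishTime() - r.getStartTime());
        }
        System.out.println(phase + ": slowest of " + reports.length
            + " tasks took ~" + (max / 1000) + " s");
      }
    }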
The Operating System
Watch the nodes (either with something as simple as top, or with monitoring such as Munin or Ganglia) to see whether your job is CPU-bound or I/O-bound. If, for example, your reduce phase is I/O-bound, you can increase the number of reducers you use.
Something else you might detect here is tasks using too much memory. If the TaskTrackers do not have enough RAM, increasing the number of tasks per node might actually hurt performance. A monitoring system would highlight the resulting swapping.
The Single Tasks
You can run a mapper or reducer in isolation for profiling. In that case you can use all the tools you already know.
If you think the performance problem appears only when the job is executed on the cluster, you can measure the time of the relevant portions of the code with System.nanoTime() and use System.out to print some rough performance numbers.
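A minimal sketch of that approach, assuming a hypothetical tab-separated input and timing only the parsing code:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical mapper illustrating rough in-task timing with System.nanoTime().
    public class TimedMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      private long parseNanos = 0;   // time spent in the "relevant portion" of the code
      private long records = 0;

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        long start = System.nanoTime();
        String[] fields = value.toString().split("\t");   // the code under suspicion
        parseNanos += System.nanoTime() - start;
        records++;
        context.write(new Text(fields[0]), new LongWritable(1));
      }

      @Override
      protected void cleanup(Context context) throws IOException, InterruptedException {
        // Rough numbers end up in the task's stdout log, visible via the web UI.
        System.out.println("parsed " + records + " records in "
            + (parseNanos / 1_000_000) + " ms");
      }
    }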
Of course, there is also the option of adding JVM parameters to the child JVMs and connecting a profiler remotely.
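As a sketch of that option, assuming you can afford to run at most one task per node while profiling, you could open a JMX port in the child JVMs so a profiler such as VisualVM can attach (property names differ between MR1 and YARN):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class RemoteProfilingSetup {
      public static Job configure() throws Exception {
        Configuration conf = new Configuration();
        // Classic property name; on YARN the per-phase equivalents are
        // mapreduce.map.java.opts and mapreduce.reduce.java.opts.
        // A fixed JMX port only works if at most one task runs per node at a time.
        conf.set("mapred.child.java.opts",
            "-Dcom.sun.management.jmxremote"
            + " -Dcom.sun.management.jmxremote.port=8008"
            + " -Dcom.sun.management.jmxremote.authenticate=false"
            + " -Dcom.sun.management.jmxremote.ssl=false");
        return Job.getInstance(conf, "profiled-job");
      }
    }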

Related

How many reducers can simultaneously run?

I'm learning Big Data at uni and I'm kind of confused about the topic of MapReduce. I was wondering how many reducers can run simultaneously. For example, let's say we had 864 reducers: how many could run simultaneously?
All of them can run simultaneously, depending on the state of the cluster (its health, i.e. no rogue/bad nodes), its capacity, and how free it is. If other MR jobs are running on the same cluster, then out of your 864 reducers only a few will go into the running state, and once capacity frees up, another set of reducers will start running.
There is also a case that happens sometimes where your reducers/mappers keep preempting each other and take up all of the memory. The job fails in the majority of these cases. To avoid this, we generally configure a smaller number of reducers.
The one-line answer is: all of them can run simultaneously, since each reducer performs an independent unit of work in the MapReduce framework.
How many actually run in parallel, or more precisely when each of them is scheduled to run, depends on many factors, including but not limited to resource availability, the scheduling mechanism, and cluster configuration.
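As a concrete illustration with the Java API: requesting 864 reducers is just a job setting; the cluster decides how many run at once.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class ReducerCount {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "864-reducer-job");
        // Requests 864 reduce tasks in total; how many run at once is bounded
        // by free reduce slots (MR1) or free YARN containers (MR2), not by
        // this number. Equivalent configuration property: mapreduce.job.reduces
        job.setNumReduceTasks(864);
      }
    }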

Spark Jobs on Yarn | Performance Tuning & Optimization

What is the best way to optimize Spark jobs deployed on a YARN-based cluster?
I'm looking for changes based on configuration, not at the code level. My question is a classic design-level question: what approach should be used to optimize jobs developed with Spark Streaming or Spark SQL?
There is a myth that Big Data is magic and that your code will work like a dream once deployed to a Big Data cluster.
Every newbie has the same belief :) There is also a misconception that configurations copied from blogs will work fine for every problem.
There is no shortcut to optimizing or tuning jobs on Hadoop without understanding your cluster deeply.
But with the approach below, I'm certain you'll be able to optimize your job within a couple of hours.
I prefer to apply a purely scientific approach to optimizing jobs. The following steps can be followed as a baseline to start optimizing a job.
Understand the block size configured on the cluster.
Check the maximum memory limit available per container/executor.
Understand the vCores available on the cluster.
Optimize the rate of data, specifically in the case of Spark Streaming real-time jobs. (This is the trickiest part in Spark Streaming.)
Consider the GC settings while optimizing.
There is always room for optimization at the code level; that needs to be considered as well.
Control the block size optimally, based on the cluster configuration from step 1 and on the data rate. In Spark Streaming it can be calculated as batchInterval / blockInterval (see the sketch below).
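A rough baseline sketch of that last point, with assumed batch and block intervals (this applies to receiver-based streams, not the Kafka direct stream):

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class StreamingBaseline {
      public static void main(String[] args) {
        // Assumed starting point: 10 s batches and a 500 ms block interval give
        // batchInterval / blockInterval = 20 blocks (so ~20 tasks) per batch
        // per receiver.
        SparkConf conf = new SparkConf()
            .setAppName("tuning-baseline")
            .set("spark.streaming.blockInterval", "500ms");
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(10));
        // ... define the stream, then ssc.start() and ssc.awaitTermination()
      }
    }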
Now the most important steps come. The knowledge I'm sharing is more specific to real-time use cases like Spark Streaming and Spark SQL with Kafka.
First of all, you need to know at what number of messages/records per batch your job works best. After that you can cap the rate at that particular number and start configuration-based experiments to optimize the job. That is what I did below, and I was able to resolve a performance issue while keeping high throughput.
I read through some of the parameters in the Spark configuration documentation, checked their impact on my jobs, built a grid of candidate settings, and ran the same job in five different configuration versions. Within three experiments I was able to optimize my job; the green-highlighted values in that grid were the magic formula for my job's optimization.
Although the same parameters might be very helpful for similar use cases, these parameters obviously do not cover everything.
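As an example of capping the rate for a Kafka direct stream (the 5000 records/sec per partition is an assumed value you would find experimentally, as described above):

    import org.apache.spark.SparkConf;

    public class RateControlledConf {
      public static SparkConf build() {
        return new SparkConf()
            .setAppName("rate-controlled-stream")
            // let Spark adapt the ingestion rate to the observed processing delay
            .set("spark.streaming.backpressure.enabled", "true")
            // hard cap in records/sec per Kafka partition for the direct stream
            .set("spark.streaming.kafka.maxRatePerPartition", "5000");
      }
    }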
Assuming the application works, i.e. the memory configuration is taken care of and we have at least one successful run of the application, I usually look for underutilisation of executors and try to minimise it. Here are the common questions worth asking to find opportunities for improving utilisation of the cluster/executors:
How much of the work is done in the driver vs the executors? Note that while the main Spark application thread is busy in the driver, the executors are killing time.
Does your application have more tasks per stage than the number of cores? If not, those cores will not be doing anything during that stage (see the sketch after this list).
Are your tasks uniform, i.e. not skewed? Since Spark moves computation from stage to stage (except for some stages that can run in parallel), it is possible for most of your tasks to complete and yet the stage to still be running, because one skewed task is still held up.
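For the tasks-versus-cores question above, a small sketch (a hypothetical helper, not part of Sparklens) that compares an RDD's partition count with the cluster's default parallelism:

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class UtilisationCheck {
      // Prints how many tasks the next stage over this RDD will launch versus
      // the parallelism the cluster offers (roughly executors * cores).
      static void report(JavaSparkContext sc, JavaRDD<?> rdd) {
        int tasks = rdd.getNumPartitions();
        int cores = sc.defaultParallelism();
        System.out.println("tasks per stage = " + tasks + ", cores = " + cores);
        if (tasks < cores) {
          System.out.println("some cores will idle in this stage; "
              + "consider rdd.repartition(" + cores + ")");
        }
      }
    }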
Shameless plug (author): Sparklens https://github.com/qubole/sparklens can answer these questions for you, automatically.
Some things are not specific to the application itself. Say your application has to shuffle lots of data: pick machines with better disks and network. Partition your data to avoid full data scans. Use columnar formats like Parquet or ORC to avoid fetching data for columns you don't need all the time. The list is pretty long, and some problems are known but don't have good solutions yet.
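As a small illustration of the columnar-format point, assuming a hypothetical Parquet dataset at /data/events:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ColumnarRead {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("columnar-read").getOrCreate();
        // With Parquet, selecting only the columns you need means the rest
        // are never fetched from disk. Path and column names are placeholders.
        Dataset<Row> slim = spark.read().parquet("/data/events")
            .select("user_id", "event_time");
        System.out.println(slim.count());
      }
    }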

hadoop: tasks not local with file?

I ran a Hadoop job, and when I look at some map tasks I see they are not running where the file's blocks are. E.g., the map task runs on slave1, but the file's blocks (all of them) are on slave2. The files are all gzipped.
Why is that happening, and how do I resolve it?
UPDATE: note that there are many pending tasks, so this is not a case of a node being idle and therefore hosting tasks that read from other nodes.
Hadoop's default (FIFO) scheduler works like this: when a node has spare capacity, it contacts the master and asks for more work. The master tries to assign a data-local task, or a rack-local task, but if it can't, it will assign any task in the queue (of waiting tasks) to that node. However, while this node was being assigned this non-local task (we'll call it task X), it is possible that another node also had spare capacity and contacted the master asking for work. Even if this second node actually had a local copy of the data required by X, it will not be assigned that task, because the first node acquired the lock on the master slightly faster. This results in poor data locality, but FAST task assignment.
In contrast, the Fair Scheduler uses a technique called delayed scheduling that achieves higher locality by delaying non-local task assignment for a "little bit" (configurable). It achieves higher locality but at a small cost of delaying some tasks.
Other people are working on better schedulers, and this may likely be improved in the future. For now, you can choose to use the Fair Scheduler if you wish to achieve higher data locality.
I disagree with #donald-miner's conclusion that "With a default replication factor of 3, you don't see very many tasks that are not data local." He is correct in noting that more replicas improve your locality percentage, but the percentage of data-local tasks may still be very low. I've also run experiments myself and saw very low data locality with the FIFO scheduler. You can achieve high locality if your job is large (has many tasks), but the more common, smaller jobs suffer from a problem called "head-of-line scheduling". Quoting from this paper:
"The first locality problem occurs in small jobs (jobs that have small input files and hence have a small number of data blocks to read). The problem is that whenever a job reaches the head of the sorted list [...] (i.e. has the fewest running tasks), one of its tasks is launched on the next slot that becomes free, no matter which node this slot is on. If the head-of-line job is small, it is unlikely to have data on the node that is given to it. For example, a job with data on 10% of nodes will only achieve 10% locality."
That paper goes on to cite numbers from a production cluster at Facebook, which reported observing just 5% data locality in a large production environment.
Final note: should you care if you have low data locality? Not too much. The running time of your jobs may be dominated by stragglers (tasks that take longer to complete) and the shuffle phase, so improving data locality would only give a very modest improvement in running time (if any at all).
Unfortunately, the default scheduler isn't that smart. I'm not sure exactly what's going on, but I think it's using some sort of greedy-style scheduling where it tries to schedule what it can now for the next task and then moves on. There could definitely be improvements made to the Hadoop scheduler, and there have been a few academic attempts at making Hadoop scheduling more optimal.
This research paper shows that the default Hadoop scheduler is not optimal. In their results, they show that increasing the replication factor to three improves data locality significantly, with diminishing returns after that.
So, why hasn't the default scheduler been improved? Here is my opinion/theory: with a default replication factor of 3, you don't see very many tasks that are not data-local. Having more replicas gives the scheduler more flexibility to fit tasks in the right spots. Basically, it's a coincidence that you have 3 replicas, and the default scheduler takes advantage of that by being implemented in a lazy manner. Since you typically have 3 replicas for redundancy's sake already, there isn't much motivation to improve scheduler performance for people with a replication factor of 1.
If you have the space, I suggest just upping the replication factor to two or three. There really isn't much downside.
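If you want to do that for an existing input directory rather than cluster-wide, here is a sketch using the HDFS Java API (the path is a placeholder; hadoop fs -setrep does the same from the shell):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RaiseReplication {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Raise the replication factor of each file in the job's input directory.
        for (FileStatus f : fs.listStatus(new Path("/data/input"))) {
          if (f.isFile()) {
            fs.setReplication(f.getPath(), (short) 3);
          }
        }
      }
    }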

How to profile map reduce jobs on HBase

I have a MapReduce job which runs over an HBase table. It scans the HBase table after applying some scan filters and does some processing.
The job is taking a long time, definitely much more than expected, and it feels like the performance deterioration is exponential (i.e. the first 90% completes much faster than the rest, and after about 98% of the mappers complete it seems to get stuck in eternity, like the limbo in the movie Inception).
From a high level there should be no reason for this uneven performance, since each row in the scan is expected to behave similarly and the downstream service should have similar SLAs for every row of the HBase table.
How do I debug and profile this job? Are there any tools available out there which would help me meter the system and pinpoint the misbehaving component?
There are a couple of ways to monitor and debug jobs like this.
The first is to look at the logs for the RegionServers, DataNodes, and TaskTrackers and try to find any error messages. The JobTracker also contains a breakdown of performance per task; you can look to see if any tasks are failing or getting killed, along with messages as to why. That's the easiest, most straightforward place to start.
In my experience, slow MapReduce jobs with HBase indicate uneven key distribution across your regions. For TableInputFormats, the default split is one mapper per region; if one of the regions you are accessing contains a disproportionate number of rows, or if a particular RegionServer hosts several regions that are being read by several mappers, that could cause slowdowns on that machine because of disk contention or network I/O.
For debugging the RegionServers, you can take a look at JProfiler; it is mentioned in the HBase wiki as the profiler they use. I've never used it, but it does have a probe for HBase. Standard CPU load via uptime or top, and I/O wait from iostat, would also let you identify which machines are slowing things down.
If you don't want to run a profiling tool, you could monitor the RegionServer web UI and look to see if you have a lot of RPC requests queued up or taking a long time; this is available in an easily parseable JSON format. That would let you pinpoint slowdowns for the particular regions your job is processing.
Network I/O could also be a contributing factor. If you are running the MapReduce cluster separately from the HBase cluster, all of the data has to be shipped to the TaskTrackers, which may saturate your network. Standard network monitoring tools can be used here.
Another problem could simply be the Scanner itself: turning on block caching generally hurts performance during MR jobs in my experience, because of the high cache churn when rows are read only once. Also, filters attached to Scanners are applied server-side, so complex filtering may add latency.
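A sketch of those two Scanner points (the caching value of 500 is just an assumed starting point):

    import org.apache.hadoop.hbase.client.Scan;

    public class ScanSetup {
      public static Scan buildScan() {
        Scan scan = new Scan();
        // MR jobs usually read each row once, so skip the block cache
        scan.setCacheBlocks(false);
        // fetch more rows per RPC to cut round trips
        scan.setCaching(500);
        // the scan is then handed to TableMapReduceUtil.initTableMapperJob(...)
        return scan;
      }
    }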

Hadoop / AWS elastic map reduce performance

I am looking for a ballpark, if anyone has experience with this...
Does anyone have benchmarks on the speed of AWS's map reduce?
Let's say I have 100 million records and I am using Hadoop streaming (a PHP script) to map, group, and reduce (with some simple PHP calculations). The average group will contain 1-6 records.
Also, is it better/more cost-effective to run a bunch of small instances or larger ones? I realize the work is broken up into nodes within an instance, but regardless, will larger nodes have higher I/O, meaning they are faster per node per server (and more cost-efficient)?
Also, with streaming, how is the ratio of mappers vs reducers determined?
I don't know if you can give a meaningful benchmark -- it's kind of like asking how fast a computer program generally runs. It's not possible to say how fast your program will run without knowing anything about the script.
If you mean how fast the instances that power an EMR job are, they're the same spec as the underlying instances you specify from AWS.
If you want a very rough take on how EMR performs differently: I'd say you will probably run into an I/O bottleneck before a CPU bottleneck.
In theory this means you should run many small instances and ask for rack diversity, in order to grab more I/O resources from across more machines rather than have them compete. In practice I've found that fewer, higher-I/O instances can be more effective. But even this impression doesn't always hold; it really depends on how busy the zone is and where your jobs are scheduled.
