Number of partitions in RDD and performance in Spark

In PySpark, I can create an RDD from a list and decide how many partitions it should have:
sc = SparkContext()
sc.parallelize(xrange(0, 10), 4)
How does the number of partitions I choose for my RDD influence performance?
And how does this depend on the number of cores my machine has?

The primary effects come from specifying either too few partitions or far too many partitions.
Too few partitions: you will not utilize all of the cores available in the cluster.
Too many partitions: there will be excessive overhead in managing many small tasks.
Between the two, the first is far more damaging to performance. Scheduling too many small tasks has a relatively minor impact for partition counts below about 1000; if you have on the order of tens of thousands of partitions, then Spark gets very slow.
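To make the trade-off concrete, here is a minimal PySpark sketch (the local master, data sizes and partition counts are illustrative, not taken from the question):
from pyspark import SparkContext
sc = SparkContext("local[4]", "partition-demo")   # assumed 4-core local master
rdd_few = sc.parallelize(range(1000000), 2)       # 2 tasks: at most 2 cores busy
rdd_many = sc.parallelize(range(1000000), 50000)  # 50,000 tiny tasks: scheduling overhead
print(rdd_few.getNumPartitions())    # 2
print(rdd_many.getNumPartitions())   # 50000
print(sc.defaultParallelism)         # 4 on this local[4] setup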

To add to javadba's excellent answer, I recall that the docs recommend setting the number of partitions to 3 or 4 times the number of CPU cores in your cluster so that the work gets distributed more evenly among the available cores. If you have only 1 partition per CPU core, you will have to wait for the single longest-running task to complete; if you break the work down further, the workload gets balanced more evenly, with fast- and slow-running tasks evening out.
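In code, that guideline might look roughly like this sketch (the 3x multiplier is just the rule of thumb above; sc.defaultParallelism reflects the cores Spark currently knows about):
from pyspark import SparkContext
sc = SparkContext()
# Rule of thumb from the answer above: 3-4 partitions per available core.
target_partitions = sc.defaultParallelism * 3
rdd = sc.parallelize(range(0, 1000000), target_partitions)
print(rdd.getNumPartitions())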

The number of partitions has a high impact on the performance of Spark code.
Ideally, the partitioning determines how much data gets shuffled per task. As a rule, base the count on your shuffle size (shuffle read/write) and aim for roughly 128 to 256 MB of data per partition to get maximum performance.
You can set the shuffle partition count in your Spark SQL code via the property:
spark.sql.shuffle.partitions
or, when working with any particular DataFrame, with:
df.repartition(numOfPartitions)
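For example, both knobs could be set from PySpark roughly as below (a sketch assuming a Spark 2.x-style SparkSession; on 1.x the same property can be set on the SQLContext, and the DataFrame and values here are placeholders):
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("shuffle-partitions-demo").getOrCreate()
# Partitions used for shuffles in Spark SQL (joins, aggregations); 200 is the default.
spark.conf.set("spark.sql.shuffle.partitions", "200")
df = spark.range(0, 1000000)      # placeholder DataFrame
df = df.repartition(64)           # explicit repartitioning of one specific DataFrame
print(df.rdd.getNumPartitions())  # 64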

Related

Spark partition on nodes foreachpartition

I have a Spark cluster (Dataproc) with a master and 4 workers (2 preemptible), and in my code I have something like this:
JavaRDD<MyData> rdd_data = javaSparkContext.parallelize(myArray);
rdd_data.foreachPartition(partitionOfRecords -> {
    while (partitionOfRecords.hasNext()) {
        MyData d = partitionOfRecords.next();
        LOG.info("my data: " + d.getId().toString());
    }
});
myArray is composed of 1200 MyData objects.
I don't understand why Spark uses only 2 cores and divides my array into 2 partitions instead of using all 16 cores.
Do I need to set the number of partitions myself?
Thanks in advance for any help.
Generally it's always a good idea to specify the number of partitions as the second argument to parallelize, since the optimal slicing of your dataset should really be independent of the particular shape of the cluster you're using; Spark can at best use the current size of the executor pool as a "hint".
What you're seeing here is that Spark defaults to asking the taskScheduler for the current number of executor cores to use as defaultParallelism, combined with the fact that Dataproc enables Spark dynamic allocation. Dynamic allocation is important because otherwise a single job submitted to a cluster might request the maximum number of executors even while sitting idle, and would then prevent other jobs from using those idle resources.
So on Dataproc, if you're using the default n1-standard-4 machine type, Dataproc configures 2 executors per machine and gives each executor 2 cores. The value of spark.dynamicAllocation.minExecutors should be 1, so your job, upon startup and before doing any work, sits on 1 executor with 2 cores. The taskScheduler then reports that 2 cores are reserved in total, and therefore defaultParallelism is 2.
If you had a large cluster and had already been running a job for a while (say, a map phase that runs for longer than 60 seconds), you'd expect dynamic allocation to have taken all available resources, so the next stage of the job that uses defaultParallelism would then presumably see 16, the total number of cores on your cluster (or possibly 14, if 2 are consumed by an appmaster).
In practice, you probably want to parallelize into a larger number of partitions than the total cores available anyway. Then, if there's any skew in how long each element takes to process, you get nice balancing: executors that finish their fast tasks can start taking on new partitions while the slow tasks are still running, instead of everything waiting for a single slowest partition to finish. It's common to choose a partition count anywhere from 2x the number of available cores to 100x or more.
Here's another related StackOverflow question: spark.default.parallelism for Parallelize RDD defaults to 2 for spark submit
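The question's code is Java, but the same fix sketched in PySpark (the partition count and the array contents are assumptions for illustration, not something Dataproc prescribes):
from pyspark import SparkContext
sc = SparkContext()
my_array = list(range(1200))  # stand-in for the question's 1200 MyData objects
# Pass the partition count explicitly instead of relying on defaultParallelism,
# which at job startup may only reflect the 2 cores of the single minimum executor.
num_partitions = 48  # assumed: roughly 3x the cluster's 16 cores
rdd = sc.parallelize(my_array, num_partitions)
def handle_partition(records):
    # Runs once per partition on the executors; output goes to the executor logs.
    for r in records:
        print(r)
rdd.foreachPartition(handle_partition)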

Spark Executors hang after Out of Memory

I have a Spark application running on EMR (16 nodes: 1 master and 15 core nodes, r3.2xlarge instances). For the Spark executor configuration, we use dynamic allocation.
While loading the data into the RDD, I see that sometimes, when there is a huge amount of data (~700 GB), Spark runs out of memory, but it does not fail the app. Rather, the app sits there hung. I'm not sure why this happens, but here is my theory:
We use DataFrames, which might be caching things.
The Spark flag spark.dynamicAllocation.cachedExecutorIdleTimeout is set to infinity.
My theory is that something gets cached while creating the DataFrames, but the cache is never relinquished, and this leads to the Spark hang.
There are a few possible solutions:
Increase the cluster size (worst case)
Figure out a way to add a timeout to the Spark app
Programmatically kill the EMR step (I could not find an API that does this)
Any leads on how to go about this?
There could be two other possibilities: either the partitions are too big, or you have severe skew (the size of the partitions varies a lot).
Try increasing the number of partitions (and hence reducing their size) using repartition(). This randomly reshuffles the data across your executors (good for reducing skew, but slow). Ideally, I like my partitions to be around 64 MB, though it depends on your machines.
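A rough sketch of deriving the partition count from the input size instead of hard-coding it (the 700 GB figure is from the question; the 64 MB target, the SparkSession entry point and the input path are assumptions):
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
input_size_bytes = 700 * 1024 ** 3       # ~700 GB, the figure from the question
target_partition_bytes = 64 * 1024 ** 2  # ~64 MB per partition, as suggested above
num_partitions = max(1, input_size_bytes // target_partition_bytes)  # ~11,200
df = spark.read.parquet("s3://my-bucket/my-input/")  # hypothetical input path
df = df.repartition(int(num_partitions))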

How do I decide the number of mappers for Camus?

I just started with Camus.
I am planning to run Camus every hour. We get around ~80,000,000 messages every hour, and the average message size is 4 KB (we have a single topic in Kafka).
I first tried with 10 mappers; it took ~2 hours to copy one hour's data, and it created 10 files of ~7 GB each.
Then I tried 300 mappers; that brought the time down to ~1 hour, but it created 11 files. Later, I tried 150 mappers and it took ~30 minutes.
So, how do I choose the number of mappers? Also, I want to create more (smaller) files in Hadoop, since each file is growing to 7 GB. What configuration do I have to check?
It should ideally be equal to or less than the number of Kafka partitions in your topic.
That means that, for better throughput, your topic should have more partitions and the same number of Camus mappers.
I found the best answer in this article:
The number of maps is usually driven by the number of DFS blocks in the input files. It causes people to adjust their DFS block size to adjust the number of maps.
The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks.
It is best if the maps take at least a minute to execute.
It all depends on the CPU power you have, the type of application - IO-bound (heavy read/write) or CPU-bound (heavy processing) - and the number of nodes in your Hadoop cluster.
Apart from setting the number of mappers and reducers at the global level, override those values at the job level depending on the data-processing needs of each job.
And one more thing: if you think a Combiner will reduce the IO transfer between Mapper and Reducer, use it effectively in combination with a Partitioner.
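As a back-of-the-envelope check using only the numbers from the question (10 mappers producing ~7 GB files, i.e. roughly 70 GB of output per hour), you can derive a mapper count from a target output file size; the 1 GB target and the partition count below are assumptions, sketched in Python:
hourly_output_gb = 10 * 7   # the first run: 10 mappers each wrote a ~7 GB file
target_file_gb = 1          # assumed target size per output file
min_mappers_for_file_size = hourly_output_gb // target_file_gb  # 70
kafka_partitions = 150      # hypothetical; use your topic's actual partition count
mappers = min(min_mappers_for_file_size, kafka_partitions)      # cap per the first answer
print(mappers)              # 70 in this example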

Spark SQL performance with Simple Scans

I am using Spark 1.4 on a cluster (standalone mode), across 3 machines, for a workload similar to TPC-H (analytical queries with multiple/multi-way large joins and aggregations). Each machine has 12 GB of memory and 4 cores. My total data size is 150 GB, stored in HDFS (as Hive tables), and I am running my queries through Spark SQL using a HiveContext.
After checking the performance-tuning documents on the Spark site and some clips from the latest Spark Summit, I decided to set the following configs in my spark-env:
SPARK_WORKER_INSTANCES=4
SPARK_WORKER_CORES=1
SPARK_WORKER_MEMORY=2500M
(As my tasks tend to be long, the overhead of starting multiple JVMs, one per worker, is much smaller than the total query time.) While monitoring the job progress, I realized that although the worker memory is 2.5 GB, the executors (one per worker) had a max memory of 512 MB (the default). I enlarged this value in my application with:
conf.set("spark.executor.memory", "2.5g");
Trying to give each worker's only executor the maximum available memory, I observed that my queries ran slower than in the previous case (default 512 MB). Changing 2.5g to 1g improved the performance time; it is close to, but still worse than, the 512 MB case. I guess what I am missing here is the relationship between WORKER_MEMORY and executor.memory.
Isn't it the case that the worker tries to split this memory among its executors (in my case, its only executor)? Or is there other work done by the worker that needs memory?
What other important parameters do I need to look into and tune at this point to get the best response time out of my hardware? (I have read about the Kryo serializer and am about to try it; I am mainly concerned with memory-related settings and the knobs related to the parallelism of my jobs.) As an example, for a simple scan-only query, Spark is almost 3 times slower than Hive, even though both are scanning exactly the same table and file format. That is why I believe I am missing some parameters by leaving them at their defaults.
Any hint/suggestion would be highly appreciated.
SPARK_WORKER_CORES is shared across the worker instances. Increase the cores to, say, 8, and you should see the kind of behavior (and performance) that you had anticipated.
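Separately, since the question mentions wanting to try the Kryo serializer, here is a minimal sketch of enabling it (shown in PySpark rather than the question's Java, with a hypothetical app name; no claim that this alone fixes the scan slowdown):
from pyspark import SparkConf, SparkContext
conf = (SparkConf()
        .setAppName("tpch-like-queries")  # hypothetical app name
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.executor.memory", "2g"))  # must fit within SPARK_WORKER_MEMORY
sc = SparkContext(conf=conf)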

How to decide on the number of partitions required for input data size and cluster resources?

My use case is as follows.
Read input data from the local file system using sparkContext.textFile(input path).
Partition the input data (80 million records) using RDD.coalesce(numberOfPartitions) before submitting it to the mapper/reducer function. Without using coalesce() or repartition() on the input data, Spark executes really slowly and fails with an out-of-memory exception.
The issue I am facing is deciding the number of partitions to apply to the input data. The input data size varies every time, so hard-coding a particular value is not an option. Spark performs really well only when a certain optimum number of partitions is applied to the input data, and I have to find that number through many iterations (trial and error), which is not an option in a production environment.
My question: is there a rule of thumb for deciding the number of partitions required, given the input data size and the cluster resources available (executors, cores, etc.)? If yes, please point me in that direction. Any help is much appreciated.
I am using Spark 1.0 on YARN.
Thanks,
AG
Two notes from Tuning Spark in the Spark official documentation:
1- In general, we recommend 2-3 tasks per CPU core in your cluster.
2- Spark can efficiently support tasks as short as 200 ms, because it reuses one executor JVM across many tasks and it has a low task launching cost, so you can safely increase the level of parallelism to more than the number of cores in your clusters.
These are two rules of thumb that help you estimate the number and size of partitions. So, it's better to have small tasks (that can be completed in a few hundred milliseconds).
Determining the number of partitions is a bit tricky. By default, Spark will try to infer a sensible number of partitions. Note: if you are using the textFile method with compressed text, then Spark will disable splitting and you will need to re-partition (it sounds like this might be what's happening?). With non-compressed data loaded via sc.textFile, you can also specify a minimum number of partitions (e.g. sc.textFile(path, minPartitions)).
The coalesce function is only used to reduce the number of partitions, so you should consider using the repartition() function instead.
As far as choosing a "good" number, you generally want at least as many partitions as executors, for parallelism. Some logic already exists to determine a "good" amount of parallelism, and you can get this value by calling sc.defaultParallelism.
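Putting those pieces together, a hedged PySpark sketch (the path and the 3x multiplier are placeholders, not recommendations from the answer):
from pyspark import SparkContext
sc = SparkContext()
# Spark's own estimate of a sensible level of parallelism for this cluster.
parallelism = sc.defaultParallelism
# For uncompressed text you can request a minimum number of input partitions up front.
rdd = sc.textFile("file:///data/input.txt", minPartitions=parallelism * 3)  # hypothetical path
# If the data still arrives in too few partitions (e.g. a gzipped, non-splittable file),
# repartition() can increase the count; coalesce() can only reduce it.
rdd = rdd.repartition(parallelism * 3)
print(rdd.getNumPartitions())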
I assume you know the size of the cluster going in; then you can essentially try to partition the data in some multiple of that and use a RangePartitioner to partition the data roughly equally. Dynamic partitions are created based on the number of blocks on the filesystem, and the task overhead of scheduling so many tasks mostly kills the performance.
import org.apache.spark.RangePartitioner
val file = sc.textFile("<my local path>")
val partitionedFile = file.map(x => (x, 1))
val data = partitionedFile.partitionBy(new RangePartitioner(3, partitionedFile))
