Spark: confused about number of executors and what web ui displays - performance

I'm beginning with Spark and I'm a little bit confused about the tuning part. My cluster has 3 machines (128G each) and each of them has 2 sockets, 8 cores per socket and 2 threads per core.
First of all, what is the maximum number of executors that I have access to (theoretically, with only 1 core per executor)? Is it 48 (3*8*2) or 96 (3*8*2*2)?
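Just to spell out my own arithmetic (48 would be the physical core count and 96 the hardware thread count; nothing Spark-specific here):

machines = 3
sockets_per_machine = 2
cores_per_socket = 8
threads_per_core = 2

physical_cores = machines * sockets_per_machine * cores_per_socket   # 3 * 2 * 8 = 48
hardware_threads = physical_cores * threads_per_core                 # 48 * 2 = 96
print(physical_cores, hardware_threads)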
Second, when I try to launch Spark with the following parameters (assuming that the answer to the preceding question was 48):
pyspark --master spark://... --num-executors 8 --executor-cores 5 --executor-memory 38g
why does the web UI (port 4040, Executors tab) show only 4 executors with the following info:
Executor ID    Address      Storage Memory
driver         {mynode1}    0 / 384M
0              {mynode2}    0 / 21.6G
1              {mynode1}    0 / 21.6G
2              {mynode3}    0 / 21.6G
How should I interpret these numbers? How do they relate to the parameters I specified in the command line? How come I have only 4 executors and not 8? Thanks for helping me clarify this!
EDIT: thanks for the link #user6910411, it helped me but it does not fully answer my question. There is still the question of the total number of cores I have access to: I realized that on the Spark UI (url:8080) I see the total number of cores available per worker, and this number is 6 cores per worker. I don't understand why it is 6 and not 16 or 32, given the number of sockets (2), cores per socket (8) and threads per core (2) that I have. There is also the question of the memory shown on the Executors page, which is not the one I specify when I invoke pyspark. Example:
pyspark --master spark://... --executor-cores 1 --executor-memory 4g --total-executor-cores 18
On the web UI, I see that the PySparkShell is consuming 18 cores and 4G per node (I asked for 4G per executor), and on the Executors page I see my 18 executors, each having 2G of memory. Other experiments lead me to think that this number is always half of the number I request. Maybe it's related to the number of sockets or threads per core (2 in each case)?
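For what it's worth, here is one guess I have for the halved memory (purely an assumption on my part, based on Spark's unified memory manager defaults; I'm not sure this is what is actually happening):

heap_mb = 4 * 1024                   # --executor-memory 4g
reserved_mb = 300                    # fixed reserved memory (assumption: unified memory manager)
memory_fraction = 0.6                # assumed default value of spark.memory.fraction
storage_shown_mb = (heap_mb - reserved_mb) * memory_fraction
print(storage_shown_mb)              # ~2278 MB, i.e. roughly the ~2G shown in the UI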

Related

How is Spark using 15 cores when I only have 4 on a m4.xlarge EMR instance?

I have looked at the answer to
Why is Spark detecting 8 cores, when I only have 4?
and it doesn't seem to explain the following scenario: I am setting spark.executor.cores to 5 and have spark.dynamicAllocation.enabled set to true. According to the Spark History Server, my 10-node cluster is running 30 executors, indicating that Spark is using 3 executors per node. This seems to suggest that 15 cores (3 executors x 5 cores) are available per node. The specs for an m4.xlarge instance are 4 vCPUs with 16 GB of memory. Where are these extra cores coming from?
Note: I am setting spark.executor.memory to 3g and yarn.nodemanager.resource.memory-mb to 12200.
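For reference, here is the memory-only arithmetic as I understand it (the 384 MB overhead is my assumption of the default executor memory overhead; I'm also unsure whether YARN is enforcing vcores at all with its default memory-only resource calculator, which may be the whole point):

node_memory_mb = 12200               # yarn.nodemanager.resource.memory-mb
executor_memory_mb = 3 * 1024        # spark.executor.memory = 3g
overhead_mb = 384                    # assumed default executor memory overhead
container_mb = executor_memory_mb + overhead_mb   # ~3456 MB requested per executor container

executors_per_node = node_memory_mb // container_mb
print(executors_per_node)            # 3 -> 3 executors/node * 10 nodes = 30 executors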

Spark streaming application configuration with YARN

I'm trying to squeeze every single bit out of my cluster when configuring the Spark application, but it seems I'm not understanding everything completely right. I'm running the application on an AWS EMR cluster with 1 master and 2 core nodes of type m3.xlarge (15 GB RAM and 4 vCPUs per node). This means that by default 11.25 GB are reserved on every node for applications scheduled by YARN. The master node is used only by the resource manager (YARN), which means the remaining 2 core nodes will be used to schedule applications (so we have 22.5 GB for that purpose). So far so good. But here comes the part which I don't get. I'm starting the Spark application with the following parameters:
--driver-memory 4G --num-executors 4 --executor-cores 7 --executor-memory 4G
My understanding (from what I found as information) is that 4 GB will be allocated for the driver and 4 executors will be launched with 4 GB each. A rough estimate makes it 5 * 4 = 20 GB (let's make it 21 GB with the expected memory reserves), which should be fine as we have 22.5 GB for applications. Here's a screenshot from the Hadoop YARN UI after the launch:
What we can see is that 17.63 GB are used by the application, which is a little bit less than the expected ~21 GB, and this triggers the first question: what happened here?
Then I go to the Spark UI's Executors page. Here comes the bigger question:
There are 3 executors (not 4), and the memory allocated for them and the driver is 2.1 GB (not the specified 4 GB). So Hadoop YARN says 17.63 GB are used, but Spark says 8.4 GB are allocated. What is happening here? Is this related to the Capacity Scheduler (from the documentation I couldn't come to this conclusion)?
Can you check whether spark.dynamicAllocation.enabled is turned on? If that is the case, then your application may give resources back to the cluster if they are no longer used. The minimum number of executors to be launched at startup is decided by spark.executor.instances.
If that is not the case, what is the source for your Spark application and what partition size is set for it? Spark will literally map the partition count to the Spark cores: if your source has only 10 partitions and you try to allocate 15 cores, it will only use 10 cores because that is all that is needed. I guess this might be why Spark launched 3 executors instead of 4. Regarding memory, I would recommend revisiting it: you are asking for 4 executors and 1 driver with 4 GB each, which is 5 * 4 GB + 5 * 384 MB ≈ 22 GB. You are trying to use up everything, and not much is left for your OS and the NodeManager to run; that would not be the ideal way to do it.
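If you want to check both things from the pyspark shell, something like this works (a rough sketch; sc is the SparkContext the shell gives you, and the HDFS path is just a placeholder):

# inside the pyspark shell
print(sc.getConf().get("spark.dynamicAllocation.enabled", "false"))
print(sc.getConf().get("spark.executor.instances", "not set"))

rdd = sc.textFile("hdfs:///path/to/your/input")   # placeholder path
print(rdd.getNumPartitions())                     # if this is below your total cores, some cores sit idle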

Why does Cloudera recommend choosing the number of executors, cores, and RAM they do in Spark

In the blog post:
http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
I was going about it in the naive way:
given 16 cores, 64 GB RAM, 8 threads - use 15 cores, 63 GB RAM, 6 executors.
Instead they recommend 17 executors, 5 cores, and 19 GB RAM. I see that they have an equation for RAM, but I have no idea what's going on.
Also, does this still hold true if you are running it on only one machine (not through HDFS)?
Thanks for the help.
I think they did a great job of explaining why here:
(Look at the slides starting at slide 5).
For example, they don't recommend more than 5 cores per executor because lots of cores can lead to poor HDFS I/O throughput.
They recommend deciding on RAM as follows: once you have the number of executors per node (in the article, it is 3), you take the total memory and divide it by the executors per node. So, you have 63 GB RAM / 3 executors per node = 21 GB (take a bit away and get 19 GB - the blog attributes this reduction to the ~7% off-heap memory overhead per executor).
You certainly have the right idea when it comes to leaving some resources for the application master / overhead!
These options are optimized for cluster computing. But this makes sense since Spark is a cluster computing engine.

Apache Spark tuning on distributed environment

I'd like to maximize Hadoop performance in a distributed environment (using Apache Spark with YARN), and I'm following the hints in a Cloudera blog post with this configuration:
6 nodes, 16 cores/node, 64 GB RAM/node
and the proposed solution is:
--num-executors 17 --executor-cores 5 --executor-memory 19G
But I didn't understand why they use 17 executors (in other words, 3 executors for each node).
Our configuration is instead:
8 nodes, 8 cores/node, 8 GB RAM/node
What is the best solution?
Your RAM is pretty low; I would expect this to be higher.
But we start off with 8 nodes and 8 cores. To determine our max executors we do nodes * (cores - 1) = 56, subtracting 1 core from each node for management.
So I would start off with
56 executors, 1 executor core, 1 GB RAM.
If you have out-of-memory issues, double the RAM, halve the executors, and up the cores:
28 executors, 2 executor cores, 2 GB RAM,
but your max executors will be fewer, because an executor must fit onto a node. You will be able to get a total of 24 allocated containers max.
I would try 3 cores before 4 cores next, as 3 cores will fit 2 executors on each node, while with 4 cores you will fit only 1 executor per node, the same as with 7 cores.
Or, you can skip right to...
8 executors, 7 cores, 7 GB RAM (you want to leave some for the rest of the cluster).
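To spell out the arithmetic behind these suggestions (a rough sketch using this answer's rule of reserving 1 core per node for management):

nodes, cores_per_node = 8, 8
usable_cores = cores_per_node - 1            # leave 1 core per node for management

for executor_cores in (1, 2, 3, 4, 7):
    executors_per_node = usable_cores // executor_cores
    print(executor_cores, executors_per_node, nodes * executors_per_node)
# 1 core  -> 7 per node, 56 total;  2 cores -> 3 per node, 24 total
# 3 cores -> 2 per node, 16 total;  4 or 7 cores -> 1 per node, 8 total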
I also found that if CPU scheduling was disabled, YARN was overriding my cores setting and it always stayed at 1, no matter my config. Other settings must also be changed to turn this on:
yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator

Apache Spark: The number of cores vs. the number of executors

I'm trying to understand the relationship of the number of cores and the number of executors when running a Spark job on YARN.
The test environment is as follows:
Number of data nodes: 3
Data node machine spec:
CPU: Core i7-4790 (# of cores: 4, # of threads: 8)
RAM: 32GB (8GB x 4)
HDD: 8TB (2TB x 4)
Network: 1Gb
Spark version: 1.0.0
Hadoop version: 2.4.0 (Hortonworks HDP 2.1)
Spark job flow: sc.textFile -> filter -> map -> filter -> mapToPair -> reduceByKey -> map -> saveAsTextFile
Input data
Type: single text file
Size: 165GB
Number of lines: 454,568,833
Output
Number of lines after second filter: 310,640,717
Number of lines of the result file: 99,848,268
Size of the result file: 41GB
The job was run with the following configurations:
(1) --master yarn-client --executor-memory 19G --executor-cores 7 --num-executors 3 (one executor per data node, using as many cores as possible)
(2) --master yarn-client --executor-memory 19G --executor-cores 4 --num-executors 3 (# of cores reduced)
(3) --master yarn-client --executor-memory 4G --executor-cores 2 --num-executors 12 (fewer cores, more executors)
Elapsed times:
(1) 50 min 15 sec
(2) 55 min 48 sec
(3) 31 min 23 sec
To my surprise, (3) was much faster.
I thought that (1) would be faster, since there would be less inter-executor communication when shuffling.
Although the # of cores of (1) is fewer than that of (3), the # of cores is not the key factor, since (2) did perform well.
(The following was added after pwilmot's answer.)
For information, the performance monitor screen captures are as follows:
Ganglia data node summary for (1) - job started at 04:37.
Ganglia data node summary for (3) - job started at 19:47. Please ignore the graph before that time.
The graph roughly divides into 2 sections:
First: from start to reduceByKey: CPU intensive, no network activity
Second: after reduceByKey: CPU lowers, network I/O is done.
As the graph shows, (1) can use as much CPU power as it was given. So, it might not be a problem of the number of threads.
How to explain this result?
To hopefully make all of this a little more concrete, here's a worked example of configuring a Spark app to use as much of the cluster as possible: Imagine a cluster with six nodes running NodeManagers, each equipped with 16 cores and 64 GB of memory. The NodeManager capacities, yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores, should probably be set to 63 * 1024 = 64512 (megabytes) and 15 respectively. We avoid allocating 100% of the resources to YARN containers because the node needs some resources to run the OS and Hadoop daemons. In this case, we leave a gigabyte and a core for these system processes. Cloudera Manager helps by accounting for these and configuring these YARN properties automatically.
The likely first impulse would be to use --num-executors 6 --executor-cores 15 --executor-memory 63G. However, this is the wrong approach because: 63 GB plus the executor memory overhead won't fit within the 63 GB capacity of the NodeManagers; the application master will take up a core on one of the nodes, meaning that there won't be room for a 15-core executor on that node; and 15 cores per executor can lead to bad HDFS I/O throughput.
A better option would be to use --num-executors 17 --executor-cores 5 --executor-memory 19G. Why?
This config results in three executors on all nodes except for the one with the AM, which will have two executors.
--executor-memory was derived as (63 GB / 3 executors per node) = 21 GB; 21 * 0.07 = 1.47; 21 - 1.47 ≈ 19 GB.
The explanation was given in an article in Cloudera's blog, How-to: Tune Your Apache Spark Jobs (Part 2).
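Spelling that memory derivation out (a small sketch; the numbers and the 0.07 overhead factor are taken straight from the quote above):

node_memory_gb = 63                             # usable memory per NodeManager
executors_per_node = 3
slot_gb = node_memory_gb / executors_per_node   # 21 GB per executor slot
overhead_gb = slot_gb * 0.07                    # ~1.47 GB set aside for off-heap overhead
print(slot_gb - overhead_gb)                    # ~19.5 GB -> rounded down to --executor-memory 19G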
Short answer: I think tgbaggio is right. You hit HDFS throughput limits on your executors.
I think the answer here may be a little simpler than some of the other recommendations.
The clue for me is in the cluster network graph. For run 1 the utilization is steady at ~50 M bytes/s. For run 3 the steady utilization is doubled, around 100 M bytes/s.
From the Cloudera blog post shared by DzOrd, you can see this important quote:
I’ve noticed that the HDFS client has trouble with tons of concurrent threads. A rough guess is that at most five tasks per executor can achieve full write throughput, so it’s good to keep the number of cores per executor below that number.
So, let's do a few calculations to see what performance we expect if that is true.
Run 1: 19 GB, 7 cores, 3 executors
3 executors x 7 threads = 21 threads
with 7 cores per executor, we expect limited IO to HDFS (maxes out at ~5 cores)
effective throughput ~= 3 executors x 5 threads = 15 threads
Run 3: 4 GB, 2 cores, 12 executors
12 executors x 2 threads = 24 threads
2 cores per executor, so HDFS throughput is OK
effective throughput ~= 12 executors x 2 threads = 24 threads
If the job were 100% limited by concurrency (the number of threads), we would expect runtime to be perfectly inversely correlated with the number of threads.
ratio_num_threads = nthread_job1 / nthread_job3 = 15/24 = 0.625
inv_ratio_runtime = 1/(duration_job1 / duration_job3) = 1/(50/31) = 31/50 = 0.62
So ratio_num_threads ~= inv_ratio_runtime, and it looks like we are network limited.
This same effect explains the difference between Run 1 and Run 2.
Run 2: 19 GB, 4 cores, 3 executors
3 executors x 4 threads = 12 threads
with 4 cores per executor, ok IO to HDFS
effective throughput ~= 3 executors x 4 threads = 12 threads
Comparing the number of effective threads and the runtime:
ratio_num_threads = nthread_job2 / nthread_job1 = 12/15 = 0.8
inv_ratio_runtime = 1/(duration_job2 / duration_job1) = 1/(55/50) = 50/55 = 0.91
It's not as perfect as the last comparison, but we still see a similar drop in performance when we lose threads.
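The same comparison as a small script (a sketch; the ~5-thread cap per executor is the rule of thumb from the quote above, and the runtimes are rounded to whole minutes):

runs = {
    # name: (executors, cores per executor, approx. runtime in minutes)
    "run 1": (3, 7, 50),
    "run 2": (3, 4, 55),
    "run 3": (12, 2, 31),
}

def effective_threads(executors, cores, hdfs_cap=5):
    # cap useful cores per executor at ~5 for HDFS write throughput
    return executors * min(cores, hdfs_cap)

for a, b in (("run 1", "run 3"), ("run 2", "run 1")):
    thread_ratio = effective_threads(*runs[a][:2]) / effective_threads(*runs[b][:2])
    inv_runtime_ratio = runs[b][2] / runs[a][2]
    print(a, "vs", b, round(thread_ratio, 2), round(inv_runtime_ratio, 2))
# run 1 vs run 3: 0.62 vs 0.62;  run 2 vs run 1: 0.8 vs 0.91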
Now for the last bit: why is it the case that we get better performance with more threads, esp. more threads than the number of CPUs?
A good explanation of the difference between parallelism (what we get by dividing up data onto multiple CPUs) and concurrency (what we get when we use multiple threads to do work on a single CPU) is provided in this great post by Rob Pike: Concurrency is not parallelism.
The short explanation is that if a Spark job is interacting with a file system or network the CPU spends a lot of time waiting on communication with those interfaces and not spending a lot of time actually "doing work". By giving those CPUs more than 1 task to work on at a time, they are spending less time waiting and more time working, and you see better performance.
As you run your Spark app on top of HDFS, according to Sandy Ryza:
I've noticed that the HDFS client has trouble with tons of concurrent threads. A rough guess is that at most five tasks per executor can achieve full write throughput, so it's good to keep the number of cores per executor below that number.
So I believe that your first configuration is slower than the third one because of bad HDFS I/O throughput.
From the excellent resources available at RStudio's Sparklyr package page:
SPARK DEFINITIONS:
It may be useful to provide some simple definitions for the Spark nomenclature:
Node: A server
Worker Node: A server that is part of the cluster and is available to run Spark jobs
Master Node: The server that coordinates the Worker nodes.
Executor: A sort of virtual machine inside a node. One Node can have multiple Executors.
Driver Node: The Node that initiates the Spark session. Typically, this will be the server where sparklyr is located.
Driver (Executor): The Driver Node will also show up in the Executor list.
I haven't played with these settings myself, so this is just speculation, but if we think about this issue as normal cores and threads in a distributed system, then in your cluster you can use up to 12 cores (4 x 3 machines) and 24 threads (8 x 3 machines). In your first two examples you are giving your job a fair number of cores (potential computation space), but the number of threads (jobs) to run on those cores is so limited that you aren't able to use much of the processing power allocated, and thus the job is slower even though more computation resources are allocated.
You mention that your concern was in the shuffle step - while it is nice to limit the overhead in the shuffle step, it is generally much more important to utilize the parallelization of the cluster. Think about the extreme case - a single-threaded program with zero shuffle.
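To put rough numbers on that (just restating the counts already mentioned; nothing new here):

hardware_threads = 3 * 8             # 3 data nodes x 8 threads each (i7-4790)

task_slots = {
    "config 1": 3 * 7,               # 3 executors x 7 cores = 21 task slots
    "config 2": 3 * 4,               # 3 executors x 4 cores = 12 task slots
    "config 3": 12 * 2,              # 12 executors x 2 cores = 24 task slots
}
for name, slots in task_slots.items():
    print(name, slots, "slots on", hardware_threads, "hardware threads")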
I think one of the major reasons is locality. Your input file size is 165 GB, so the file's blocks are certainly distributed over multiple DataNodes, and more executors can avoid network copying.
Try to set the number of executors equal to the block count; I think it can be faster.
Spark dynamic allocation gives flexibility and allocates resources dynamically. With it, the minimum and maximum number of executors can be given, and the number of executors to launch at the start of the application can also be given.
Read more on the same below:
http://spark.apache.org/docs/latest/configuration.html#dynamic-allocation
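For example, a minimal PySpark sketch of those settings (the property names are from the linked configuration page; the specific min/max/initial values are just placeholders, and spark.shuffle.service.enabled is assumed to be needed for dynamic allocation on YARN):

from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = (SparkConf()
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.shuffle.service.enabled", "true")        # assumed requirement on YARN
        .set("spark.dynamicAllocation.minExecutors", "2")    # placeholder values
        .set("spark.dynamicAllocation.maxExecutors", "20")
        .set("spark.dynamicAllocation.initialExecutors", "5"))

spark = SparkSession.builder.config(conf=conf).getOrCreate()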
There is a small issue in the first two configurations, I think. The concept of threading is as follows: if the cores are idle, then use those cores to process the data. So the memory is not fully utilized in the first two cases. If you want to benchmark this example, choose machines which have more than 10 cores each, then do the benchmark.
But don't give more than 5 cores per executor; there will be a bottleneck in I/O performance.
So the best machines to do this benchmarking might be data nodes which have 10 cores, for example:
Data node machine spec:
CPU: Core i7-4790 (# of cores: 10, # of threads: 20)
RAM: 32GB (8GB x 4)
HDD: 8TB (2TB x 4)
In configuration (2) you're reducing the number of parallel tasks, and thus I believe your comparison isn't fair.
Make --num-executors at least 5.
Thus, you will have 20 tasks running, in comparison to your 21 tasks in configuration (1).
Then, the comparison will be fair in my opinion.
Also, please calculate the executor memory accordingly.
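In other words (a quick sketch of the task-slot counts being compared):

config_1_tasks = 3 * 7     # 3 executors x 7 cores = 21 parallel tasks
config_2_tasks = 3 * 4     # 3 executors x 4 cores = 12 parallel tasks
proposed_tasks = 5 * 4     # 5 executors x 4 cores = 20 parallel tasks, close to config 1's 21
print(config_1_tasks, config_2_tasks, proposed_tasks)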
