Spark not detecting all CPU cores - performance

I have a cluster running Spark with 4 servers, each having 8 cores. Somehow the master is not detecting all available cores: it is using 18 out of 32.
I have not set anything relating to the number of cores in any Spark conf file (at least not that I am aware of).
I am positive each cluster member has the same number of cores (8).
Is there a way to make Spark detect/use the other cores as well?

I found the cause, but it is still somewhat unclear:
One node that was contributing only 1 out of 8 cores had this setting enabled in $SPARK_HOME/conf/spark-env.sh:
SPARK_WORKER_CORES=1
Commenting it out did the trick for that node: Spark grabs all cores by default (the same goes for memory).
But on the other node that contributed only 1 core, this setting was not set at all, yet Spark still did not grab all 8 cores until I explicitly told it to:
SPARK_WORKER_CORES=8
But at least it is grabbing all resources now.
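As a quick sanity check (a minimal sketch, not part of the original answer; the master URL and the expected total of 32 cores are assumptions about this particular cluster), you can confirm what the master now exposes by looking at the default parallelism of a freshly started context:
from pyspark import SparkConf, SparkContext

# Assumes a standalone master at spark://master:7077 (adjust to your setup).
conf = SparkConf().setAppName("CoreCheck").setMaster("spark://master:7077")
sc = SparkContext(conf=conf)
# In standalone mode this should equal the total cores across all workers,
# i.e. 32 once every node exposes its 8 cores.
print(sc.defaultParallelism)
sc.stop()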

Related

How to work with a group of people using Zeppelin?

I am trying to work with Zeppelin on my Hadoop cluster:
1 edge node
1 name node
1 secondary node
16 data nodes.
Node specification:
CPU: Intel(R) Xeon(R) CPU E5345 @ 2.33GHz, 8 cores
Memory: 32 GB DDR2
I have some issues with this tool when more than 20 people want to use it at the same time.
This is mainly when I am using pyspark - either 1.6 or 2.0.
Even if I set zeppelin.execution.memory = 512 mb and spark.executor.memory = 512 mb, it is still the same. I have tried a few interpreter options (for pyspark), like Per User in scoped/isolated mode and others, and it is still the same. It is a little better with the globally shared option, but after a while I still cannot do anything there. I was watching the Edge Node and saw that its memory usage goes up very fast. I want to use the Edge Node only as an access point.
If your deploy mode is yarn-client, then your driver will always run on the access point server (the Edge Node in your case).
Every notebook (per-note mode) or every user (per-user mode) instantiates a Spark context, allocating memory on the driver and on the executors. Reducing spark.executor.memory will relieve the cluster but not the driver; try reducing spark.driver.memory instead.
The Spark interpreter can be instantiated globally, per note, or per user. I don't think sharing the same interpreter (globally) is a solution in your case, since you can only run one job at a time: users would end up waiting for everyone else's cells to finish before being able to run their own.
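To see why the Edge Node fills up so quickly, here is a rough back-of-envelope sketch (the per-driver numbers are illustrative assumptions, not measured values):
# In yarn-client mode each concurrent interpreter keeps its driver JVM on the
# edge node, so driver memory adds up per user (or per notebook).
users = 20                 # concurrent Zeppelin users, from the question
driver_memory_gb = 1.0     # assumed spark.driver.memory per interpreter
overhead_gb = 0.4          # assumed off-heap / JVM overhead per driver
edge_node_ram_gb = 32      # from the node specification above

needed_gb = users * (driver_memory_gb + overhead_gb)
print(f"~{needed_gb:.0f} GB of driver memory on a {edge_node_ram_gb} GB edge node")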

Using hadoop cluster with different machine configuration

I have two Linux machines, both with different configurations:
Machine 1: 16 GB RAM, 4 Virtual Cores and 40 GB HDD (Master and Slave Machine)
Machine 2: 8 GB RAM, 2 Virtual Cores and 40 GB HDD (Slave machine)
I have set up a hadoop cluster between these two machines.
I am using Machine 1 as both master and slave.
And Machine 2 as slave.
I want to run my Spark application and utilise as many virtual cores and as much memory as possible, but I am unable to figure out the right settings.
My spark code looks something like:
from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext, SQLContext, SparkSession

conf = SparkConf().setAppName("Simple Application")
sc = SparkContext('spark://master:7077')
hc = HiveContext(sc)
sqlContext = SQLContext(sc)
spark = SparkSession.builder.appName("SimpleApplication").master("yarn-cluster").getOrCreate()
So far, I have tried the following:
When I process my 2 GB file only on Machine 1 (in local mode, as a single-node cluster), it uses all 4 CPUs of the machine and completes in about 8 mins.
When I process my 2 GB file with the cluster configuration above, it takes slightly longer than 8 mins, though I expected it to take less time.
What number of executors, cores, memory do I need to set to maximize the usage of cluster?
I have referred to the article below, but because I have different machine configurations in my case, I am not sure what parameters would fit best.
Apache Spark: The number of cores vs. the number of executors
Any help will be greatly appreciated.
When I process my 2 GB file with the cluster configuration above, it takes slightly longer than 8 mins, though I expected it to take less time.
It's not clear where your file is stored.
I see you're using Spark Standalone mode, so I'll assume it's not split on HDFS into about 16 blocks (given block size of 128MB).
In that scenario, your entire file will be processed in whole at least once, plus the overhead of shuffling that data across the network.
If you used YARN as the Spark master with HDFS as the FileSystem, and a splittable file format, then the computation would go "to the data", and you could expect quicker run times.
As far as optimal settings go, there are trade-offs between cores, memory, and the number of executors, but there is no magic number for a particular workload, and you will always be limited by the smallest node in the cluster. Keep in mind that the memory for the Spark driver and for other processes on the OS should be accounted for when calculating sizes.
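If you stay on standalone, one way to make the sizing explicit is to set it on the SparkConf. This is a sketch only; the master URL comes from the question above, and the specific numbers are assumptions chosen to fit the smaller 2-core / 8 GB machine, not a tuned recommendation:
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("Simple Application")
        .setMaster("spark://master:7077")
        .set("spark.executor.cores", "2")    # one executor fits the 2-core slave
        .set("spark.executor.memory", "6g")  # leaves ~2 GB for OS and overhead on Machine 2
        .set("spark.cores.max", "6"))        # 4 cores on Machine 1 + 2 on Machine 2
sc = SparkContext(conf=conf)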

Spark streaming application configuration with YARN

I'm trying to squeeze every single bit out of my cluster when configuring the Spark application, but it seems I'm not understanding everything completely right. I'm running the application on an AWS EMR cluster with 1 master and 2 core nodes of type m3.xlarge (15 GB RAM and 4 vCPUs per node). This means that by default 11.25 GB are reserved on every node for applications scheduled by YARN. The master node is used only by the resource manager (YARN), which means the remaining 2 core nodes will be used to schedule applications (so we have 22.5 GB for that purpose). So far so good. But here comes the part which I don't get. I'm starting the Spark application with the following parameters:
--driver-memory 4G --num-executors 4 --executor-cores 7 --executor-memory 4G
My understanding (from the information I have found) is that 4 GB will be allocated for the driver and 4 executors will be launched with 4 GB each. A rough estimate makes that 5*4 = 20 GB (let's call it 21 GB with the expected memory reserves), which should be fine as we have 22.5 GB for applications. Here's a screenshot from the Hadoop YARN UI after the launch:
What we can see is that 17.63 GB are used by the application, which is a little less than the expected ~21 GB, and this triggers the first question: what happened here?
Then I go to the Spark UI's executors page. Here comes the bigger question:
There are 3 executors (not 4), and the memory allocated for them and the driver is 2.1 GB (not the specified 4 GB). So Hadoop YARN says 17.63 GB are used, but Spark says 8.4 GB are allocated. What is happening here? Is this related to the Capacity Scheduler (I could not reach that conclusion from the documentation)?
Can you check whether spark.dynamicAllocation.enabled is turned on? If that is the case, your Spark application may give resources back to the cluster when they are no longer used. The minimum number of executors launched at startup is decided by spark.executor.instances.
If that is not the case, what is your source for the Spark application and what partition count is set for it? Spark literally maps the partitions to the Spark cores: if your source has only 10 partitions and you try to allocate 15 cores, it will only use 10 cores, because that is all it needs. I guess this might be why Spark launched 3 executors instead of 4. Regarding memory, I would recommend revisiting the numbers: you are asking for 4 executors and 1 driver with 4 GB each, which is roughly 5*4 GB + 5*384 MB, approximately 22 GB, so you are trying to use up everything and not much is left for your OS and the NodeManager to run. That would not be ideal.
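To make that last point concrete, here is the arithmetic from the answer as a small sketch (the 384 MB per-container overhead is the default minimum; YARN's exact rounding depends on the scheduler's minimum allocation, so treat this as an estimate):
containers = 5               # 4 executors + 1 driver
heap_gb = 4.0                # --executor-memory / --driver-memory
overhead_gb = 384 / 1024.0   # assumed default memory overhead per container
requested_gb = containers * (heap_gb + overhead_gb)
print(f"requested ~{requested_gb:.1f} GB of the 22.5 GB available on the core nodes")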

hadoop not creating enough containers when more nodes are used

So I'm trying to run some hadoop jobs on AWS R3.4xLarge machines. They have 16 vcores and 122 gigabytes of ram available.
Each of my mappers requires about 8 gigs of ram and one thread, so these machines are very nearly perfect for the job.
I have mapreduce.map.memory.mb set to 8192,
and mapreduce.map.java.opts set to -Xmx6144
This should result in approximately 14 mappers (in practice nearer to 12) running on each machine.
This is in fact the case for a 2 slave setup, where the scheduler shows 90 percent utilization of the cluster.
When scaling to, say, 4 slaves, however, it seems that Hadoop simply doesn't create more mappers. In fact it creates LESS.
On my 2 slave setup I had just under 30 mappers running at any one time, on four slaves I had about 20. The machines were sitting at just under 50 percent utilization.
The vcores are there, the physical memory is there. What the heck is missing? Why is hadoop not creating more containers?
So it turns out that this is one of those hadoop things that never makes sense, no matter how hard you try to figure it out.
There is a setting in yarn-default.xml called yarn.nodemanager.heartbeat.interval-ms.
This is set to 1000. Apparently it controls the minimum period between assigning containers in milliseconds.
This means it only creates one new map task per second, which means the number of concurrently running containers is limited to roughly the assignment rate multiplied by the time it takes a container to finish.
By setting this value to 50, or better yet 1, I was able to get the kind of scaling that is expected from a Hadoop cluster. Honestly, this should be documented better.
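A rough model of why this caps the mapper count (the 30-second mapper duration below is an illustrative assumption, and the real limit also depends on how many containers the scheduler assigns per heartbeat):
heartbeat_interval_s = 1.0   # yarn.nodemanager.heartbeat.interval-ms = 1000
mapper_duration_s = 30.0     # assumed average time for one map task to finish
# With roughly one container assigned per heartbeat, the steady-state number
# of concurrently running mappers is about the assignment rate times duration:
concurrent_mappers = mapper_duration_s / heartbeat_interval_s
print(f"~{concurrent_mappers:.0f} concurrent mappers, regardless of how many nodes you add")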

Why does Cloudera recommend choosing the number of executors, cores, and RAM they do in Spark?

In the blog post:
http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
I was going about it in the naive way:
given 16 cores, 64 GB RAM, 8 threads - use 15 cores, 63 GB RAM, 6 executors.
Instead they recommend 17 executors, 5 cores, and 19 GB RAM. I see that they have an equation for RAM, but I have no idea what's going on.
Also, does this still hold true if you are running it only on one machine (not through HDFS)?
Thanks for the help
I think they did a great job of explaining why here:
(Look at the slides starting at slide 5).
For example, they don't recommend more than 5 cores per executor because lots of cores can lead to poor HDFS I/O throughput.
They recommend deciding on RAM as follows: once you have the number of executors per node (in the article, it is 3), you take the total memory and divide by the executors per node. So you have 63 GB RAM / 3 executors per node = 21 GB (they take a bit away and get 19 GB; it is not clear why they do this).
You certainly have the right idea when it comes to leaving some resources for the application master / overhead!
These options are optimized for cluster computing. But this makes sense since Spark is a cluster computing engine.
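For what it's worth, the arithmetic behind those numbers can be sketched like this (the 6-node cluster size and the ~7% off-heap overhead factor are my assumptions about the blog's example, not something stated above):
cores_per_node = 16
ram_per_node_gb = 64
nodes = 6                                  # assumed size of the blog's example cluster

usable_cores = cores_per_node - 1          # leave 1 core for OS / Hadoop daemons
usable_ram_gb = ram_per_node_gb - 1        # leave 1 GB likewise
executors_per_node = usable_cores // 5     # cap of 5 cores per executor -> 3
container_gb = usable_ram_gb / executors_per_node        # 21 GB per executor container
executor_memory_gb = int(container_gb * 0.93)            # carve out ~7% overhead -> 19 GB heap
total_executors = nodes * executors_per_node - 1         # minus 1 for the application master

print(executors_per_node, executor_memory_gb, total_executors)  # 3, 19, 17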

Resources