I start the sparkling-shell with the following command.
./bin/sparkling-shell --num-executors 4 --executor-memory 4g --master yarn-client
I only ever get two executors. Is this an H2O problem, a YARN problem, or a Spark problem?
Mike
There can be multiple reasons for this behaviour.
YARN can only grant executors based on the available resources (memory, vcores). If you ask for more than your resources allow, it will give you the maximum it can.
It can also be the case that you have dynamic allocation enabled, which means that Spark creates new executors only when they are needed.
In order to solve some technicalities in Sparkling Water, we need to discover all available executors at the start of the application by creating an artificial computation that tries to utilise the whole cluster. This discovery can also end up with fewer executors than you asked for.
I would suggest looking at https://github.com/h2oai/sparkling-water/blob/master/doc/tutorials/backends.rst where you can read more about the issue above and how it can be solved using the so-called external Sparkling Water backend.
You can also have a look at https://github.com/h2oai/sparkling-water/blob/master/doc/configuration/internal_backend_tuning.rst, the Sparkling Water guide for tuning the internal backend configuration.
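For example, a quick way to rule out the dynamic-allocation case is to pin the executor count explicitly when launching sparkling-shell (the flag values below are only illustrative, not tuned recommendations):
./bin/sparkling-shell --master yarn-client \
  --conf spark.dynamicAllocation.enabled=false \
  --num-executors 4 \
  --executor-memory 4g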
Kuba
I got past the problem by changing the following values in Cloudera Manager:
Setting                                     Value
yarn.scheduler.maximum-allocation-vcores    8
yarn.nodemanager.resource.cpu-vcores        4
yarn.scheduler.maximum-allocation-mb        16 GB
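If it helps anyone hitting the same issue, the per-node resources that YARN advertises after a change like this can be double-checked with the standard YARN CLI (<node_id> is a placeholder for an ID reported by the first command):
yarn node -list
yarn node -status <node_id>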
Related
I am running Sparkling Water over 36 Spark executors.
Due to YARN's scheduling, some executors get preempted and come back later.
Overall, there are 36 executors for the majority of the time, just not always.
So far, my experience is that as soon as one executor fails, the entire H2O instance halts, even if the missing executor comes back to life later.
I wonder if this is how Sparkling Water behaves, or whether some preemption-related capability needs to be turned on?
Does anyone have a clue about this?
[Summary]
What you are seeing is how Sparkling Water behaves.
[ Details... ]
Sparkling Water on YARN can run in two different ways:
the default way, where H2O nodes are embedded inside Spark executors and there is a single (Spark) YARN job,
the external H2O cluster way, where the Spark cluster and H2O cluster are separate YARN jobs (running in this mode requires more setup; if you were running in this way, you would know it)
H2O nodes do not support elastic cloud formation behavior; that is, once an H2O cluster is formed, new nodes may not join the cluster (they are rejected) and existing nodes may not leave the cluster (the cluster becomes unusable).
As a result, YARN preemption must be disabled for the queue where the H2O nodes are running. In the default mode, this means the entire Spark job must run with YARN preemption disabled (and Spark dynamicAllocation disabled). In the external H2O cluster mode, it means the H2O cluster must run in a YARN queue with preemption disabled.
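For reference, with the CapacityScheduler, recent Hadoop releases expose a per-queue switch to turn preemption off in capacity-scheduler.xml (the queue name h2o is just a placeholder); the Fair Scheduler has an analogous per-queue preemption setting:
yarn.scheduler.capacity.root.h2o.disable_preemption: true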
Other pieces of information that might help:
If you are just starting on a new problem with Sparkling Water (or H2O in general), prefer a small number of large memory nodes to a large number of small memory nodes; fewer things can go wrong that way,
To be more specific, if you are trying to run with 36 executors that each have 1 GB of executor memory, that's a really awful configuration; start with 4 executors x 10 GB instead,
In general you don't want to start Sparkling Water with less than 5 GB per executor at all, and more memory is better,
If running in the default way, don't set the number of executor cores to be too small; machine learning is hungry for lots of CPU.
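Putting those points together, a launch along the following lines keeps the executor count fixed and targets a dedicated queue; the queue name and the 4 x 10g sizing are placeholders, and preemption still has to be disabled for that queue on the YARN side:
./bin/sparkling-shell --master yarn-client \
  --conf spark.yarn.queue=h2o \
  --conf spark.dynamicAllocation.enabled=false \
  --num-executors 4 \
  --executor-memory 10g \
  --executor-cores 4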
I'm trying to squeeze every single bit out of my cluster when configuring the Spark application, but it seems I'm not understanding everything completely right. I'm running the application on an AWS EMR cluster with 1 master and 2 core nodes of type m3.xlarge (15 GB RAM and 4 vCPUs per node). This means that by default 11.25 GB are reserved on every node for applications scheduled by YARN. The master node is used only by the resource manager (YARN), so the remaining 2 core nodes are used to schedule applications (giving us 22.5 GB for that purpose). So far so good. But here comes the part I don't get. I'm starting the Spark application with the following parameters:
--driver-memory 4G --num-executors 4 --executor-cores 7 --executor-memory 4G
As I understand it (from what I've found), this means 4 GB will be allocated for the driver and 4 executors will be launched with 4 GB each. A rough estimate makes that 5*4 = 20 GB (let's call it 21 GB with the expected memory overhead), which should be fine since we have 22.5 GB for applications. Here's a screenshot from the Hadoop YARN UI after the launch:
What we can see is that 17.63 GB are used by the application, which is a little less than the expected ~21 GB, and this triggers the first question: what happened here?
Then I go to the Spark UI's executors page. Here comes the bigger question:
There are 3 executors (not 4), and the memory shown for them and the driver is 2.1 GB (not the specified 4 GB). So Hadoop YARN says 17.63 GB are used, but Spark says 8.4 GB are allocated. What is happening here? Is this related to the Capacity Scheduler (from the documentation I couldn't reach that conclusion)?
Can you check whether spark.dynamicAllocation.enabled is turned on? If that is the case, then your Spark application may give resources back to the cluster when they are no longer used. The minimum number of executors launched at startup is decided by spark.executor.instances.
If that is not the case: what is the source for your Spark application, and what partition size is set for it? Spark literally maps the partitions to Spark cores; if your source has only 10 partitions and you try to allocate 15 cores, it will only use 10 cores because that is all it needs. I guess this might be why Spark launched 3 executors instead of 4. Regarding memory, I would recommend revisiting the numbers: you are asking for 4 executors and 1 driver with 4 GB each, which is roughly 5*4 GB + 5*384 MB ≈ 22 GB. That tries to use up everything, leaving not much for the OS and the NodeManager to run, which is not ideal.
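For reference, the rough per-container arithmetic behind that ~22 GB figure (Spark's default YARN memory overhead is the larger of 384 MB and 10% of the executor/driver memory):
  driver:     4096 MB + ~400 MB overhead  ≈  4.4 GB
  executors:  4 * (4096 MB + ~400 MB)     ≈ 17.6 GB
  total requested from YARN               ≈ 22.0 GB
YARN additionally rounds each container request up to a multiple of yarn.scheduler.minimum-allocation-mb, and in yarn-client mode the driver is not allocated by YARN at all, which is one possible reason the ResourceManager UI shows less than the full estimate.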
I have a 4-node cluster configured with 1 NameNode and 3 DataNodes. I'm performing a TPC-H benchmark and I would like to know how much data you think my cluster can handle without affecting query response times. My total available disk size is about 700 GB; each node has a CPU with 8 cores and 16 GB of RAM.
I saw some calculations that could be done to find the volume limit, but I didn't understand them. If someone could explain in a simple way how to calculate the data volume that a cluster can handle, it would be very helpful.
Thank you
You can use 70 to 80% of the space in your cluster to store data; the remainder will be used for processing and for storing intermediate results.
This way performance will not be impacted.
As you mentioned, you have already configured your 4-node cluster. You can check the NameNode web UI --> Configured Capacity section to find out the storage details. Let me know if you run into any difficulties.
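As a rough sketch of that rule of thumb with the numbers from the question (these are guidelines, not hard limits):
  total HDFS capacity             : ~700 GB
  reserved for temp/intermediate  : ~20-30%
  usable for stored data          : ~490-560 GB
The configured and remaining capacity can also be read from the command line with the standard HDFS tool:
hdfs dfsadmin -report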
We have 3 compute nodes in our cluster, each with 8 cores and 30 GB of RAM assigned, and we are executing performance tests in order to find the optimal configuration.
The optimal performance was achieved with the following parameters:
--master yarn --num-executors 5 --executor-cores 4 --executor-memory 23g
The concern here is how this works fine with 23*5 = 115 GB of memory when we have only 30*3 = 90 GB available in our cluster. We tried executor memory values from 16 GB to 25 GB but only got the optimum at 23 GB.
Is there something missing on our end? We would like to understand this concept.
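One way to see what YARN actually granted for such a run (as opposed to what was requested on the command line) is the standard YARN CLI; <application_id> is a placeholder taken from the first command's output:
yarn application -list
yarn application -status <application_id>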
I have a very small new EMR cluster to play around with and I'm trying to limit the number of concurrent mappers per node to 2. I tried this by tweaking the default cpu-vcores down to 2.
Formula used:
min((yarn.nodemanager.resource.memory-mb / mapreduce.map.memory.mb),
(yarn.nodemanager.resource.cpu-vcores / mapreduce.map.cpu.vcores))
Cluster configuration:
AMI version: 3.3.1
Hadoop distribution: Amazon 2.4.0
Core: 4 m1.large
Job Configuration:
yarn.nodemanager.resource.memory-mb: 5120
mapreduce.map.memory.mb: 768
yarn.nodemanager.resource.cpu-vcores: 2
mapreduce.map.cpu.vcores: 1
As a result, I am currently seeing 22 mappers running at the same time. Besides being wrong according to the formula, this does not make sense at all given that I have 4 cores. Any thoughts?
I have never seen the second part of the formula (the one with vcores) take effect on the small dedicated clusters I've worked on (although it should have, according to the formula). I also read that, by default, YARN does not take CPU cores into account when allocating resources, i.e. it only allocates based on memory requirements (the default resource calculator ignores vcores; they are only enforced with the DominantResourceCalculator).
As for the memory calculation, yarn.nodemanager.resource.memory-mb is a per-node setting, but dashboards often show you cluster-wide numbers, so before you divide yarn.nodemanager.resource.memory-mb by mapreduce.map.memory.mb, multiply it by the number of nodes in your cluster, i.e.
(yarn.nodemanager.resource.memory-mb*number_of_nodes_in_cluster) / mapreduce.map.memory.mb
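A worked pass over the numbers from the EMR question above (4 core nodes), which also shows why the vcore term never bites:
  memory-based limit per node   : floor(5120 / 768)    = 6 containers
  memory-based limit, cluster   : 4 nodes * 6           = 24 containers
                                  (one of which is the MRAppMaster)
  vcore-based limit, cluster    : 4 nodes * (2 / 1)     = 8 containers
Seeing ~22 concurrent mappers is consistent with memory being the only enforced dimension; if vcores were enforced, the job would have been capped at 8.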