I would like to submit multiple spark-submit jobs with yarn. When I run
spark-submit --class myclass --master yarn --deploy-mode cluster blah blah
as it is now, I have to wait for the job to complete before I can submit more jobs. I see the heartbeat:
16/09/19 16:12:41 INFO yarn.Client: Application report for application_1474313490816_0015 (state: RUNNING)
16/09/19 16:12:42 INFO yarn.Client: Application report for application_1474313490816_0015 (state: RUNNING)
How can I tell yarn to pick up another job, all from the same terminal? Ultimately I want to be able to run from a script where I can send hundreds of jobs in one go.
Thank you.
Every user has a fixed capacity, as specified in the YARN configuration. If you are allocated N executors (usually you are allocated some fixed number of vcores) and you want to run 100 jobs, you need to specify the allocation for each of the jobs:
spark-submit --num-executors N/100 --executor-cores 5
Otherwise, the extra jobs will loop in the ACCEPTED state, waiting for resources.
You can launch multiple jobs in parallel by appending & to the end of every invocation:
for i in $(seq 20); do spark-submit --master yarn --num-executors N/100 --executor-cores 5 blah blah & done
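For a larger batch, a minimal submission-script sketch might look like this (the class name, jar path, application argument, and resource numbers are placeholders, assuming a queue capacity of roughly 100 executors shared by 20 concurrent jobs):
#!/usr/bin/env bash
# Submit 20 applications without waiting for each one to finish.
for i in $(seq 1 20); do
  spark-submit --master yarn --deploy-mode cluster \
    --class com.example.MyJob \
    --num-executors 5 --executor-cores 5 \
    /path/to/my-job.jar "input-part-$i" &
done
wait   # the loop returns immediately; wait blocks until every submission client has exited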
Also check out dynamic allocation in Spark (a sketch of the relevant flags follows these tips).
Check which scheduler YARN is using (yarn.resourcemanager.scheduler.class); if it is the FIFO scheduler, change it to the Fair Scheduler.
How are you planning to allocate resources to the N jobs on YARN?
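On dynamic allocation: a minimal sketch of the relevant spark-submit flags, assuming the YARN NodeManagers run Spark's external shuffle service (the class name and jar path are placeholders):
spark-submit --master yarn --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  --conf spark.shuffle.service.enabled=true \
  --class com.example.MyJob /path/to/my-job.jar
With dynamic allocation, each job requests and releases executors based on its backlog of pending tasks, so concurrent jobs share the queue's capacity without you having to hard-code --num-executors for every submission.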
Related
I set up a yarn cluster which has only 1 worker node, and it seems to work fine when I submit my spark application. But when I submit more than one job, the jobs wait in the hadoop queue and the submitted applications are processed one by one. I want to process my applications in parallel, not one by one. Is there any configuration for this, or is it not possible on yarn?
YARN processes submitted jobs one by one by default.
To submit multiple jobs concurrently, you can change the amount of memory and the number of executors and cores each job requests:
spark-submit --class <main-class> --executor-memory 2g --num-executors 15 --executor-cores 3 --master yarn --deploy-mode cluster /path/to/your.jar
You can also change the corresponding resource properties in your yarn-site.xml.
I am getting a "Container... is running beyond virtual memory limits" error while running a spark job in yarn cluster mode.
In my case it is not possible to ignore this error or to increase the vmem-to-pmem ratio.
The job is submitted through spark-submit with "--conf spark.driver.memory=2800m".
I think it is because the default value of yarn.app.mapreduce.am.command-opts is 1G, so YARN kills my driver/AM as soon as it uses more than 1G of memory.
So I would like to pass "yarn.app.mapreduce.am.command-opts" to spark-submit in a bash script. Passing it via "spark.driver.extraJavaOptions" errors out with "Not allowed to specify max heap(Xmx) memory settings through java options".
So how do I pass it?
EDIT: I cannot edit conf files as that will make the change for all MR and spark jobs.
I have a cluster of 3 macOS machines running Hadoop and Spark-1.5.2 (though with Spark-2.0.0 the same problem exists). With 'yarn' as the Spark master URL, I am running into a strange issue where tasks are only allocated to 2 of the 3 machines.
Based on the Hadoop dashboard (port 8088 on the master) it is clear that all 3 nodes are part of the cluster. However, any Spark job I run only uses 2 executors.
For example here is the "Executors" tab on a lengthy run of the JavaWordCount example:
"batservers" is the master. There should be an additional slave, "batservers2", but it's just not there.
Why might this be?
Note that none of my YARN or Spark (or, for that matter, HDFS) configurations are unusual, except provisions for giving the YARN resource- and node-managers extra memory.
Remarkably, all it took was a detailed look at the spark-submit help message to discover the answer:
YARN-only:
...
--num-executors NUM Number of executors to launch (Default: 2).
If I specify --num-executors 3 in my spark-submit command, the 3rd node is used.
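For example, a sketch of the submission with the third executor requested explicitly (the example jar and input paths are placeholders):
./bin/spark-submit --class org.apache.spark.examples.JavaWordCount --master yarn \
  --num-executors 3 /path/to/spark-examples.jar /path/to/input.txt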
I know there are two modes while running spark applications on yarn cluster.
In yarn-cluster mode, the driver runs in the Application Master (inside the YARN cluster). In yarn-client mode, it runs on the client node from which the job is submitted.
I wanted to know what the advantages are of using one mode over the other, and which mode we should use under what circumstances.
There are two deploy modes that can be used to launch Spark applications on YARN.
Yarn-cluster: the Spark driver runs within the Hadoop cluster as a YARN Application Master and spins up Spark executors within YARN containers. This allows Spark applications to run within the Hadoop cluster and be completely decoupled from the workbench, which is used only for job submission. An example:
[terminal~]:cd $SPARK_HOME
[terminal~]:./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn \
  --deploy-mode cluster --num-executors 3 --driver-memory 1g --executor-memory 2g \
  --executor-cores 1 --queue thequeue $SPARK_HOME/examples/target/spark-examples_*-1.2.1.jar
Note that in the example above, the --queue option is used to specify the Hadoop queue to which the application is submitted.
Yarn-client: the Spark driver runs on the workbench itself, with the Application Master operating in a reduced role: it only requests resources from YARN so that the Spark executors reside in the Hadoop cluster within YARN containers. This provides an interactive environment with distributed operations. Here's an example of invoking Spark in this mode while ensuring it picks up the Hadoop LZO codec:
[terminal~]:cd $SPARK_HOME
[terminal~]:bin/spark-shell --master yarn --deploy-mode client --queue research \
  --driver-memory 512M \
  --driver-class-path /opt/hadoop/share/hadoop/mapreduce/lib/hadoop-lzo-0.4.18-201409171947.jar
So when you want an interactive environment for your job, use client mode; the yarn-client mode accepts commands from the spark-shell.
When you want your job to be decoupled from the workbench, use yarn-cluster mode.
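One way to see this distinction directly: spark-shell refuses to start in cluster deploy mode, since the interactive driver has to stay on the client (the exact error wording may vary between Spark versions):
[terminal~]:bin/spark-shell --master yarn --deploy-mode cluster
Error: Cluster deploy mode is not applicable to Spark shells.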
I have recently set up a Spark cluster on Amazon EMR with 1 master and 2 slaves.
I can run pyspark, and submit jobs with spark-submit.
However, when I create a standalone job, like job.py, I create a SparkContext, like so:
sc=SparkContext("local", "App Name")
This doesn't seem right, but I'm not sure what to put there.
When I submit the job, I am sure it is not utilizing the whole cluster.
If I want to run a job against my entire cluster, say 4 processes per slave, what do I have to
a) pass as arguments to spark-submit, and
b) pass as arguments to SparkContext() in the script itself?
You can create the Spark context like this (the master is not hard-coded here; it is supplied by spark-submit):
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName(appName)
sc = SparkContext(conf=conf)
and you have to submit the program with spark-submit. For a Spark standalone cluster, use the following command:
./bin/spark-submit --master spark://<sparkMasterIP>:7077 code.py
For a Mesos cluster:
./bin/spark-submit --master mesos://207.184.161.138:7077 code.py
For a YARN cluster:
./bin/spark-submit --master yarn --deploy-mode cluster code.py
When the master is YARN, the cluster location is read from the Hadoop configuration in HADOOP_CONF_DIR (or YARN_CONF_DIR).
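To get something like 4 processes per slave on YARN, one way is to request the executor size explicitly on spark-submit; a rough sketch (the numbers are illustrative, not tuned):
./bin/spark-submit --master yarn --deploy-mode cluster \
  --num-executors 2 --executor-cores 4 --executor-memory 4g code.py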