- Anatomy of a Spark Application
- Spark Applications Using the Standalone Scheduler
- Deployment Modes for Spark Applications Running on YARN
- Summary
Deployment Modes for Spark Applications Running on YARN
Two deployment modes can be used when submitting Spark applications to a YARN cluster: Client mode and Cluster mode. Let’s look at them now.
Client Mode
In Client mode, the Driver process runs on the client submitting the application. It is essentially unmanaged; if the Driver host fails, the application fails. Client mode is supported for both interactive shell sessions (pyspark, spark-shell, and so on) and non-interactive application submission (spark-submit). Listing 3.2 shows how to start a pyspark session using the Client deployment mode.
Listing 3.2 YARN Client Deployment Mode
$SPARK_HOME/bin/pyspark --master yarn-client \
  --num-executors 1 \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 1

# OR

$SPARK_HOME/bin/pyspark --master yarn --deploy-mode client \
  --num-executors 1 \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 1
Figure 3.7 provides an overview of a Spark application running on YARN in Client mode.
Figure 3.7 Spark application running in YARN Client mode.
The steps shown in Figure 3.7 are described here:
1. The client submits a Spark application to the Cluster Manager (the YARN ResourceManager). The Driver process, SparkSession, and SparkContext are created and run on the client.
2. The ResourceManager assigns an ApplicationMaster (the Spark Master) for the application.
3. The ApplicationMaster requests containers to be used for Executors from the ResourceManager. With the containers assigned, the Executors spawn.
4. The Driver, located on the client, then communicates with the Executors to marshal processing of tasks and stages of the Spark program. The Driver returns progress, results, and status to the client.
The Client deployment mode is the simplest mode to use. However, it lacks the resiliency required for most production applications.
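You can observe the Driver's placement directly. In Client mode the Driver runs inside the SparkSubmit JVM on the submitting host, so a quick sketch of a check (assuming a client-mode session started as in Listing 3.2, and a JDK on the client providing the jps tool):

```shell
# With a client-mode pyspark session running, open another terminal
# on the same host and list local JVMs; the SparkSubmit process is
# the Driver. If this host fails, the application fails with it.
jps -lm | grep SparkSubmit
```

This is why Client mode is convenient for interactive work but fragile for long-running jobs: the application's fate is tied to a single, unmanaged client process.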
Cluster Mode
In contrast to the Client deployment mode, with a Spark application running in YARN Cluster mode, the Driver itself runs on the cluster as a subprocess of the ApplicationMaster. This provides resiliency: If the ApplicationMaster process hosting the Driver fails, it can be re-instantiated on another node in the cluster.
Listing 3.3 shows how to submit an application by using spark-submit and the YARN Cluster deployment mode. Because the Driver is an asynchronous process running in the cluster, Cluster mode is not supported for the interactive shell applications (pyspark and spark-shell).
Listing 3.3 YARN Cluster Deployment Mode
$SPARK_HOME/bin/spark-submit --master yarn-cluster \
  --num-executors 1 \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 1 \
  $SPARK_HOME/examples/src/main/python/pi.py 10000

# OR

$SPARK_HOME/bin/spark-submit --master yarn --deploy-mode cluster \
  --num-executors 1 \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 1 \
  $SPARK_HOME/examples/src/main/python/pi.py 10000
Figure 3.8 provides an overview of a Spark application running on YARN in Cluster mode.
Figure 3.8 Spark application running in YARN Cluster mode.
The steps shown in Figure 3.8 are described here:
1. The client, a user process that invokes spark-submit, submits a Spark application to the Cluster Manager (the YARN ResourceManager).
2. The ResourceManager assigns an ApplicationMaster (the Spark Master) for the application. The Driver process is created on the same cluster node.
3. The ApplicationMaster requests containers for Executors from the ResourceManager. Executors are spawned within the containers allocated to the ApplicationMaster by the ResourceManager. The Driver then communicates with the Executors to marshal processing of tasks and stages of the Spark program.
4. The Driver, running on a node in the cluster, returns progress, results, and status to the client.
The Spark application web UI, as shown previously, is available from the ApplicationMaster host in the cluster; a link to this user interface is available from the YARN ResourceManager UI.
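Because the Driver runs on a cluster node in this mode, its console output is not written to the client terminal; you retrieve it through YARN instead. A hedged sketch using the standard YARN CLI (the application ID shown is a placeholder for the one assigned to your application):

```shell
# List running applications on the cluster and note the application ID
yarn application -list -appStates RUNNING

# After the application completes, fetch its aggregated logs,
# which include the Driver's output (placeholder ID shown)
yarn logs -applicationId application_1234567890123_0001
```

Retrieving the Driver output this way assumes YARN log aggregation is enabled on the cluster, which is a common but not universal configuration.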
Local Mode Revisited
In Local mode, the Driver, the Master, and the Executors all run in a single JVM. As discussed earlier in this chapter, this is useful for development, unit testing, and debugging, but it has limited use for running production applications because it is not distributed and does not scale. Furthermore, failed tasks in a Spark application running in Local mode are not re-executed by default. You can override this behavior, however.
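One way to override the no-retry default is through the Local master URL itself: Spark accepts the form local[N,F], where N is the number of worker threads and F is the number of attempts allowed per task. A minimal sketch, assuming a standard Spark installation:

```shell
# local[2,3]: two worker threads, and each task may be attempted
# up to three times before the job fails (Local mode default is one attempt)
$SPARK_HOME/bin/pyspark --master "local[2,3]"
```

This is useful when developing code whose tasks can fail transiently, since it lets you exercise retry behavior without a cluster.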
When running Spark in Local mode, the application UI is available at http://localhost:4040. The Master and Worker UIs are not available when running in Local mode.