NB: renaming or removing configurations may break items that reference them.
No runtime configuration named {{execConfig.name}} exists at the instance level.
Runtime configurations defined at the cluster level only override configurations defined in Administration.
Config keys
Define Spark configuration keys here.
Keys not listed here are inherited from your system-wide Spark configuration.
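For example, a runtime configuration might override executor sizing with a few keys (the values below are hypothetical; these are standard Spark property names):

```properties
# Hypothetical override values; keys not set here fall back to the
# system-wide Spark configuration.
spark.executor.memory = 4g
spark.executor.cores = 2
spark.sql.shuffle.partitions = 200
```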
Base URL of the image registry, in the form host/root. See the documentation for more information.
Action to run before pushing an image.
This is usually only required when running on Amazon EKS or Azure AKS
Absolute path to a shell script. It receives the repository URL, image name, and tag as arguments.
This script must support being called on an already-existing image/tag.
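A minimal sketch of such a pre-push script, assuming the calling convention described above (repository URL, image name, tag). The registry-specific step is shown only as a comment, since it depends on your environment (e.g. creating an ECR repository on EKS); the script must be safe to re-run on an existing image/tag:

```shell
#!/usr/bin/env bash
# Hypothetical pre-push hook. DSS-style invocation (assumed):
#   pre_push.sh <repository_url> <image_name> <tag>

pre_push() {
  local repo_url="$1" image="$2" tag="$3"
  local full_ref="${repo_url}/${image}:${tag}"

  # Log the image reference that is about to be pushed.
  echo "preparing push of ${full_ref}"

  # Registry-specific preparation would go here, and must be idempotent,
  # e.g. on Amazon EKS (hypothetical, environment-dependent):
  #   aws ecr create-repository --repository-name "${image}" 2>/dev/null || true
}

pre_push "$@"
```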
Kubernetes namespace to use. Variable expansion is supported
Spark-on-K8S jobs require end users to authenticate to K8S. Select how user jobs are authenticated and authorized.
Advanced settings
kubectl context to use (empty = use default). This setting is ignored when using dynamic K8S clusters
Path to the kubectl config file (empty = use default). This setting is ignored when using dynamic K8S clusters
Advanced settings
Leave empty to use the default image (see the documentation for more information about base images)
Name of the container for the Spark executor in the pod YAML (default is spark-executor)
Only connections for which the user has 'read connection details' rights will be considered
Comma-separated list of AWS connection names. The first connection on which the user has 'read connection details' rights will be used
Use the credentials from the specified connection to access the Glue catalog in the notebook
Only connections for which the user has 'read connection details' rights will be considered
Comma-separated list of Azure connection names. The first connection on which the user has 'read connection details' rights will be used
Only connections for which the user has 'read connection details' rights will be considered
Comma-separated list of GCP connection names. The first connection on which the user has 'read connection details' rights will be used
Cluster mode (Unsupported)
Upload DSS-related files to the cluster to speed up application launching in yarn-cluster mode
Name of the DSS connection used to push files to the cluster
Location inside the connection where files are pushed
Name of an SSH (DSS) connection to use to tunnel to the cluster. Use when DSS is behind a NAT from the cluster's point of view.
Hostname of the tunnel remote host (overrides the output of the 'hostname' command)
Interactive SparkSQL (notebooks & charts)
Default configuration (recipes & ML)
Applies only to new recipes and machine learning tasks. Uses "default" if empty
Should recipes and machine learning tasks be created in "Global metastore" mode? Applies only to new recipes and machine learning tasks