General

{{(lightStatus.infraBasicInfo.type === 'SAGEMAKER') ? 'SageMaker' : 'AzureML'}} endpoint name editing is disabled in this infrastructure.
You can choose to deploy using this name for the created SageMaker endpoint. The name must be unique within an AWS Region in your AWS account. It can only contain letters, digits, and hyphens. Once deployed, it cannot be changed.
Leave empty to use an autogenerated name for the endpoint.
You can choose to deploy using this name for the created AzureML Online endpoint. The name must be unique in the Azure region. For more information on the naming rules, see managed online endpoint limits. Once deployed, it cannot be changed. Leave empty to use an autogenerated name for the endpoint.
Vertex AI endpoint id editing is disabled in this infrastructure.
You can choose to deploy using this id for the created Vertex AI endpoint. The id must be unique within a GCP Region and Project. It can only contain lowercase letters, digits, and hyphens; the first character must be a letter, and the id can be at most 63 characters long. Once deployed, it cannot be changed.
Leave empty to use an autogenerated id for the endpoint.
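As a sketch, the id rules above can be pre-checked locally with a regular expression (the function name is illustrative; authoritative validation is performed by Vertex AI):

```python
import re

# Vertex AI endpoint id rules described above: lowercase letters,
# digits and hyphens only; the first character must be a letter;
# at most 63 characters in total.
VERTEX_ENDPOINT_ID_RE = re.compile(r"^[a-z][a-z0-9-]{0,62}$")

def is_valid_vertex_endpoint_id(endpoint_id: str) -> bool:
    """Return True if endpoint_id satisfies the stated naming rules."""
    return VERTEX_ENDPOINT_ID_RE.fullmatch(endpoint_id) is not None
```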

Unity Catalog

Editing of Databricks Unity Catalog settings is disabled in this infrastructure.
If unchecked, settings defined at the infrastructure level are used.
If unchecked, models will be added to Workspace Model Registry instead of Unity Catalog.

Names

The default catalog/schema name is not defined in the infrastructure settings
Fetching models...
Optional. If empty, an autogenerated value containing the entry endpoint's model name will be used. Model names must consist of up to 61 alphanumeric characters, hyphens, and underscores.
The default experiment location is not defined in the infrastructure settings
Fetching experiments...
Optional. If empty, an autogenerated value containing the deployment name will be used. The value represents a path to an existing Databricks Workspace directory where experiments will be stored.
Databricks endpoint name editing is disabled in this infrastructure.

Optional. If empty, an autogenerated value containing the deployment name will be used. The name must be unique within the scope of Databricks. Once deployed, it cannot be changed.
Endpoint names must be up to 63 characters long and consist of alphanumeric characters, hyphens, and underscores; hyphens and underscores cannot be leading or trailing.
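The endpoint name rule can be sketched as a local pre-check (an illustration only; the actual validation is done by Databricks):

```python
import re

# Databricks endpoint name rule described above: up to 63 characters,
# alphanumeric plus hyphens/underscores, where hyphens and underscores
# may not be leading or trailing.
ENDPOINT_NAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9_-]{0,61}[A-Za-z0-9])?$")

def is_valid_endpoint_name(name: str) -> bool:
    """Return True if name satisfies the stated naming rules."""
    return ENDPOINT_NAME_RE.fullmatch(name) is not None
```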
Editing of Snowpark names is disabled in this infrastructure.
You can choose to deploy using this name for the created Snowpark service. Once deployed, it cannot be changed.
Leave empty to use an autogenerated name for the service.
You can choose to deploy using this name for the created Snowpark UDF. Once deployed, it cannot be changed.
Leave empty to use an autogenerated name for the UDF.

Service

You can add Kubernetes annotations to the generated service here. Add one line per annotation, as it would appear in the service template (without indentation).
You can choose to deploy this service under a different identifier in the API nodes. This "deployed service id" will appear in the URL. This is useful with static API node deployments, to deploy the same service several times to a given infrastructure. Do not change it once a deployment has been activated. Leave empty to use the original API service id ({{deploymentSettings.publishedServiceId}}).
Additional properties about this deployment.

Additional environment variables

Environment variables to define for this deployment.
Defining additional environment variables at the deployment level is disabled for this infrastructure.

Additional labels

Labels to add to this deployment.
Defining additional labels at the deployment level is disabled for this infrastructure.

Additional annotations

Annotations to add to this deployment.
Defining additional annotations at the deployment level is disabled for this infrastructure.

Authorization

These settings control how access control is enforced on the API.
Authorization method for protecting the API endpoints

Preview of the authorization method settings from the API designer

Query through deployer

Group name Allow  
No group has authorization to query the endpoint through the deployer
{{perm.group}}
To manage groups, go to DSS global administration.

Endpoints tuning

These settings allow you to control the performance tuning on a per-endpoint basis.
These settings override global settings defined at the infrastructure level.
Please refer to DSS documentation for details on these settings.
There is no endpoint configuration set up yet.
{{endpoint.endpointId}}
Unique identifier of the endpoint
Maximum number of allocated pipelines
Minimum number of allocated pipelines
Cruise number of allocated pipelines: the number of allocated pipelines kept before a freed pipeline is destroyed for a new one. Must be between the minimum and maximum number of pipelines.
Maximum number of queued requests when all pipelines are busy
Timeout for requests in milliseconds
SQL connection eviction time in milliseconds
SQL eviction interval in milliseconds
SQL max pooled connections

Audit logging

If you want to report queries from this deployment, you can specify a routing key to dispatch events from multiple deployments.
If unchecked, logging settings defined at the infrastructure level are used
To send logs to the Snowflake Event Table, you must enable the redirection of logs to standard output below.
Event server is managed by Fleet Manager
Event server is managed by Fleet Manager, but it is not currently running.
Automatically configure audit logging on the API nodes to send events for this deployment to a DSS Event Server.
Automatically configure audit logging on the API nodes to send events for this deployment to Kafka.
Automatically configure audit logging on the API nodes to send events for this deployment through an object storage or a filesystem connection.
A value of 0 means flush after each message; this is not recommended.
A value of 0 disables interval-based auto-flush.
JSON syntax error!
{{variablesEditor.vars.dkuJSONError}}