Resources control

Concurrent jobs and activities

The maximum number of concurrent jobs. If more jobs are started, they will be pending. Use 0 to rely on activity limits only.
The max jobs value is high. This may reduce performance if too many jobs run concurrently and the instance is not correctly sized.
There is no limit on the number of jobs. This may reduce performance if too many jobs run concurrently and the instance is not correctly sized.
The max number of jobs is over the max number of activities and will be limited to {{generalSettings.maxRunningActivities}}.
The maximum number of concurrent activities across all jobs. Use 0 for "unlimited" (other limits still apply).
The maximum number of concurrent activities for each single job. Must be > 0.

Additional limits

Define additional limits here using a key/value syntax. The key can be a custom string, to add limits on plugin recipes. It can also be formatted using a "category/item" pattern to add limits on well-known categories such as 'user', 'project', 'recipeType' and 'tag'. For example, to limit the number of concurrent activities triggered by 'john' to 2, add a new key 'user/john' with a corresponding value of 2.
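As a sketch, a set of additional limits might look like the following (each line is one key/value entry; all keys and values here are hypothetical examples, not defaults):

```
user/john = 2          # at most 2 concurrent activities triggered by user 'john'
project/SALES = 5      # at most 5 concurrent activities in project 'SALES'
recipeType/python = 4  # at most 4 concurrent Python recipe activities
my-plugin-recipe = 1   # custom key limiting a plugin recipe
```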

Mail attachments limits

These settings control the maximum size of files attached by DSS to mails.

cgroups

These settings control how user processes are placed into cgroups for resource control.
Availability depends on your OS.

cgroups placements

In-memory machine learning

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2
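For instance, a path template could group processes per user and project. The path below is purely illustrative (the segment names are assumptions, and the actual cgroup hierarchy root depends on your OS setup); ${user} and ${projectKey} are substituted at process start:

```
DSS/${user}/${projectKey}
```

With this template, a process started by 'john' in project 'SALES' would be placed in the cgroup path DSS/john/SALES.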

Python + R recipes

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

PySpark + SparkR + Sparklyr recipes

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Scenarios (Python steps & triggers)

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Jupyter kernels (Python, R, Scala)

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

In-memory ML recipes (train, score)

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Python macros

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

RMarkdown builders

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Webapp backends

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Metrics & Checks

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Interactive statistics

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

In-memory statistics recipes (incl. PCA)

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Deployment hooks

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Dev lambda server

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Custom Python data access components (FS providers, datasets, exporters, formats)

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

Project standards

Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2

cgroups limits

Explore / Prepare memory limits

You should not usually modify these settings without input from Dataiku Support.
These settings control the maximum in-memory size of data samples taken for the Explore and Prepare screens, and the maximum in-memory size of data during and after processing by a prepare script (either in an analysis or in a preparation recipe).
These can be further reduced (but not increased) on a per-project basis.

SQL Result Set memory limits

You should not usually modify these settings without input from Dataiku Support.
These settings control the maximum size in memory of data retrieved from SQL result sets in a SQL notebook or a SQL scenario step.

Job Execution Kernels

You should not usually modify these settings without input from Dataiku Support.
These are advanced settings for tuning the processes that run jobs.
The minimal number of always-available Job Execution Kernels. Please read the documentation for more details.
Path in which to place this kind of process. Can use ${user} and ${projectKey}
Only one target can be specified with CGroups V2