You do not yet have any connection for {{dataset.type|datasetTypeToName}}. You must create a connection before you can create datasets of this type.
Leave blank to use the Project ID associated with the connection
Leave blank to use the Snowflake database associated with the connection
Leave blank to use the MS SQL Server database associated with the connection
Leave blank to use the Databricks catalog associated with the connection
Leave blank to use the catalog associated with the connection
Normalize floating-point values (force '42' to '42.0')

Teradata settings

Greenplum settings

For partitioned datasets, the partitioning dimensions are always prepended to the sort keys
Use a hash-based distribution
(Azure documentation on distributed tables)
Forbid writing to the BigQuery table if the dataset is partitioned and its partitioning does not match the BigQuery table's partitioning configuration.
Partitioning configuration mismatch between the dataset and the BigQuery native partitioning.
Partitioning configuration mismatch between the dataset and the BigQuery native partitioning. Writing to the table is forbidden and will fail. Configure BigQuery native partitioning or disable Partitioning consistency.
(inclusive)
(exclusive)
(must be a positive integer)
Enable this if the BigQuery table(s) require a partition filter and errors occur when listing partitions. Note that this may increase costs.
Note: $DKU_CREATE_TABLE_FIELDS will be replaced by the list of fields and types from the schema
These statements will be executed before Data Science Studio writes data to the table. For example, you might want to temporarily drop some indexes.
These statements will be executed after Data Science Studio writes data to the table. For example, you might want to recreate indexes.
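A minimal sketch of what such pre-write and post-write statements could look like, assuming a hypothetical table my_table and index my_table_idx (names and columns are illustrative, not part of the product):

```sql
-- Pre-write statements (hypothetical example): drop an index so the bulk write is faster
DROP INDEX IF EXISTS my_table_idx;

-- Post-write statements (hypothetical example): recreate the index after the data is loaded
CREATE INDEX my_table_idx ON my_table (customer_id);
```

The exact DDL syntax depends on the target database; adapt the statements to your SQL dialect.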

SQL Spark integration