{{printWeightedIfWeighted()}} {{modelData.modeling.metrics.evaluationMetric | mlMetricName : modelData.modeling.metrics }}
Warning: your model has a large number of classes, so the data was downsampled to 30k records to compute this metric.
Model evaluation
Model evaluation ID
Evaluation ID
Evaluate recipe configuration
Skip scoring true
Skip performance metrics computation {{ evaluationDetails.evaluation.evaluateRecipeParams.dontComputePerformance }}
Is proba aware {{ evaluationDetails.evaluation.evaluateRecipeParams.isProbaAware }}
Feature type
  • {{ feature.name }} {{ feature.role | lowercase }} : {{ feature.type | lowercase }}
  • ... {{ evaluationDetails.evaluation.evaluateRecipeParams.features.length - 4 }} other features
  • {{ feature.name }} {{ feature.role | lowercase }} : {{ feature.type | lowercase }}
Class definition
  • {{ class }}
  • ... {{ evaluationDetails.evaluation.evaluateRecipeParams.classes.length - 4 }} other classes
  • {{ class }}
Probability mapping
  • {{ proba.key }} {{ proba.value }}
  • ... {{ evaluationDetails.evaluation.evaluateRecipeParams.probas.length - 4 }} other probability mappings
  • {{ proba.key }} {{ proba.value }}
Model
Model ID
Model type
{{ isCausalPrediction() | targetRoleName | capitalize }}
Treatment
Control value
Classes
{{ modelClass }} (preferred class)
Backend
Algorithm {{ modelData.modeling.meta_learner | niceConst: '-' }} | {{ modelData.modeling.algorithm | niceConst }}
Trained on
Columns
Data set rows
Train set rows
Test set rows
Number of custom folds
Number of folds
Weighting method
Sample weights variable
Time variable
Calibration method
Epochs scheduled
Epochs trained
Epochs until best model {{modelData.actualParams.resolved.keras.keptModelEpoch + 1}}
Evaluated model
Forecast horizon {{ prettyTimeSteps(modelData.coreParams.predictionLength * modelData.coreParams.timestepParams.numberOfTimeunits, modelData.coreParams.timestepParams.timeunit) }}
Code Env
Python version
Model ID
Algorithm Imported from MLflow External Model
Protocol {{modelData.proxyModelConfiguration.protocol}}
Input Format {{ modelData.inputFormatDisplayName }}
Output Format {{ modelData.outputFormatDisplayName }}
Connection
Model type
{{ isCausalPrediction() | targetRoleName | capitalize }}
Classes
{{ modelClass }}
Evaluated model
Imported / Created
From Databricks Connection {{modelData.mlflowOrigin.connectionName}}
From Databricks Source Model Registry Unity Catalog
From Databricks Model Name {{modelData.mlflowOrigin.modelName}}
From Databricks Model Version {{modelData.mlflowOrigin.modelVersion}}
From Experiment {{modelData.mlflowOrigin.experimentId}}
From Run {{modelData.mlflowOrigin.runId}}
From Artifact {{modelData.mlflowOrigin.artifactURI}}{{modelData.mlflowOrigin.modelSubfolder ? '/' + modelData.mlflowOrigin.modelSubfolder: ''}}
Code Env {{ modelData.coreParams.executionParams.envName || 'DSS builtin env' }}
The model may not work with the configured code environment: {{ modelData.mlFlowCompatibilityInfo.reasons.join(' ') }}.
{{ modelData.mlFlowCompatibilityInfo.reasons.join(' ') }}.
The selected code environment does not meet the requirements: {{ modelData.mlFlowCompatibilityInfo.reasons.join(' ') }}.
Python version
{{hasNoAssociatedModelText()}}
No model was referenced in the evaluation recipe configuration.
Evaluation dataset
Evaluated dataset
Sample row count
Partitions
  • {{ partition }}
Partition count
Metrics
{{cur.label}} {{uiState.$formattedMetrics[cur.code]}}
{{customMetricResult.name}}
Evaluation diagnostics
{{message}}
Nothing to report
Metadata
Optional. Informative labels for the model. The labels model:algorithm, model:meta-learner, model:date, model:name, trainDataset:dataset-name, testDataset:dataset-name, evaluation:date, and evaluationDataset:dataset-name are added automatically.
Metadata
Optional. Informative labels for the model evaluation.
  • Dataset
  • Model
  • Evaluation
  • Custom