Controls reasoning depth: low values for faster, simpler answers; high values for deeper, more complex analysis. (Note: support for this setting varies by model.)
Specify a custom value. This setting is model-specific; refer to the provider's documentation for the correct format and range.
Maximum number of tokens to output per row (in English, 1 word ≈ 1.37 tokens)
Controls the randomness of responses: 0 for more predictable output, 0.5 to 1.0 for balanced creativity.
Number of most-probable tokens to sample from.
Sample from the top tokens whose probabilities add up to p.
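The two sampling filters above can be sketched over a toy next-token distribution. This is a minimal illustration, not any provider's implementation: real models filter logits before sampling, and the tokens and probabilities here are made up.

```python
# Toy next-token distribution (illustrative values only).
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}

def top_k_filter(probs, k):
    # Keep only the k most-probable tokens, then renormalize.
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {tok: p / total for tok, p in top}

def top_p_filter(probs, p):
    # Keep the smallest set of top tokens whose cumulative
    # probability reaches p, then renormalize.
    kept, cum = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = pr
        cum += pr
        if cum >= p:
            break
    total = sum(kept.values())
    return {tok: pr / total for tok, pr in kept.items()}
```

With this distribution, both `top_k_filter(probs, 2)` and `top_p_filter(probs, 0.8)` keep only `"the"` and `"a"`, since those two already account for 80% of the probability mass.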
Penalty applied to the next token, proportional to how many times that token has already appeared in the response and prompt.
Penalty applied to repeated tokens, regardless of how many times the token has already appeared in the response and prompt.
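The generation settings above typically map onto an OpenAI-style request body roughly as sketched below. The field names (`max_tokens`, `temperature`, `top_p`, `frequency_penalty`, `presence_penalty`) and their exact ranges vary by provider, so treat this as an assumption and check the provider's API documentation.

```python
# Hypothetical request body; field names follow the OpenAI-style
# convention but differ between providers.
request_body = {
    "model": "example-model",  # hypothetical model name
    "messages": [{"role": "user", "content": "Summarize this row."}],
    "max_tokens": 256,          # maximum tokens to output per row
    "temperature": 0.7,         # 0 = predictable, 0.5-1.0 = balanced creativity
    "top_p": 0.9,               # sample from tokens whose probabilities sum to 0.9
    "frequency_penalty": 0.5,   # grows with each repeat of a token
    "presence_penalty": 0.5,    # flat penalty once a token has appeared at all
}
```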
Request that the model generate a valid JSON response, for models that support it.
⚠️ Must be a valid JSON string. Invalid JSON won't be saved.
Define the expected JSON output. If you leave this blank, the model will return valid JSON without a guaranteed structure.
Force the model to adhere to the schema (for OpenAI models only)
Attempt to adjust the schema to improve compatibility across different providers.
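A schema for the field above might look like the following sketch. The property names (`title`, `rating`) are purely illustrative; the point is that the value pasted into the setting must itself be valid JSON, or it will not be saved.

```python
import json

# Hypothetical schema; "title" and "rating" are illustrative field names.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "rating": {"type": "number"},
    },
    "required": ["title", "rating"],
    # Some strict-adherence modes require disallowing extra keys.
    "additionalProperties": False,
}

schema_text = json.dumps(schema)      # the string pasted into the setting
assert json.loads(schema_text) == schema  # round-trips, so it is valid JSON
```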