# GET /api/models

Retrieves the list of available Ollama models for script generation, along with the default model to use.

## Response

- Status: `"success"` if models were retrieved successfully, `"error"` if Ollama is not running or unreachable.
- Models: an array of available model names. Example: `["llama3.1:8b", "llama3.1:70b", "mistral:7b"]`
- Default model: the default model name to use for generation. Example: `"llama3.1:8b"`
- Error message: present only on error responses; describes why model fetching failed.
## Example Request
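The original request example did not survive extraction; a minimal sketch with curl, assuming the API server listens on `localhost:3000` (the host and port are assumptions, not confirmed by this document):

```shell
# Fetch the available models and the default model
# (base URL http://localhost:3000 is an assumption)
curl http://localhost:3000/api/models
```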
## Example Response (Success)
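The original example body was lost in extraction. A plausible success payload, using illustrative field names (`status`, `models`, `defaultModel` are assumptions) and the example values given above, might look like:

```json
{
  "status": "success",
  "models": ["llama3.1:8b", "llama3.1:70b", "mistral:7b"],
  "defaultModel": "llama3.1:8b"
}
```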
## Example Response (Error)
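The original error example was also lost. A plausible error payload, with illustrative field names and an illustrative error message (both are assumptions), might look like:

```json
{
  "status": "error",
  "models": ["llama3.1:8b"],
  "defaultModel": "llama3.1:8b",
  "error": "Failed to connect to Ollama"
}
```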
When Ollama is not running or unreachable, the endpoint returns an error response. Even on error, the response includes a fallback model list derived from the `OLLAMA_MODEL` environment variable.

## Usage
The models returned by this endpoint can be used in the `aiModel` field when creating a generation job.
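A sketch of passing a returned model name via `aiModel` when creating a generation job. The job-creation endpoint path (`/api/generate`), the `prompt` field, and the base URL are hypothetical; only the `aiModel` field name comes from this document:

```shell
# Create a generation job using a model name returned by GET /api/models
# (endpoint path and request fields other than aiModel are assumptions)
curl -X POST http://localhost:3000/api/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a product demo script", "aiModel": "llama3.1:8b"}'
```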
## Model Selection
## Troubleshooting
If the endpoint returns an error:

- Check if Ollama is running.
- Verify the Ollama server is accessible.
- Check the `OLLAMA_MODEL` environment variable in `.env`.
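The command snippets for these checks were lost in extraction. They can be sketched with the standard Ollama CLI and its default port 11434 (the exact commands the original used are assumptions):

```shell
# 1. Check if Ollama is running (lists installed models if the daemon is up)
ollama list

# 2. Verify the Ollama server is accessible on its default port
curl http://localhost:11434/api/tags

# 3. Check the OLLAMA_MODEL environment variable in .env
grep OLLAMA_MODEL .env
```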