
GET /api/models

Retrieves the list of available Ollama models for script generation and the default model to use.

Response

status
string
"success" if models were retrieved successfully, "error" if Ollama is not running or unreachable.
models
array
Array of available model names. Example: ["llama3.1:8b", "llama3.1:70b", "mistral:7b"]
default
string
The default model name to use for generation. Example: "llama3.1:8b"
message
string
Only present on error responses. Describes why model fetching failed.

Example Request

curl http://localhost:8080/api/models

Example Response (Success)

{
  "status": "success",
  "models": [
    "llama3.1:8b",
    "llama3.1:70b",
    "mistral:7b",
    "phi3:mini"
  ],
  "default": "llama3.1:8b"
}

Example Response (Error)

When Ollama is not running or unreachable:
{
  "status": "error",
  "message": "Could not fetch Ollama models. Is Ollama running?",
  "models": [
    "llama3.1:8b"
  ],
  "default": "llama3.1:8b"
}
Even on error, the response includes a fallback model list from the OLLAMA_MODEL environment variable.
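
Because the endpoint returns a usable models/default pair in both cases, a client can handle either response shape with the same logic. Below is a minimal sketch; pick_model is a hypothetical helper, not part of the API:

```python
def pick_model(data: dict) -> str:
    """Choose a model name from an /api/models response payload.

    Works for both success and error payloads, since the endpoint
    includes a fallback "models" list and "default" even on error.
    """
    if data.get("status") == "error":
        # Surface the failure but keep going with the fallback model.
        print(f"Warning: {data.get('message', 'unknown error')}")
    # Prefer the server-advertised default; otherwise the first listed model.
    return data.get("default") or data["models"][0]
```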

Usage

The models returned by this endpoint can be used in the aiModel field when creating a generation job:
import requests

# Get available models
models_response = requests.get("http://localhost:8080/api/models")
default_model = models_response.json()["default"]

# Use in generation request
requests.post(
    "http://localhost:8080/api/generate",
    json={
        "videoSubject": "AI technology",
        "aiModel": default_model,  # Use the default model
        "paragraphNumber": 3
    }
)

Model Selection

Performance vs Quality Trade-off
  • Smaller models (e.g., llama3.1:8b, phi3:mini) are faster but may produce simpler scripts
  • Larger models (e.g., llama3.1:70b) produce higher quality scripts but take longer
  • The default model is usually a good balance for most use cases
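
One way to apply this trade-off programmatically is to rank the returned model names by their parameter-count tag. This is a rough sketch under the assumption that tags follow the common "8b"/"70b" convention; select_model and its heuristic are illustrative, not part of the API:

```python
def select_model(available: list[str], prefer_quality: bool = False) -> str:
    """Pick a model from the /api/models list by approximate size.

    prefer_quality=True favors larger (slower, higher-quality) models;
    False favors smaller (faster) ones. Tags without a parameter count
    (e.g. "mini") rank as smallest.
    """
    def size_key(name: str) -> float:
        tag = name.split(":")[-1]  # e.g. "8b", "70b", "mini"
        digits = "".join(ch for ch in tag if ch.isdigit() or ch == ".")
        return float(digits) if digits else 0.0

    ranked = sorted(available, key=size_key, reverse=prefer_quality)
    return ranked[0]
```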

Troubleshooting

If the endpoint returns an error:
  1. Check if Ollama is running:
    ollama list
    
  2. Verify the Ollama server is accessible:
    curl http://localhost:11434/api/tags
    
  3. Check the OLLAMA_MODEL environment variable in .env:
    OLLAMA_MODEL=llama3.1:8b
    
See the Ollama Models Guide for more details.
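
Step 2 above can also be automated in client code before calling /api/models. A minimal stdlib-only sketch (the helper name and default URL are assumptions; 11434 is Ollama's standard port):

```python
import urllib.request
import urllib.error


def ollama_reachable(base_url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if the Ollama server answers /api/tags."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: server not reachable.
        return False
```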
