Retrieve detailed information about a model, including its Modelfile, parameters, template, system prompt, and architecture details.

Request

Endpoint

POST /api/show

Request Body

  • model (string, required): Name of the model to show information for.
  • verbose (boolean, default: false): Include tensor information in the response; this significantly increases response size.
  • system (string): Override the system prompt for display purposes.
  • template (string): Deprecated.
  • options (object): Model options to display.

Response

Response Fields

  • modelfile (string): Complete Modelfile that can be used to recreate the model.
  • parameters (string): Model parameters formatted as one name/value pair per line.
  • template (string): Prompt template used by the model.
  • system (string): System prompt configured for the model.
  • details (object): Model architecture and metadata.
  • model_info (object): Low-level model metadata from the GGUF/config (key-value pairs).
  • projector_info (object): Vision projector metadata (for multimodal models).
  • messages (array): Pre-configured messages embedded in the model.
  • license (string): License information for the model.
  • capabilities (array): Capabilities of the model (e.g., ["completion", "vision", "tools"]).
  • modified_at (string): Timestamp of the last modification (ISO 8601).
  • remote_model (string): Upstream model name (for remote models).
  • remote_host (string): Upstream Ollama host URL (for remote models).
  • renderer (string): Template renderer used by the model.
  • parser (string): Output parser used by the model.
  • requires (string): Minimum Ollama version required.
  • tensors (array): Tensor information (only present when verbose is true).
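The capabilities array is useful for feature detection before making follow-up requests, for example checking for "tools" before attempting tool calling. A minimal sketch, assuming a running Ollama server on the default port (the function names are illustrative, not part of the API):

```python
import requests

def fetch_show(model_name, host="http://localhost:11434"):
    """POST /api/show for a model and return the parsed JSON response."""
    response = requests.post(f"{host}/api/show", json={"model": model_name})
    response.raise_for_status()
    return response.json()

def supports(info, capability):
    """True if a parsed /api/show response lists the given capability."""
    return capability in info.get("capabilities", [])

# Usage (requires a running Ollama server):
# info = fetch_show("llama3.2")
# if supports(info, "tools"):
#     print("model supports tool calling")
```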

Examples

Basic Model Information

curl http://localhost:11434/api/show -d '{
  "model": "llama3.2"
}'

Example Response

{
  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this, replace FROM with:\n# FROM llama3.2\n\nFROM /path/to/model\nTEMPLATE \"\"\"{{ if .System }}<|start_header_id|>system<|end_header_id|>\n\n{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>\n\n{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>\n\n{{ .Response }}<|eot_id|>\"\"\"\nPARAMETER stop <|start_header_id|>\nPARAMETER stop <|end_header_id|>\nPARAMETER stop <|eot_id|>\n",
  "parameters": "stop                          <|start_header_id|>\nstop                          <|end_header_id|>\nstop                          <|eot_id|>",
  "template": "{{ if .System }}<|start_header_id|>system<|end_header_id|>\n\n{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>\n\n{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>\n\n{{ .Response }}<|eot_id|>",
  "details": {
    "parent_model": "",
    "format": "gguf",
    "family": "llama",
    "families": ["llama"],
    "parameter_size": "3B",
    "quantization_level": "Q4_K_M"
  },
  "capabilities": ["completion"],
  "model_info": {
    "general.architecture": "llama",
    "general.file_type": "Q4_K_M",
    "general.parameter_count": 3213052928,
    "llama.attention.head_count": 32,
    "llama.attention.head_count_kv": 8,
    "llama.block_count": 28,
    "llama.context_length": 131072,
    "llama.embedding_length": 3072
  },
  "modified_at": "2024-02-24T12:34:56.789Z"
}
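Because parameters is returned as a flat string with one name/value pair per line (and names such as stop may repeat, as in the response above), extracting individual values takes a small amount of parsing. One way to do it, sketched below (parse_parameters is an illustrative helper, not part of the API):

```python
def parse_parameters(parameters):
    """Parse the 'parameters' string from /api/show into a dict.

    Each non-empty line is a name followed by whitespace and a value.
    Names like 'stop' can appear multiple times, so values are
    collected into lists.
    """
    parsed = {}
    for line in parameters.splitlines():
        line = line.strip()
        if not line:
            continue
        name, _, value = line.partition(" ")
        parsed.setdefault(name, []).append(value.strip())
    return parsed
```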

Verbose Mode (Including Tensors)

curl http://localhost:11434/api/show -d '{
  "model": "llama3.2",
  "verbose": true
}'

Extract Specific Information (Python)

import requests

def get_model_context_length(model_name):
    response = requests.post(
        'http://localhost:11434/api/show',
        json={'model': model_name}
    )
    info = response.json()
    
    # Try to extract context length from model_info
    model_info = info.get('model_info', {})
    family = info['details']['family']
    
    context_key = f"{family}.context_length"
    return model_info.get(context_key, 'Unknown')

context = get_model_context_length('llama3.2')
print(f"Context length: {context}")

Display Modelfile (JavaScript)

const response = await fetch('http://localhost:11434/api/show', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'llama3.2' })
});

const info = await response.json();
console.log('=== Modelfile ===');
console.log(info.modelfile);

Error Responses

  • error (string): Description of the error.

Common Errors

  • 400 Bad Request: Invalid model name or request format
  • 404 Not Found: Model not found
  • 500 Internal Server Error: Error reading model information
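On failure the response body is a JSON object with a single error field. A small helper that maps the common statuses above onto Python exceptions might look like this (the names are illustrative; pair it with any HTTP client):

```python
def raise_for_show_error(status_code, body):
    """Raise a descriptive exception for a failed /api/show call.

    status_code is the HTTP status; body is the parsed JSON error
    response, which carries a single "error" field.
    """
    message = body.get("error", "unknown error")
    if status_code == 404:
        raise LookupError(f"model not found: {message}")
    if status_code >= 400:
        raise RuntimeError(f"/api/show failed ({status_code}): {message}")
```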
Notes

The modelfile field contains a complete, regenerated Modelfile that can be used with ollama create to recreate the model.

Setting verbose: true includes detailed tensor information, which can make the response very large (hundreds of megabytes for large models).
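Putting that first note into code: a sketch that writes the modelfile field to disk so the model can be rebuilt with ollama create my-model -f Modelfile (save_modelfile is an illustrative helper, not part of the API):

```python
from pathlib import Path

def save_modelfile(info, path="Modelfile"):
    """Write the 'modelfile' field of a parsed /api/show response to disk,
    ready for rebuilding with `ollama create <name> -f Modelfile`."""
    Path(path).write_text(info["modelfile"])
    return path

# Usage (requires a running Ollama server):
# import requests
# info = requests.post("http://localhost:11434/api/show",
#                      json={"model": "llama3.2"}).json()
# save_modelfile(info)
```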
