Overview
The Documents API endpoint provides access to all documents (PDFs, Word files, spreadsheets, etc.) uploaded to the Wagtail CMS. You can retrieve document metadata and download links.
Endpoints
List All Documents
Retrieve a list of all documents:
curl http://localhost:8000/api/v2/documents/
Response
{
  "meta": {
    "total_count": 8
  },
  "items": [
    {
      "id": 1,
      "meta": {
        "type": "wagtaildocs.Document",
        "detail_url": "http://localhost:8000/api/v2/documents/1/"
      },
      "title": "Bakery Menu 2024"
    }
  ]
}
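As a sketch of consuming this response in Python, the helper below pulls the total count and the item titles out of a parsed list response; summarize_documents is an illustrative name, not part of the API:

```python
# Sample data shaped like the list response shown above.
response_data = {
    "meta": {"total_count": 8},
    "items": [
        {
            "id": 1,
            "meta": {
                "type": "wagtaildocs.Document",
                "detail_url": "http://localhost:8000/api/v2/documents/1/",
            },
            "title": "Bakery Menu 2024",
        }
    ],
}

def summarize_documents(data):
    """Return (total_count, list of titles) from a documents list response."""
    return data["meta"]["total_count"], [item["title"] for item in data["items"]]

total, titles = summarize_documents(response_data)
```

Note that total_count reports all matching documents, while items holds only the current page, so the two numbers can differ.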
Get Single Document
Retrieve a specific document by ID:
curl http://localhost:8000/api/v2/documents/1/
Response
{
  "id": 1,
  "meta": {
    "type": "wagtaildocs.Document",
    "detail_url": "http://localhost:8000/api/v2/documents/1/",
    "download_url": "http://localhost:8000/documents/1/bakery_menu_2024.pdf"
  },
  "title": "Bakery Menu 2024"
}
Query Parameters
| Parameter | Description |
|---|---|
| fields | Comma-separated list of fields to include. Use * for all fields |
| limit | Number of results per page (max 100) |
| offset | Number of results to skip |
| order | Field to order by. Prefix with - for descending (e.g., -created_at) |
| search | Search query to filter documents by title and tags |
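These parameters can be combined in a single request. As a small convenience sketch (build_params is a helper invented for this guide, not part of Wagtail), here is one way to assemble them for use with requests:

```python
def build_params(fields=None, limit=None, offset=None, order=None, search=None):
    """Assemble a query-parameter dict, dropping any unset values."""
    params = {"fields": fields, "limit": limit, "offset": offset,
              "order": order, "search": search}
    return {key: value for key, value in params.items() if value is not None}

# Ten most recent documents matching "menu", e.g.:
# requests.get("http://localhost:8000/api/v2/documents/",
#              params=build_params(search="menu", order="-created_at", limit=10))
params = build_params(search="menu", order="-created_at", limit=10)
```

Passing a dict via the params argument lets requests handle URL encoding for you.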
Response Fields
Default Fields
| Field | Description |
|---|---|
| id | Unique identifier for the document |
| meta | Metadata about the document |
| meta.type | The document model type (typically wagtaildocs.Document) |
| meta.detail_url | API URL for the full document details |
| meta.download_url | URL to download the document file |
All Fields
Request all fields using ?fields=*:
curl "http://localhost:8000/api/v2/documents/1/?fields=*"
Complete Response
{
  "id": 1,
  "meta": {
    "type": "wagtaildocs.Document",
    "detail_url": "http://localhost:8000/api/v2/documents/1/",
    "download_url": "http://localhost:8000/documents/1/bakery_menu_2024.pdf"
  },
  "title": "Bakery Menu 2024",
  "file": "/media/documents/bakery_menu_2024.pdf",
  "file_size": 1048576,
  "file_hash": "xyz789abc123",
  "created_at": "2024-01-15T10:30:00Z",
  "uploaded_by_user": null
}
| Field | Description |
|---|---|
| file | Relative path to the document file |
| file_size | Size of the document file in bytes |
| file_hash | Hash of the document file for duplicate detection |
| created_at | ISO 8601 timestamp when the document was uploaded |
| uploaded_by_user | Information about the user who uploaded the document (if available) |
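Since file_hash exists for duplicate detection, one way to use it client-side is to group documents by hash and flag any hash that appears more than once. This is a sketch; find_duplicates is a hypothetical helper, not part of the API:

```python
from collections import defaultdict

def find_duplicates(documents):
    """Group document titles by file_hash; keep hashes seen more than once."""
    by_hash = defaultdict(list)
    for doc in documents:
        file_hash = doc.get("file_hash")
        if file_hash:
            by_hash[file_hash].append(doc["title"])
    return {h: titles for h, titles in by_hash.items() if len(titles) > 1}

# Example: two documents share the same hash, so they are flagged.
docs = [
    {"title": "Bakery Menu 2024", "file_hash": "xyz789abc123"},
    {"title": "Menu (copy)", "file_hash": "xyz789abc123"},
    {"title": "Price List", "file_hash": "def456"},
]
duplicates = find_duplicates(docs)
```

To run this against a live instance, fetch the documents with ?fields=title,file_hash first.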
Downloading Documents
The download_url field in the meta object provides a direct link to download the document:
# Get document metadata
curl http://localhost:8000/api/v2/documents/1/ > document.json
# Extract download URL and download file
DOWNLOAD_URL=$(jq -r '.meta.download_url' document.json)
curl -O "$DOWNLOAD_URL"
Common Use Cases
List Recent Documents
Get the most recently uploaded documents:
curl "http://localhost:8000/api/v2/documents/?fields=*&order=-created_at&limit=10"
Search Documents
Find documents by title:
curl "http://localhost:8000/api/v2/documents/?search=menu"
Get Document Details
Retrieve selected details, such as the file size and upload date, by requesting only those fields:
curl "http://localhost:8000/api/v2/documents/1/?fields=title,file_size,created_at,file"
File Types
Wagtail supports various document formats:
| Category | File Types |
|---|---|
| PDF | .pdf |
| Word | .doc, .docx |
| Excel | .xls, .xlsx |
| PowerPoint | .ppt, .pptx |
| Text | .txt, .rtf |
| Archive | .zip, .tar, .gz |
| Other | Any file type allowed by configuration |
The allowed file types can be configured in Wagtail’s settings. Check with your administrator for the specific file types supported in your installation.
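The API itself does not filter by file type, but the file field exposes the path, so you can filter client-side by extension. A minimal sketch, assuming documents were fetched with the file field included (filter_by_extension is an illustrative helper, not part of Wagtail):

```python
def filter_by_extension(documents, extensions):
    """Keep documents whose file path ends with one of the given extensions."""
    suffixes = tuple(ext.lower() for ext in extensions)
    return [doc for doc in documents
            if doc.get("file", "").lower().endswith(suffixes)]

docs = [
    {"title": "Bakery Menu 2024", "file": "/media/documents/bakery_menu_2024.pdf"},
    {"title": "Recipes", "file": "/media/documents/recipes.docx"},
]
pdfs = filter_by_extension(docs, [".pdf"])
```

Lowercasing both sides makes the match case-insensitive, which matters if uploads mix .PDF and .pdf.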
File Sizes
The file_size field returns the size in bytes. Convert it to a human-readable format:
def format_file_size(size):
    """Convert a size in bytes to a human-readable string."""
    for unit in ['B', 'KB', 'MB', 'GB']:
        if size < 1024:
            return f"{size:.2f} {unit}"
        size /= 1024
    return f"{size:.2f} TB"

# Example
import requests

response = requests.get('http://localhost:8000/api/v2/documents/1/?fields=file_size')
file_size = response.json()['file_size']
print(format_file_size(file_size))  # Output: 1.00 MB
Pagination
Iterate through all documents:
import requests

def get_all_documents():
    """Fetch all documents using pagination"""
    documents = []
    offset = 0
    limit = 50
    while True:
        response = requests.get(
            'http://localhost:8000/api/v2/documents/',
            params={'limit': limit, 'offset': offset, 'fields': '*'}
        )
        data = response.json()
        documents.extend(data['items'])
        # Break if we've fetched all documents
        if len(data['items']) < limit:
            break
        offset += limit
    return documents

all_docs = get_all_documents()
print(f"Total documents: {len(all_docs)}")
Best Practices
Check File Size Before Download
For large files, check the file size before downloading:
import requests

response = requests.get('http://localhost:8000/api/v2/documents/1/?fields=file_size')
file_size_mb = response.json()['file_size'] / (1024 * 1024)
if file_size_mb > 10:
    print(f"Warning: Large file ({file_size_mb:.2f} MB)")
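For large files it also helps to stream the download rather than buffer the whole response in memory. A sketch using requests' stream=True; download_document is a hypothetical helper, not part of the API:

```python
import requests

def download_document(detail_url, dest_path, chunk_size=65536):
    """Stream a document to disk so large files never sit fully in memory."""
    # Fetch metadata first to get the download URL.
    meta_response = requests.get(detail_url, timeout=10)
    meta_response.raise_for_status()
    download_url = meta_response.json()["meta"]["download_url"]

    # Stream the file body chunk by chunk.
    with requests.get(download_url, stream=True, timeout=10) as file_response:
        file_response.raise_for_status()
        with open(dest_path, "wb") as f:
            for chunk in file_response.iter_content(chunk_size=chunk_size):
                f.write(chunk)
    return dest_path

# Usage (against a running instance):
# download_document("http://localhost:8000/api/v2/documents/1/", "document.pdf")
```

The with block around the streaming response ensures the connection is released even if writing fails partway through.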
Use Specific Fields
Only request the fields you need:
# Good: Request specific fields
curl "http://localhost:8000/api/v2/documents/?fields=title,file_size,created_at"
# Avoid: Requesting all fields when you don't need them
curl "http://localhost:8000/api/v2/documents/?fields=*"
Handle Download Errors
Always handle potential download errors:
import requests

try:
    doc_response = requests.get('http://localhost:8000/api/v2/documents/1/')
    doc_response.raise_for_status()
    download_url = doc_response.json()['meta']['download_url']

    file_response = requests.get(download_url)
    file_response.raise_for_status()

    with open('document.pdf', 'wb') as f:
        f.write(file_response.content)
except requests.exceptions.RequestException as e:
    print(f"Download failed: {e}")
Next Steps