## Overview

Generate images from text prompts using any of LiteLLM's supported image generation providers. Responses are returned in the OpenAI format.
## Function Signature

```python
def image_generation(
    prompt: str,
    model: Optional[str] = None,
    n: Optional[int] = None,
    quality: Optional[Union[str, ImageGenerationRequestQuality]] = None,
    response_format: Optional[str] = None,
    size: Optional[str] = None,
    style: Optional[str] = None,
    user: Optional[str] = None,
    timeout: float = 600,
    api_key: Optional[str] = None,
    api_base: Optional[str] = None,
    api_version: Optional[str] = None,
    custom_llm_provider: Optional[str] = None,
    **kwargs,
) -> ImageResponse
```
## Parameters

### Required Parameters

**`prompt`** (str): Text description of the desired image(s).

```python
prompt = "A serene landscape with mountains and a lake at sunset"
```

### Optional Parameters

**`model`** (str): The model to use for image generation. Examples:
- `dall-e-3` (OpenAI)
- `dall-e-2` (OpenAI)
- `stability-ai/stable-diffusion-xl-base-1.0` (Bedrock)
- `imagegeneration@006` (Vertex AI)

**`n`** (int): Number of images to generate. Note: `dall-e-3` only supports `n=1`.

**`size`** (str): Size of generated images.
- DALL-E 3: `"1024x1024"` (default), `"1792x1024"`, `"1024x1792"`
- DALL-E 2: `"256x256"`, `"512x512"`, `"1024x1024"`

**`quality`** (str): Quality of the image (DALL-E 3 only). Options:
- `"standard"`: Standard quality
- `"hd"`: Higher quality, more detailed

**`style`** (str): Style of generated images (DALL-E 3 only). Options:
- `"vivid"`: Hyper-real and dramatic images
- `"natural"`: More natural, less hyper-real

**`response_format`** (str): Format of the returned image. Options:
- `"url"`: Returns a URL to the image
- `"b64_json"`: Returns the base64-encoded image

**`user`** (str): Unique identifier for your end-user.

**`timeout`** (float): Request timeout in seconds (default 600, i.e. 10 minutes).
### API Configuration

**`api_key`** (str): API key for the provider.

**`api_base`** (str): Base URL for the API endpoint.

**`api_version`** (str): API version to use (required for Azure OpenAI).

**`custom_llm_provider`** (str): Override provider detection.
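Instead of passing `api_key` on every call, provider credentials can also be supplied through environment variables; for example, OpenAI models read `OPENAI_API_KEY`. A minimal sketch (`"sk-placeholder"` stands in for a real key):

```python
import os

# Set the provider key once via the environment instead of per call.
# "sk-placeholder" is a stand-in, not a real key.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"

# With the variable set, no api_key argument is needed:
# import litellm
# response = litellm.image_generation(prompt="A red bicycle", model="dall-e-3")
```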
## Response

`image_generation` returns an `ImageResponse` with the following fields:

**`created`** (int): Unix timestamp of when the image was created.

**`data`** (list): List of generated image objects. Each object contains:
- `url`: URL of the generated image (if `response_format="url"`)
- `b64_json`: Base64-encoded image (if `response_format="b64_json"`)
- `revised_prompt`: The revised prompt used by DALL-E 3 (if applicable)
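Because each image object carries either `url` or `b64_json` depending on the requested `response_format`, a small helper can normalize access to both. `extract_image` below is a hypothetical convenience function, not part of LiteLLM:

```python
def extract_image(image_obj):
    """Return ("url", <url>) or ("b64_json", <data>) for a generated
    image object, depending on which field the provider populated."""
    url = getattr(image_obj, "url", None)
    if url:
        return ("url", url)
    return ("b64_json", getattr(image_obj, "b64_json", None))

# Usage: kind, payload = extract_image(response.data[0])
```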
## Usage Examples

### Basic Image Generation

```python
import litellm

response = litellm.image_generation(
    prompt="A cute baby sea otter",
    model="dall-e-3"
)

print(response.data[0].url)
```
### High Quality Image

```python
import litellm

response = litellm.image_generation(
    prompt="A futuristic cityscape at night",
    model="dall-e-3",
    quality="hd",
    size="1792x1024",
    style="vivid"
)

print(response.data[0].url)
print(response.data[0].revised_prompt)  # See how DALL-E revised your prompt
```
### Multiple Images (DALL-E 2)

```python
import litellm

response = litellm.image_generation(
    prompt="A white siamese cat",
    model="dall-e-2",
    n=4,
    size="512x512"
)

for i, image in enumerate(response.data):
    print(f"Image {i + 1}: {image.url}")
```
### Async Image Generation

```python
import litellm
import asyncio

async def main():
    response = await litellm.aimage_generation(
        prompt="A serene mountain landscape",
        model="dall-e-3"
    )
    print(response.data[0].url)

asyncio.run(main())
```
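Several `aimage_generation` calls can also be fanned out concurrently with `asyncio.gather`. The `gather_images` helper below is a sketch, not a LiteLLM API:

```python
import asyncio

async def gather_images(generate, prompts):
    """Run one image-generation coroutine per prompt concurrently
    and return the responses in prompt order."""
    return await asyncio.gather(*(generate(p) for p in prompts))

# Usage (assumes provider credentials are configured):
# import litellm
# responses = asyncio.run(gather_images(
#     lambda p: litellm.aimage_generation(prompt=p, model="dall-e-3"),
#     ["a red fox", "a blue whale"],
# ))
```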
### Base64 Response

```python
import litellm
import base64
from PIL import Image
from io import BytesIO

response = litellm.image_generation(
    prompt="A colorful abstract painting",
    model="dall-e-2",
    response_format="b64_json"
)

# Decode and save the image
image_data = base64.b64decode(response.data[0].b64_json)
image = Image.open(BytesIO(image_data))
image.save("generated_image.png")
```
## Provider Examples

### OpenAI DALL-E

```python
import litellm

# DALL-E 3
response = litellm.image_generation(
    prompt="A photorealistic cat",
    model="dall-e-3",
    size="1024x1024",
    quality="hd"
)

# DALL-E 2
response = litellm.image_generation(
    prompt="A photorealistic cat",
    model="dall-e-2",
    n=2,
    size="512x512"
)
```
### Azure OpenAI

```python
import litellm

response = litellm.image_generation(
    prompt="A beautiful sunset",
    model="azure/dall-e-3",
    api_key="your-azure-key",
    api_base="https://your-endpoint.openai.azure.com/",
    api_version="2024-02-01"
)
```
### AWS Bedrock

```python
import litellm

response = litellm.image_generation(
    prompt="A majestic eagle in flight",
    model="bedrock/stability.stable-diffusion-xl-v1",
    # Bedrock-specific params
    width=1024,
    height=1024,
    cfg_scale=7.0,
    steps=50
)
```
### Vertex AI

```python
import litellm

response = litellm.image_generation(
    prompt="A cyberpunk city",
    model="vertex_ai/imagegeneration@006",
    # Vertex-specific params
    number_of_images=1,
    aspect_ratio="1:1"
)
```
### Replicate

```python
import litellm

response = litellm.image_generation(
    prompt="A fantasy castle",
    model="replicate/stability-ai/sdxl",
    custom_llm_provider="replicate"
)
```
## Saving Images

### Save from URL

```python
import litellm
import requests
from PIL import Image
from io import BytesIO

response = litellm.image_generation(
    prompt="A peaceful garden",
    model="dall-e-3"
)

# Download and save
image_url = response.data[0].url
image_response = requests.get(image_url)
image = Image.open(BytesIO(image_response.content))
image.save("generated_image.png")
```
### Save from Base64

```python
import litellm
import base64

response = litellm.image_generation(
    prompt="An abstract artwork",
    model="dall-e-2",
    response_format="b64_json"
)

# Save directly
with open("image.png", "wb") as f:
    f.write(base64.b64decode(response.data[0].b64_json))
```
## Error Handling

```python
import litellm
from litellm import (
    BadRequestError,
    AuthenticationError,
    RateLimitError,
    ContentPolicyViolationError
)

try:
    response = litellm.image_generation(
        prompt="A beautiful landscape",
        model="dall-e-3"
    )
except ContentPolicyViolationError as e:
    print(f"Content policy violation: {e}")
except AuthenticationError as e:
    print(f"Authentication failed: {e}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except BadRequestError as e:
    print(f"Bad request: {e}")
except Exception as e:
    print(f"An error occurred: {e}")
```
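`RateLimitError` in particular is often transient, so wrapping the call in a retry with exponential backoff can help. The `with_retries` helper below is a sketch, not part of LiteLLM:

```python
import time

def with_retries(fn, *, retryable=(Exception,), max_attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on retryable exceptions.
    Re-raises the last exception once max_attempts is exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage (assumes provider credentials are configured):
# import litellm
# response = with_retries(
#     lambda: litellm.image_generation(prompt="A beautiful landscape", model="dall-e-3"),
#     retryable=(litellm.RateLimitError,),
# )
```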
## Supported Providers

LiteLLM supports image generation from:

- **OpenAI**: DALL-E 2, DALL-E 3
- **Azure OpenAI**: DALL-E 2, DALL-E 3
- **AWS Bedrock**: Stable Diffusion, Titan Image Generator
- **Google Vertex AI**: Imagen 2, Imagen 3
- **Replicate**: Stable Diffusion XL, Flux, and more
- **Together AI**: Various Stable Diffusion models

See Image Generation Providers for the complete list.