Creates an edited or extended image given one or more source images and a prompt. This endpoint supports GPT Image models (gpt-image-1.5, gpt-image-1, gpt-image-1-mini, and chatgpt-image-latest) and dall-e-2.
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where the image should be edited. If multiple images are provided, the mask is applied to the first image. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
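The mask constraints above (valid PNG, under 4MB, same dimensions as the image it is applied to) can be checked client-side before uploading. A minimal sketch using only the standard library; `png_dimensions` and `mask_matches_image` are hypothetical helper names, not part of the SDK:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data: bytes) -> tuple[int, int]:
    """Read width and height from a PNG's IHDR chunk (the first chunk after the signature)."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    # Chunk layout: 4-byte length, 4-byte type, payload. IHDR's payload starts
    # with big-endian 4-byte width and height.
    if data[12:16] != b"IHDR":
        raise ValueError("missing IHDR chunk")
    width, height = struct.unpack(">II", data[16:24])
    return width, height

def mask_matches_image(mask_bytes: bytes, image_bytes: bytes,
                       max_bytes: int = 4 * 1024 * 1024) -> bool:
    """True if the mask is a PNG under 4MB with the same dimensions as the image."""
    return (
        len(mask_bytes) < max_bytes
        and png_dimensions(mask_bytes) == png_dimensions(image_bytes)
    )
```

Running this check locally avoids a round trip to the API for a mask the server would reject anyway.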
The format in which the generated images are returned. Must be one of url or b64_json. Only supported for dall-e-2 (default is url); GPT image models always return base64-encoded images.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. Supported for gpt-image-1, gpt-image-1.5, and later models; not supported for gpt-image-1-mini.
The number of partial images to generate. Used for streaming responses. Value must be between 0 and 3. When set to 0, the response will be a single image sent in one streaming event.
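When streaming, the response arrives as a sequence of events rather than a single payload; with partial_images between 1 and 3, intermediate frames precede the final image. A sketch of consuming such a stream, assuming event objects with a `type` string and base64 payload modeled loosely on the SDK's streaming events; the `ImageEditEvent` stand-in and the exact type strings here are illustrative assumptions:

```python
import base64
from dataclasses import dataclass

@dataclass
class ImageEditEvent:
    # Hypothetical stand-in for an SDK streaming event; field and type names
    # are assumptions for illustration.
    type: str
    b64_json: str
    partial_image_index: int = 0

def handle_stream(events):
    """Collect decoded partial frames and the final image from an edit stream."""
    partials, final = [], None
    for event in events:
        if event.type == "image_edit.partial_image":
            partials.append(base64.b64decode(event.b64_json))
        elif event.type == "image_edit.completed":
            final = base64.b64decode(event.b64_json)
    return partials, final
```

Partial frames are useful for showing progressive previews in a UI while the full-quality result is still being generated.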
import base64

from openai import OpenAI

client = OpenAI()

response = client.images.edit(
    model="gpt-image-1",
    image=[
        open("image1.png", "rb"),
        open("image2.png", "rb"),
        open("image3.png", "rb"),
    ],
    prompt="Apply a vintage filter to these images",
    input_fidelity="high",
)

for i, image in enumerate(response.data):
    image_data = base64.b64decode(image.b64_json)
    with open(f"edited_{i}.png", "wb") as f:
        f.write(image_data)