Method Signature

def generate_content(
    self,
    *,
    model: str,
    contents: ContentListUnionDict,
    config: Optional[GenerateContentConfigOrDict] = None,
) -> GenerateContentResponse
async def generate_content(
    self,
    *,
    model: str,
    contents: ContentListUnionDict,
    config: Optional[GenerateContentConfigOrDict] = None,
) -> GenerateContentResponse

Description

Makes an API request to generate content using a Gemini model. Supports text-only and multimodal input/output, including images, audio, and video. The method includes built-in support for automatic function calling (AFC) when tools are provided. MCP (Model Context Protocol) support is available as an experimental feature.

Parameters

model
str
required
The model to use for generation. Vertex AI formats:
  • Model ID: 'gemini-2.0-flash'
  • Full resource name: 'projects/my-project/locations/us-central1/publishers/google/models/gemini-2.0-flash'
  • Partial resource name: 'publishers/google/models/gemini-2.0-flash'
  • Publisher/model: 'google/gemini-2.0-flash'
Gemini API formats:
  • Model ID: 'gemini-2.0-flash'
  • Model name: 'models/gemini-2.0-flash'
  • Tuned model: 'tunedModels/1234567890123456789'
contents
ContentListUnionDict
required
The conversation history or input prompt to generate content from. Can be:
  • A string: 'What is your name?'
  • A list of Content objects
  • A list of Part objects
  • Mixed content with text, images, video, and audio
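The string and list forms above are interchangeable; as a sketch, the two most common shapes look like this (dict keys follow the SDK's ContentDict/PartDict types, shown in plain-dict form rather than as types.Content objects):

```python
# Shorthand form: a bare string is wrapped in a single user turn.
as_string = 'What is your name?'

# Fully expanded form: a list of Content dicts, each with a role
# and a list of Part dicts.
as_content_list = [
    {'role': 'user', 'parts': [{'text': 'What is your name?'}]},
]
```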
config
GenerateContentConfigOrDict
optional
Configuration for content generation (e.g., temperature, max_output_tokens, safety_settings, tools). Accepts a GenerateContentConfig object or an equivalent dict.
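As a sketch, the dict form of the config might look like the following; the field names follow GenerateContentConfig's snake_case attributes, and the values are illustrative rather than recommended defaults:

```python
# Illustrative GenerateContentConfig fields in dict form.
config = {
    'temperature': 0.7,            # sampling randomness
    'top_p': 0.95,                 # nucleus sampling threshold
    'max_output_tokens': 1024,     # cap on generated tokens
    'system_instruction': 'You are a concise assistant.',
}
```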

Response

candidates
list[Candidate]
List of generated response candidates
usage_metadata
UsageMetadata
Token usage information
prompt_feedback
PromptFeedback
Feedback about the prompt (e.g., safety blocks)
model_version
str
The model version used
text
str
Convenience property: text from the first candidate

Code Examples

Basic Text Generation

from google import genai

client = genai.Client(api_key='your-api-key')

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='What is a good name for a flower shop?'
)

print(response.text)
# Output: **Elegant & Classic:**
# * The Dried Bloom
# * Everlasting Florals

Multimodal Generation (Image Input)

from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project='my-project', location='us-central1')

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents=[
        types.Part.from_text(text='What is shown in this image?'),
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg'
        )
    ]
)

print(response.text)
# Output: The image shows freshly baked blueberry scones.

Structured JSON Output

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='List 3 popular cookie recipes',
    config={
        'response_mime_type': 'application/json',
        'response_schema': {
            'type': 'object',
            'properties': {
                'recipes': {
                    'type': 'array',
                    'items': {
                        'type': 'object',
                        'properties': {
                            'name': {'type': 'string'},
                            'ingredients': {'type': 'array', 'items': {'type': 'string'}}
                        }
                    }
                }
            }
        }
    }
)

print(response.text)
# Output: {"recipes": [{"name": "Chocolate Chip", ...}]}
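Because response_mime_type is application/json, response.text can be passed straight to json.loads. A minimal sketch, using a stand-in string in place of a live response:

```python
import json

# Stand-in for response.text from the structured-output call above.
response_text = (
    '{"recipes": [{"name": "Chocolate Chip",'
    ' "ingredients": ["flour", "sugar", "chocolate chips"]}]}'
)

data = json.loads(response_text)
first = data['recipes'][0]
print(first['name'])               # Chocolate Chip
print(len(first['ingredients']))   # 3
```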

Function Calling (Automatic)

from google.genai import types

def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"Sunny, 72°F in {location}"

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='What is the weather in San Francisco?',
    config=types.GenerateContentConfig(
        tools=[get_weather],
    )
)

print(response.text)
# The function is automatically called and the result is used
# Output: The weather in San Francisco is sunny and 72°F.
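If you want the model's function-call parts returned instead of having the SDK execute the tool, automatic function calling can be switched off via the automatic_function_calling field. A sketch using the dict form of the config (the disable flag corresponds to the SDK's AutomaticFunctionCallingConfig):

```python
def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"Sunny, 72°F in {location}"

# Keep the tool declared, but disable automatic execution so the
# response carries function_call parts for the caller to handle.
config = {
    'tools': [get_weather],
    'automatic_function_calling': {'disable': True},
}
```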

Async Usage

import asyncio
from google import genai

client = genai.Client(api_key='your-api-key')

async def generate():
    response = await client.aio.models.generate_content(
        model='gemini-2.0-flash',
        contents='Write a haiku about programming'
    )
    print(response.text)

asyncio.run(generate())

Notes

  • The method automatically handles function calling when Python callables are provided in tools
  • MCP (Model Context Protocol) sessions are only supported in async methods
  • Use generate_content_stream for streaming responses
  • Multimodal input is supported for Gemini 2.0 and later models
  • Some configuration options are only available on Vertex AI or Gemini API