Method Signature
def generate_content(
    self,
    *,
    model: str,
    contents: ContentListUnionDict,
    config: Optional[GenerateContentConfigOrDict] = None,
) -> GenerateContentResponse

async def generate_content(
    self,
    *,
    model: str,
    contents: ContentListUnionDict,
    config: Optional[GenerateContentConfigOrDict] = None,
) -> GenerateContentResponse
Description
Makes an API request to generate content using a Gemini model. Supports text-only and multimodal input/output, including images, audio, and video.
The method includes built-in support for automatic function calling (AFC) when tools are provided. MCP (Model Context Protocol) support is available as an experimental feature.
Parameters
model
str
required
The model to use for generation.
Vertex AI formats:
Model ID: 'gemini-2.0-flash'
Full resource name: 'projects/my-project/locations/us-central1/publishers/google/models/gemini-2.0-flash'
Partial resource name: 'publishers/google/models/gemini-2.0-flash'
Publisher/model: 'google/gemini-2.0-flash'
Gemini API formats:
Model ID: 'gemini-2.0-flash'
Model name: 'models/gemini-2.0-flash'
Tuned model: 'tunedModels/1234567890123456789'
contents
ContentListUnionDict
required
The conversation history or input prompt to generate content from. Can be:
A string: 'What is your name?'
A list of Content objects
A list of Part objects
Mixed content with text, images, video, and audio
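The accepted shapes of contents can be illustrated with the plain-dict form of ContentListUnionDict. The sketch below builds three equivalent-style inputs without making an API call; the bucket URI is a hypothetical placeholder:

```python
# A bare string is shorthand for a single user turn with one text part.
contents_as_string = 'What is your name?'

# The explicit dict form spells out the role and parts of each Content.
contents_as_dicts = [
    {
        'role': 'user',
        'parts': [{'text': 'What is your name?'}],
    }
]

# A multimodal turn mixes text and media parts within one Content.
# 'gs://my-bucket/image.jpg' is a placeholder, not a real file.
multimodal_contents = [
    {
        'role': 'user',
        'parts': [
            {'text': 'What is shown in this image?'},
            {'file_data': {'file_uri': 'gs://my-bucket/image.jpg',
                           'mime_type': 'image/jpeg'}},
        ],
    }
]
```

Any of these values can be passed directly as the contents argument.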
config
GenerateContentConfigOrDict
optional
Configuration for content generation. Notable fields:
system_instruction: System instructions to guide model behavior
temperature: Controls randomness in output (0.0 to 2.0). Higher values increase creativity.
top_p: Nucleus sampling threshold (0.0 to 1.0)
candidate_count: Number of response candidates to generate
max_output_tokens: Maximum number of tokens in the response
stop_sequences: Sequences that stop generation when encountered
response_mime_type: MIME type for structured output (e.g., 'application/json')
response_schema: Schema for structured JSON output
response_json_schema: JSON schema for structured output
safety_settings: Safety filter configurations
tools: Function calling tools or code execution tools
tool_config: Configuration for tool usage
cached_content: Name of cached content to use
response_modalities: Desired modalities in response (e.g., ['TEXT', 'IMAGE'])
thinking_config: Configuration for thinking process (experimental)
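Because GenerateContentConfigOrDict accepts a plain dict as well as a types.GenerateContentConfig, the generation fields above can be sketched as a dict. The values below are illustrative, not recommended defaults:

```python
# A sketch of a generation config in its dict form.
config = {
    'system_instruction': 'You are a concise assistant.',
    'temperature': 0.7,            # 0.0-2.0; higher values increase creativity
    'top_p': 0.95,                 # nucleus sampling threshold
    'candidate_count': 1,          # number of response candidates
    'max_output_tokens': 256,      # cap on response length
    'stop_sequences': ['\n\n'],    # generation halts on these strings
    'response_modalities': ['TEXT'],
}
```

This dict can be passed directly as the config argument of generate_content.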
Response
GenerateContentResponse
candidates: List of generated response candidates. Each Candidate includes:
finish_reason: Why generation stopped (e.g., 'STOP', 'MAX_TOKENS', 'SAFETY')
safety_ratings: Safety ratings for the candidate
citation_metadata: Citation information for grounded content
token_count: Number of tokens in the candidate
usage_metadata: Token usage information, including cached_content_token_count (tokens from cached content)
prompt_feedback: Feedback about the prompt (e.g., safety blocks)
text: Convenience property: text from the first candidate
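Navigating these fields looks like the helper below. It is a sketch that works on any object exposing the documented attributes (here exercised with a stand-in, not a live response), so the attribute paths can be read at a glance:

```python
def summarize_response(response) -> dict:
    """Collect commonly used fields from a GenerateContentResponse-like object."""
    first = response.candidates[0]
    return {
        'text': response.text,                     # convenience property
        'finish_reason': first.finish_reason,      # e.g. 'STOP'
        'cached_tokens': response.usage_metadata.cached_content_token_count,
    }
```

Calling summarize_response(response) on the return value of generate_content yields a small dict of the fields most examples inspect.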
Code Examples
Basic Text Generation
from google import genai

client = genai.Client(api_key='your-api-key')
response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='What is a good name for a flower shop?'
)
print(response.text)
# Output: **Elegant & Classic:**
# * The Dried Bloom
# * Everlasting Florals
Multimodal Input (Vertex AI)
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project='my-project', location='us-central1')
response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents=[
        types.Part.from_text(text='What is shown in this image?'),
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg'
        )
    ]
)
print(response.text)
# Output: The image shows freshly baked blueberry scones.
Structured JSON Output
response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='List 3 popular cookie recipes',
    config={
        'response_mime_type': 'application/json',
        'response_schema': {
            'type': 'object',
            'properties': {
                'recipes': {
                    'type': 'array',
                    'items': {
                        'type': 'object',
                        'properties': {
                            'name': {'type': 'string'},
                            'ingredients': {'type': 'array', 'items': {'type': 'string'}}
                        }
                    }
                }
            }
        }
    }
)
print(response.text)
# Output: {"recipes": [{"name": "Chocolate Chip", ...}]}
Function Calling (Automatic)
from google.genai import types

def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"Sunny, 72°F in {location}"

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='What is the weather in San Francisco?',
    config=types.GenerateContentConfig(
        tools=[get_weather],
    )
)
print(response.text)
# The function is automatically called and the result is used
# Output: The weather in San Francisco is sunny and 72°F.
Async Usage
import asyncio
from google import genai

client = genai.Client(api_key='your-api-key')

async def generate():
    response = await client.aio.models.generate_content(
        model='gemini-2.0-flash',
        contents='Write a haiku about programming'
    )
    print(response.text)

asyncio.run(generate())
Notes
The method automatically handles function calling when Python callables are provided in tools
MCP (Model Context Protocol) sessions are only supported in async methods
Use generate_content_stream for streaming responses
Multimodal input is supported for Gemini 2.0 and later models
Some configuration options are only available on Vertex AI or Gemini API
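The streaming note above can be sketched as a small helper. The helper itself only concatenates chunk text and is exercised here with stand-in chunks; the commented-out usage against generate_content_stream assumes a configured client and is not run:

```python
def collect_stream_text(chunks) -> str:
    """Concatenate the text of each streamed chunk, skipping chunks with no text."""
    return ''.join(chunk.text or '' for chunk in chunks)

# Usage against the live API (not run here; assumes a configured client):
# for chunk in client.models.generate_content_stream(
#     model='gemini-2.0-flash',
#     contents='Write a haiku about programming',
# ):
#     print(chunk.text, end='')
```

Accumulating chunk text this way reproduces what response.text would have returned from the non-streaming call.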