ChatInterface is Gradio’s high-level abstraction specifically designed for creating chatbot UIs. It provides a simple way to wrap chat functions and automatically handles message history, user interactions, and streaming responses.
What is ChatInterface?
ChatInterface is similar to Interface but specialized for conversational AI applications. It automatically provides:
- A chat display with message history
- An input textbox (or multimodal input)
- Submit, retry, undo, and clear buttons
- Example prompts
- Streaming support for real-time responses
Basic usage
The simplest chatbot requires just a function that takes a message and history:
```python
import gradio as gr

def echo(message, history):
    return message

demo = gr.ChatInterface(fn=echo)
demo.launch()
```
Your function receives:
- message: The user's current message (string)
- history: List of previous messages in OpenAI format

It should return one of:
- A string response
- A Gradio component (like an image)
- A complete message dict
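As a sketch of the third option, a function can return a complete OpenAI-style message dict instead of a bare string. Written here without the UI so it can be tested directly (the extra-field comment is an assumption about message-dict support, not a guarantee for every version):

```python
def echo_as_dict(message, history):
    # Return a complete OpenAI-style message dict instead of a bare string.
    # Extra fields (e.g. metadata) can ride along in versions that support them.
    return {"role": "assistant", "content": f"You said: {message}"}

reply = echo_as_dict("Hello", [])
```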
Message history format
The history parameter is a list of OpenAI-style message dictionaries:
```python
[
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "What's the weather?"}
]
```
You can use this history to maintain context across multiple turns:
```python
import gradio as gr

def chatbot(message, history):
    # Access previous messages
    if len(history) > 0:
        prev_msg = history[-1]["content"]
        return f"You just said: {prev_msg}. Now you're saying: {message}"
    return f"First message: {message}"

demo = gr.ChatInterface(fn=chatbot)
demo.launch()
```
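Because history already uses OpenAI-style dicts, a common pattern is to append the new message and forward the whole list to an LLM client. A minimal sketch of the list-building step (the client call in the comment is illustrative, not part of this snippet):

```python
def build_messages(message, history, system_prompt="You are helpful."):
    # Prepend an optional system prompt, keep prior turns, append the new one.
    return (
        [{"role": "system", "content": system_prompt}]
        + list(history)
        + [{"role": "user", "content": message}]
    )

# Inside a real chat function you would pass this list to your LLM client,
# e.g. client.chat.completions.create(messages=build_messages(...), ...)
msgs = build_messages("What's the weather?", [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi! How can I help?"},
])
```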
Streaming responses
For real-time streaming, make your function a generator:
```python
import gradio as gr
import time

def slow_echo(message, history):
    response = ""
    for char in message:
        response += char
        time.sleep(0.05)
        yield response

demo = gr.ChatInterface(fn=slow_echo)
demo.launch()
```
ChatInterface automatically detects generator functions and displays responses as they stream.
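Note that each yield should be the full response so far, not just the new delta. A word-level variant of the same idea, written without the UI so the generator's output can be inspected directly:

```python
def stream_words(message, history):
    # Yield the cumulative response after each word, not just the new word.
    words = f"Echo: {message}".split()
    partial = []
    for word in words:
        partial.append(word)
        yield " ".join(partial)

chunks = list(stream_words("hello world", []))
# Each chunk extends the previous one: "Echo:", "Echo: hello", "Echo: hello world"
```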
Multimodal chat
Enable file uploads with multimodal=True:
```python
import gradio as gr

def multimodal_chat(message, history):
    # message is now a dict with "text" and "files" keys
    text = message["text"]
    files = message.get("files", [])
    response = f"You said: {text}"
    if files:
        response += f"\nYou uploaded {len(files)} file(s)"
    return response

demo = gr.ChatInterface(
    fn=multimodal_chat,
    multimodal=True
)
demo.launch()
```
When multimodal=True, the input message becomes a dictionary:
```python
{
    "text": "Check out this image",
    "files": ["path/to/uploaded/file.jpg"]
}
```
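If the same handler may run both with and without multimodal=True, one defensive pattern (an illustration, not part of the Gradio API) is to normalize the incoming message first:

```python
def normalize_message(message):
    # Accept both the plain-string form and the multimodal dict form,
    # returning a (text, files) pair either way.
    if isinstance(message, dict):
        return message.get("text", ""), list(message.get("files", []))
    return message, []

text, files = normalize_message({"text": "Check out this image", "files": ["a.jpg"]})
```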
Examples
Provide example prompts to help users get started:
```python
import gradio as gr

demo = gr.ChatInterface(
    fn=chatbot,
    examples=[
        {"text": "Hello!"},
        {"text": "How are you?"},
        {"text": "Tell me a joke"}
    ],
    example_labels=["Greeting", "Question", "Fun"],
    example_icons=["👋", "❓", "😄"]
)
```
For multimodal examples, include files:
```python
demo = gr.ChatInterface(
    fn=multimodal_chat,
    multimodal=True,
    examples=[
        {"text": "Describe this image", "files": ["example.jpg"]},
        {"text": "What's in this photo?", "files": ["photo.png"]}
    ]
)
```
Additional inputs
Add configuration parameters that appear in a collapsible section:
```python
import gradio as gr

def chatbot(message, history, temperature, max_tokens):
    # Use temperature and max_tokens in your logic
    return f"Processing with temp={temperature}, max_tokens={max_tokens}"

demo = gr.ChatInterface(
    fn=chatbot,
    additional_inputs=[
        gr.Slider(0, 2, value=0.7, label="Temperature"),
        gr.Slider(1, 2048, value=512, label="Max Tokens")
    ],
    additional_inputs_accordion="Model Parameters"
)
```
Additional input values are passed as extra arguments to your function after message and history.
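The ordering matters: the extra values arrive positionally, mirroring the order of the additional_inputs list. Sketched without the UI so the signature can be checked in isolation:

```python
def chat_fn(message, history, temperature, max_tokens):
    # Positional order mirrors the additional_inputs list:
    # [Slider("Temperature"), Slider("Max Tokens")] -> (temperature, max_tokens)
    return f"temp={temperature}, max_tokens={max_tokens}"
```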
Additional outputs
Display extra information alongside the chat:
```python
import gradio as gr

def chatbot_with_metadata(message, history):
    response = f"Echo: {message}"
    metadata = {"message_length": len(message), "timestamp": "2024-01-01"}
    return response, metadata

with gr.Blocks() as demo:
    gr.ChatInterface(
        fn=chatbot_with_metadata,
        additional_outputs=[gr.JSON(label="Metadata")]
    )

demo.launch()
```
Customization options
Title and description
```python
demo = gr.ChatInterface(
    fn=chatbot,
    title="My Custom Chatbot",
    description="This chatbot echoes your messages back to you."
)
```
Custom chatbot component
Provide your own configured Chatbot component:
```python
import gradio as gr

custom_chatbot = gr.Chatbot(
    height=600,
    avatar_images=("user.png", "bot.png")
)

demo = gr.ChatInterface(
    fn=chatbot,
    chatbot=custom_chatbot
)
```
Custom textbox
Customize the input textbox:
```python
import gradio as gr

custom_textbox = gr.Textbox(
    placeholder="Type your message here...",
    lines=3
)

demo = gr.ChatInterface(
    fn=chatbot,
    textbox=custom_textbox
)
```
Button customization
Relabel the submit and stop buttons:

```python
demo = gr.ChatInterface(
    fn=chatbot,
    submit_btn="Send Message",
    stop_btn="Stop Generation"
)
```
Editable messages
Allow users to edit their previous messages:
```python
import gradio as gr

demo = gr.ChatInterface(
    fn=chatbot,
    editable=True  # Users can click to edit past messages
)
```
Retry and undo
ChatInterface automatically provides retry and undo buttons:
- Retry: Regenerates the last assistant response
- Undo: Removes the last user message and assistant response
These are built-in and require no additional code.
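Conceptually, undo trims the trailing user/assistant pair from the history, and retry re-runs the last user message against the history that preceded it. A plain-Python illustration of the idea (not Gradio's internals):

```python
def undo(history):
    # Drop the trailing assistant reply and the user message before it.
    return history[:-2] if len(history) >= 2 else []

def retry(chat_fn, history):
    # Re-run the chat function on the last user message,
    # using the history that preceded that message.
    last_user = history[-2]["content"]
    return chat_fn(last_user, history[:-2])

history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi!"},
]
```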
Flagging and feedback
Enable users to flag messages or provide feedback:
```python
import gradio as gr

demo = gr.ChatInterface(
    fn=chatbot,
    flagging_mode="manual",  # "manual" or "never"
    flagging_options=["Like", "Dislike"],
    flagging_dir="./flagged_chats"
)
```
Like/Dislike options appear as thumbs up/down icons next to messages.
Caching examples
Cache example responses for instant loading:
```python
import gradio as gr

demo = gr.ChatInterface(
    fn=slow_chatbot,
    examples=[{"text": "Hello"}, {"text": "How are you?"}],
    cache_examples=True,  # Pre-compute responses
    cache_mode="eager"    # "eager" or "lazy"
)
```
API configuration
Control API endpoint visibility:
```python
demo = gr.ChatInterface(
    fn=chatbot,
    api_name="chat",  # Custom endpoint name
    api_description="Chat with the bot",
    api_visibility="public"  # "public", "private", or "undocumented"
)
```
Advanced: Returning components
Return Gradio components directly from your function:
```python
import gradio as gr
from PIL import Image

def multimodal_bot(message, history):
    if "image" in message.lower():
        # Return an Image component
        img = Image.new('RGB', (300, 300), color='blue')
        return gr.Image(value=img)
    else:
        return "I can only show images!"

demo = gr.ChatInterface(fn=multimodal_bot)
```
Performance and behavior
Tune concurrency and interface behavior:

```python
import gradio as gr

demo = gr.ChatInterface(
    fn=chatbot,
    concurrency_limit=5,      # Max simultaneous users
    show_progress="minimal",  # "full", "minimal", or "hidden"
    autoscroll=True,          # Auto-scroll to new messages
    autofocus=True            # Focus input on load
)
```
Complete example
Here’s a fully-featured chatbot:
```python
import gradio as gr
import random
import time

def chatbot(message, history, temperature):
    responses = [
        "That's interesting!",
        "Tell me more.",
        "I understand.",
        "How does that make you feel?"
    ]
    response = random.choice(responses)
    # Stream the response
    for i in range(len(response)):
        time.sleep(0.05)
        yield response[:i+1]

demo = gr.ChatInterface(
    fn=chatbot,
    title="Friendly Chatbot",
    description="A simple conversational AI",
    examples=[
        {"text": "Hello!"},
        {"text": "What's the weather like?"}
    ],
    additional_inputs=[
        gr.Slider(0, 1, value=0.7, label="Temperature")
    ],
    flagging_mode="manual",
    flagging_options=["Like", "Dislike"],
    cache_examples=True
)

demo.launch()
```