Glass provides a comprehensive suite of AI capabilities powered by multiple language model providers. The AI features are deeply integrated into the editor to enhance your coding workflow.

Core AI Features

Glass offers three primary AI-powered features:

  • Agent Panel - Interactive AI agents for code generation, refactoring, and complex tasks
  • Inline Assistant - Context-aware AI assistance directly in your editor
  • Code Completion - Real-time AI-powered code suggestions as you type

Supported AI Providers

Glass supports multiple language model providers, allowing you to choose the best option for your workflow:
  • Anthropic Claude - Claude 4.6 Sonnet, Opus, and other Claude models
  • OpenAI - GPT-4, GPT-5, o1, o3, and other OpenAI models
  • OpenAI-compatible APIs - Any OpenAI-compatible endpoint
  • Ollama - Local models for privacy and offline usage
  • Google Gemini - Gemini models via Agent Servers
  • Custom providers - Extend with your own model providers
Most AI features require authentication with at least one provider. See the Provider Configuration guide for setup instructions.
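As a hedged sketch of what endpoint configuration can look like, the snippet below points the OpenAI-compatible provider at a local server. The `language_models` / `openai` / `api_url` key names are assumptions modeled on Zed-style settings and may differ in Glass; check the Provider Configuration guide for the exact schema:

```json
{
  "language_models": {
    "openai": {
      "api_url": "http://localhost:8080/v1"
    }
  }
}
```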

Agent Types

Glass supports multiple agent implementations:

Native Agent (Zed Agent)

The built-in agent with full access to Glass features:
  • Tool support - Can edit files, run terminal commands, search code
  • Context awareness - Accesses project structure, rules, and prompts
  • Multi-model support - Works with any configured provider
  • Streaming responses - Real-time response generation

Text Thread Agent

A lightweight agent for conversational interactions:
  • Simple conversations - No tool execution
  • Fast responses - Optimized for quick interactions
  • Context attachment - Manual file and context inclusion

External Agents

Support for third-party agent implementations:
  • Claude Code (via Agent Server) - Official Anthropic agent
  • Gemini CLI - Google’s agent implementation
  • Codex - OpenAI’s code-focused agent
  • Custom agents - Via Agent Client Protocol (ACP)

Key Capabilities

Context Management

1. Project Context - Agents automatically access project structure, worktree information, and rules files (.zed.md, .rules, etc.)
2. Related Files - Semantic search finds relevant code files based on your current task
3. Prompt Library - Reusable prompts stored in ~/.config/zed/prompts/ for common tasks
4. Context Servers - Model Context Protocol (MCP) servers provide external context sources
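Context servers are typically registered in settings. The following is an illustrative sketch only: the `context_servers` key and command layout are assumptions based on common MCP client configurations, and `@example/mcp-docs-server` is a hypothetical package name:

```json
{
  "context_servers": {
    "my-docs-server": {
      "command": {
        "path": "npx",
        "args": ["-y", "@example/mcp-docs-server"]
      }
    }
  }
}
```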

Tool Execution

Native agents can execute tools with configurable permissions:
  • File Operations - Read, write, edit files with diff preview
  • Terminal Commands - Execute shell commands with approval
  • Code Search - Search across project with semantic and text search
  • Git Operations - View diffs, branch management
  • LSP Integration - Go to definition, find references
Tool permissions can be configured in settings. By default, most operations require confirmation.
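To skip per-action confirmation prompts, a settings entry along these lines may work; the `always_allow_tool_actions` key name is an assumption borrowed from Zed's agent settings and may differ in Glass:

```json
{
  "agent": {
    "always_allow_tool_actions": false
  }
}
```

Leaving this `false` keeps the safer default in which file edits and terminal commands require explicit approval.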

Model Selection

Choose different models for specific tasks:
```json
{
  "agent": {
    "default_model": {
      "provider": "anthropic",
      "model": "claude-4.6-sonnet"
    },
    "inline_assistant_model": {
      "provider": "anthropic",
      "model": "claude-4.6-sonnet"
    },
    "thread_summary_model": {
      "provider": "openai",
      "model": "gpt-4o-mini"
    }
  }
}
```

AI Configuration

  • Provider Setup - Configure API keys and endpoints for each provider
  • Settings - Customize AI behavior, tool permissions, and model parameters

Privacy and Data

When using AI features, Glass sends:
  • Your prompts and messages
  • Relevant file contents (based on context selection)
  • Project structure information
  • Code snippets from active files
You control what context is included via context servers and file selections.
For complete privacy, use Ollama or other local providers:
  • No data leaves your machine
  • Works offline
  • Full control over models
See Provider Configuration for Ollama setup.
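As a sketch of a local setup, the snippet below targets Ollama's default port (11434); the `language_models` / `ollama` key names are assumed from Zed-style settings and may differ in Glass:

```json
{
  "language_models": {
    "ollama": {
      "api_url": "http://localhost:11434"
    }
  }
}
```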
API keys are stored securely:
  • macOS: Stored in Keychain
  • Linux: Stored in Secret Service (gnome-keyring, KWallet)
  • Windows: Stored in Credential Manager
Keys can also be set via environment variables.

Getting Started

1. Configure a Provider - Set up at least one AI provider in Settings
2. Open the Agent Panel - Press cmd-shift-a (macOS) or ctrl-shift-a (Linux/Windows)
3. Start a Conversation - Type your question or task and press Enter
4. Review and Apply - Review suggested changes and apply them to your code
