Design Philosophy
Avante.nvim is built on three core principles:
- Native Neovim Integration: Deep integration with Neovim’s architecture, leveraging its buffer system, UI components, and plugin ecosystem
- Flexibility: Support for multiple AI providers and interaction modes to suit different workflows
- Transparency: Clear visibility into AI operations with tool execution logs, permission controls, and prompt logging
Core Components
Avante.nvim’s architecture consists of several key components working together.
Sidebar System
The sidebar (lua/avante/sidebar.lua) is the primary interface for interacting with Avante. It manages:
- Result display: Shows AI responses and code suggestions
- Input handling: Captures user prompts and commands
- File selection: Manages context files for AI conversations
- History management: Maintains conversation history across sessions
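From Lua, the sidebar is typically driven through the plugin's API module. A hedged sketch, assuming the `require("avante.api").ask()` and `edit()` entry points seen in common keymap examples (names may differ between versions):

```lua
-- Hedged sketch: binding keys that open the sidebar and capture a prompt.
-- `avante.api` function names are assumptions; check your version's docs.
vim.keymap.set("n", "<leader>aa", function()
  require("avante.api").ask()   -- open the sidebar and prompt for input
end, { desc = "avante: ask" })

vim.keymap.set("v", "<leader>ae", function()
  require("avante.api").edit()  -- send the visual selection for editing
end, { desc = "avante: edit selection" })
```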
LLM Engine
The LLM engine (lua/avante/llm.lua) handles all AI provider interactions:
- Manages streaming responses from AI providers
- Coordinates tool execution and autonomous workflows
- Handles conversation history and memory management
- Supports both traditional chat and agentic modes
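Conceptually, the engine wires a provider's streaming output into the sidebar and the tool system. A simplified, hypothetical sketch of that loop (the function names below are illustrative, not the real internals of lua/avante/llm.lua):

```lua
-- Hypothetical sketch of the streaming loop: the provider emits deltas,
-- the engine renders them incrementally and runs any requested tools.
local function stream(provider, prompt, callbacks)
  provider.request(prompt, {
    on_delta = function(text)
      callbacks.render(text)          -- append partial response to the sidebar
    end,
    on_tool_call = function(tool, args)
      callbacks.run_tool(tool, args)  -- agentic mode: execute a tool, feed the result back
    end,
    on_done = function(result)
      callbacks.save_history(result)  -- persist the exchange for later sessions
    end,
  })
end
```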
Provider System
The provider system (lua/avante/providers/) abstracts different AI services:
- Supported providers: Claude, OpenAI, Gemini, Copilot, Azure, and many more
- Custom providers: Extensible architecture for adding new AI services
- ACP providers: Special providers using the Agent Client Protocol for enhanced capabilities
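Custom providers are declared in the setup table. A hedged sketch of registering an OpenAI-compatible service; the field names (`endpoint`, `model`, `api_key_name`) and the `vendors` key are assumptions based on typical configuration examples, so verify against your installed version's schema:

```lua
-- Hedged sketch: pointing avante at a local OpenAI-compatible server.
-- Field names are assumptions; the exact schema varies by version.
require("avante").setup({
  provider = "my-local-llm",
  vendors = {
    ["my-local-llm"] = {
      endpoint = "http://localhost:11434/v1",  -- OpenAI-compatible endpoint
      model = "codellama",
      api_key_name = "MY_LLM_API_KEY",         -- env var holding the key
    },
  },
})
```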
Tool System
The tool system (lua/avante/llm_tools/) enables autonomous code editing:
- File operations (read, write, edit, create)
- Code search and navigation (grep, glob, diagnostics)
- Shell command execution (bash)
- Task management (write_todos, read_todos)
- Version control (undo_edit)
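Individual tools can be switched off in the configuration. A minimal sketch, assuming a `disabled_tools` option as seen in common configuration examples (verify the option name against your version):

```lua
-- Hedged sketch: disabling shell execution while keeping file tools active.
require("avante").setup({
  disabled_tools = { "bash" },  -- tool names match the list above
})
```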
Interaction Modes
Avante supports two primary interaction modes:
- Agentic Mode
- Legacy Mode
Agentic mode is the modern, autonomous approach, where the AI can:
- Execute tools to read and modify files
- Search through your codebase
- Run shell commands
- Autonomously implement changes
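Which mode is active is a configuration choice. A minimal sketch, assuming a top-level `mode` option taking `"agentic"` or `"legacy"`, as in common setup examples:

```lua
-- Hedged sketch: selecting the interaction mode (option name assumed).
require("avante").setup({
  mode = "agentic",  -- or "legacy" for the traditional suggest-and-apply flow
})
```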
Key Features
Project Context
Avante can understand your project through:
- File selection: Explicitly add files to conversation context
- Repo mapping: Automatic codebase structure analysis
- Diagnostics: Integration with LSP diagnostics
- RAG service: Optional semantic search over your codebase
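The optional RAG service is enabled through setup. A hedged sketch; the `rag_service` table and its `enabled` flag are assumptions based on typical configuration examples, and the service additionally requires an external runner:

```lua
-- Hedged sketch: turning on optional semantic search over the codebase.
require("avante").setup({
  rag_service = { enabled = true },  -- off by default; needs the external RAG runner
})
```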
Permission System
Control what the AI can do in your project through permission controls on tool execution.
History & Memory
Avante maintains conversation context across sessions:
- Chat history stored per-project
- Automatic memory summarization for long conversations
- History selector to resume previous conversations
- Token counting to manage context limits
Configuration Structure
Avante’s configuration follows a hierarchical structure.
Next Steps
Modes
Learn about agentic vs legacy modes
Providers
Understand AI provider architecture
Agentic Workflow
Deep dive into autonomous tools
Zen Mode
CLI-like experience in Neovim