Overview
The AI Input feature provides:
- Context-Aware Assistance: the AI sees the last 50 lines of your terminal output
- Multiple Providers: Support for OpenAI and Google Gemini
- Command Suggestions: Get commands directly inserted into your terminal
- Smart Response Handling: Automatic extraction of commands from markdown code blocks
Supported Providers
Termy supports two AI providers:

OpenAI
- Default Model: gpt-5-mini (from crates/openai/src/lib.rs:4)
- API Endpoint: https://api.openai.com/v1/chat/completions
- Configurable Models: any GPT or o-series model
Google Gemini
- Default Model: gemini-2.5-flash (from crates/gemini/src/lib.rs:4)
- API Endpoint: https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
- Configurable Models: any Gemini model
Both providers use OpenAI-compatible chat completion APIs for consistency.
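Because both providers speak the same OpenAI-style protocol, the request body has the same shape for either endpoint. A minimal sketch (the model value is the OpenAI default listed above; the message contents are illustrative):

```json
{
  "model": "gpt-5-mini",
  "messages": [
    { "role": "system", "content": "Instructions plus recent terminal output" },
    { "role": "user", "content": "Find all JavaScript files modified in the last day" }
  ]
}
```

Only the endpoint URL and API key differ between providers.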
Configuration
Configure AI settings through Settings > Advanced > AI.

API Keys
OpenAI:
- Get your API key from platform.openai.com
- Set openai_api_key in configuration

Gemini:
- Get your API key from Google AI Studio
- Set gemini_api_key in configuration
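For example, assuming a JSON-style settings file (the file format shown is an assumption; the key names openai_api_key and gemini_api_key come from the settings above):

```json
{
  "openai_api_key": "sk-...",
  "gemini_api_key": "..."
}
```

Only the provider you intend to use needs a key.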
Usage
Opening AI Input
Via Command:
- Open the command palette
- Type “AI Input”
- Press Enter
Submitting Queries
- Type your question or request
- Press Enter to submit
- Wait for the AI response (a loading toast appears)
- The response is inserted into your terminal input buffer

Example queries:
- “Find all JavaScript files modified in the last day”
- “Explain this error”
- “Create a git commit message for these changes”
- “Write a command to compress all PDFs in this directory”
Keyboard Controls
- Enter - Submit query to the AI
- Escape - Close AI input without submitting
How It Works
Context Collection
When you open AI input, Termy captures terminal context:
- Recent command output
- Error messages
- Current working directory (if visible)
- Command prompts
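Capturing the last 50 lines of scrollback can be sketched as follows (an illustrative function, not Termy's actual implementation):

```rust
/// Collect the most recent `max_lines` lines of terminal output to send
/// as AI context. (Illustrative sketch; not Termy's exact code.)
fn collect_context(output: &str, max_lines: usize) -> String {
    let lines: Vec<&str> = output.lines().collect();
    // Start from the end, clamping at zero for short scrollbacks.
    let start = lines.len().saturating_sub(max_lines);
    lines[start..].join("\n")
}

fn main() {
    // Simulated scrollback of 60 lines; only the last 50 are kept.
    let scrollback: Vec<String> = (1..=60).map(|i| format!("line {i}")).collect();
    let context = collect_context(&scrollback.join("\n"), 50);
    assert_eq!(context.lines().count(), 50);
    assert!(context.starts_with("line 11"));
}
```

Bounding the context keeps request sizes (and token costs) predictable.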
Request Flow
- User submits query → Input text sent to AI
- Context bundled → Last 50 terminal lines attached
- System prompt added → Instructions for the AI prepended
- API called → Request sent to configured provider
- Response processed → Markdown code blocks stripped
- Command inserted → Response placed in terminal input
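The "Response processed" step, which strips markdown code fences so a bare command can be inserted, might look like this (an illustrative sketch, not Termy's exact code):

```rust
/// Strip a surrounding markdown code fence (``` or ```lang) from an AI
/// response, returning the bare command. Illustrative sketch only.
fn strip_code_fence(response: &str) -> String {
    let trimmed = response.trim();
    if let Some(rest) = trimmed.strip_prefix("```") {
        // Drop the optional language tag on the opening fence line,
        // then trim away the closing fence.
        let body = rest.split_once('\n').map(|(_, b)| b).unwrap_or("");
        return body.trim_end_matches('`').trim().to_string();
    }
    trimmed.to_string()
}

fn main() {
    let response = "```bash\nfind . -type f -size +100M\n```";
    assert_eq!(strip_code_fence(response), "find . -type f -size +100M");
}
```

Responses without fences pass through unchanged.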
Response Processing
Termy automatically strips markdown formatting from AI responses.

Response Display
Loading State
Success Toast
Error Handling
An error toast is shown when:
- API key is missing or invalid
- Network request fails
- API returns an error
- Response parsing fails
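These failure cases map naturally onto the error messages documented under Troubleshooting below. A sketch of such an error type (illustrative; not Termy's actual type):

```rust
use std::fmt;

/// Failure cases surfaced to the user. The Display strings mirror the
/// errors documented under Troubleshooting; the enum itself is a sketch.
#[derive(Debug)]
enum AiError {
    MissingApiKey(&'static str),
    Http,
    EmptyResponse,
}

impl fmt::Display for AiError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AiError::MissingApiKey(provider) => write!(
                f,
                "{provider} API key not configured. Set it in Settings > Advanced > AI."
            ),
            AiError::Http => write!(f, "AI error: HTTP request failed"),
            AiError::EmptyResponse => write!(f, "AI error: No response content"),
        }
    }
}

fn main() {
    let e = AiError::MissingApiKey("OpenAI");
    assert_eq!(
        e.to_string(),
        "OpenAI API key not configured. Set it in Settings > Advanced > AI."
    );
}
```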
Implementation Details
OpenAI Client
Location: crates/openai/src/lib.rs
Gemini Client
Location: crates/gemini/src/lib.rs
HTTP requests are made with ureq and run in background threads via smol::unblock.
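The pattern — run the blocking ureq call off the main thread so the UI stays responsive — can be sketched with std::thread standing in for smol::unblock (the request function is a stand-in, not a real ureq call):

```rust
use std::thread;

// Simulated blocking request (a stand-in for a blocking ureq call).
fn blocking_request(query: &str) -> String {
    format!("response to: {query}")
}

fn main() {
    // Spawn the blocking work on a background thread so the calling
    // thread stays free; Termy uses smol::unblock for the same purpose.
    let handle = thread::spawn(|| blocking_request("Explain this error"));
    let response = handle.join().unwrap();
    assert_eq!(response, "response to: Explain this error");
}
```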
UI Component
Location: src/terminal_view/ai_input.rs
Use Cases
Command Generation
Query: “Find all files larger than 100MB”
Response: a suggested command, inserted into your input buffer

Error Debugging
Query: “What does this error mean?”
The AI analyzes the error in your terminal context and provides an explanation.

Script Writing
Query: “Write a script to backup all .txt files to ~/backup”
Response: a suggested script, inserted into your input buffer

Git Assistance
Query: “Create a commit message for these changes”
The AI reviews the git diff output in your terminal and suggests a commit message.
Best Practices
Context Awareness
The AI sees your recent terminal output, so you can refer to it directly (for example, “Explain this error” after a failed command).

Query Specificity
Be specific in your queries.

Good:
- “Find all Python files modified today”
- “Explain this SSH error”
- “Create a git alias for interactive rebase”
Bad:
- “Help”
- “What’s wrong?”
- “Fix it”
Command Review
The command is inserted into your input buffer (not executed), giving you a chance to review and edit.

Model Selection
Choosing a Model
Consider these factors:

Speed:
- gpt-5-mini - fast, inexpensive
- gemini-2.5-flash - very fast, free tier available

Accuracy:
- gpt-4 - more accurate, better reasoning
- gemini-2.0-pro - advanced capabilities

Cost:
- Mini/Flash models: lower cost per token
- Pro models: higher cost, better results
Custom Models
Override the default model in configuration.

Troubleshooting
API Key Not Configured
Error: "OpenAI API key not configured. Set it in Settings > Advanced > AI."
Solution:
- Open Settings
- Navigate to Advanced > AI
- Enter your API key
- Save configuration
Network Errors
Error: "AI error: HTTP request failed"
Causes:
- No internet connection
- Firewall blocking API access
- API endpoint unavailable
Solutions:
- Check network connectivity
- Verify firewall rules
- Try again later if API is down
Invalid Response
Error: "AI error: No response content"
Causes:
- Model returned empty response
- Response parsing failed
- Rate limiting
Solutions:
- Try a different query
- Check API quota/billing
- Switch to a different model
Rate Limiting
If you hit rate limits:
- Wait before retrying
- Upgrade your API plan
- Switch to a different provider
Privacy & Security
Data Sent to AI
When you use AI input, the following is sent to your chosen provider:
- Your query - the text you typed
- Terminal context - Last 50 lines of terminal output
- System prompt - Instructions for the AI
Data Storage
Termy does not:
- Store AI queries or responses
- Log terminal content sent to AI
- Share data beyond your chosen provider
Related
- Configuration - Configure AI provider settings
- Keyboard Shortcuts - AI input keybindings