Configure Tabby to match your preferences and workflow. Settings are accessible through the system tray menu or keyboard shortcuts.

Environment Configuration

Tabby uses environment variables for core configuration across three components:

Frontend Application

Configure the Electron app in frontend/.env.local:
# Supabase (local Docker)
NEXT_PUBLIC_SUPABASE_URL="http://127.0.0.1:54321"
NEXT_PUBLIC_SUPABASE_ANON_KEY="your-anon-key"

# Application Branding
NEXT_PUBLIC_APP_NAME="Tabby"
NEXT_PUBLIC_APP_ICON="/logos/tabby-logo.png"

# Backend URLs
NEXT_PUBLIC_API_URL="http://localhost:3001"
NEXT_PUBLIC_MEMORY_API_URL="http://localhost:8000"

Next.js Backend

Configure the API backend in nextjs-backend/.env.local:
# Supabase
NEXT_PUBLIC_SUPABASE_URL="http://127.0.0.1:54321"
NEXT_PUBLIC_SUPABASE_ANON_KEY="your-anon-key"
SUPABASE_ADMIN="your-service-role-key"

# Email (Optional)
RESEND_API_KEY=""
RESEND_DOMAIN=""

# Application
NEXT_PUBLIC_APP_NAME="Tabby"
NEXT_PUBLIC_APP_ICON="/logos/tabby-logo.png"

# AI Providers (see AI Provider Configuration)
OPENAI_API_KEY=""
GOOGLE_GENERATIVE_AI_API_KEY=""
GROQ_API_KEY=""
CEREBRAS_API_KEY=""
OPENROUTER_API_KEY=""

# Tools
TAVILY_API_KEY=""

# Memory Backend
MEMORY_API_URL="http://localhost:8000"

Memory Backend

Configure the Python backend in backend/.env:
# OpenAI (Required)
OPENAI_API_KEY="sk-..."

# Supabase PostgreSQL
SUPABASE_CONNECTION_STRING="postgresql://postgres:postgres@127.0.0.1:54322/postgres"

# Neo4j (Optional)
NEO4J_URL=""
NEO4J_USERNAME=""
NEO4J_PASSWORD=""
NEO4J_DATABASE=""
AURA_INSTANCEID=""
AURA_INSTANCENAME=""
Restart all services after modifying environment variables for changes to take effect.
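Because a missing or empty key often fails silently at runtime, a small startup check can save debugging time. A minimal sketch (the helper name and the set of required keys are illustrative, not part of Tabby):

```typescript
// Sketch: report configuration keys that are unset or empty.
// REQUIRED_KEYS is illustrative; adjust it to the variables your setup needs.
const REQUIRED_KEYS = [
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_ANON_KEY",
  "OPENAI_API_KEY",
];

function missingEnvKeys(env: Record<string, string | undefined>): string[] {
  // Treat both unset and empty-string values as missing
  return REQUIRED_KEYS.filter((key) => !env[key]);
}

// Usage: const missing = missingEnvKeys(process.env);
// if (missing.length > 0) throw new Error(`Missing: ${missing.join(", ")}`);
```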

Application Branding

Customize the application name and icon:
| Variable | Default | Description |
| --- | --- | --- |
| NEXT_PUBLIC_APP_NAME | Tabby | Application name shown in UI |
| NEXT_PUBLIC_APP_ICON | /logos/tabby-logo.png | Path to app icon |
Branding changes require restarting the frontend and Next.js backend.

Keyboard Shortcuts

Tabby provides global keyboard shortcuts for quick access to features.

Global Shortcuts

| Shortcut | Action | Description |
| --- | --- | --- |
| Ctrl+\ | Action Menu | Open/close AI action menu |
| Ctrl+Space | AI Suggestion | Get context-aware AI completion |
| Ctrl+Shift+B | Brain Panel | Toggle memory dashboard |
| Ctrl+Alt+I | Interview Ghost | Interview mode ghost text |
| Ctrl+Alt+J | Voice Agent | Activate voice assistant |
| Ctrl+Shift+X | Stop Autotyping | Stop AI typewriter mode |
| Ctrl+Shift+T | Cycle Transcribe | Switch transcription modes |
| Ctrl+Alt+T | Toggle Transcription | Enable/disable voice transcription |

Copilot Mode (Coding Interview)

| Shortcut | Action | Description |
| --- | --- | --- |
| Alt+X | Analyze Problem | Capture screen & analyze coding problem |
| Alt+Shift+X | Update Analysis | Add new constraints to analysis |
| Alt+N | Code Suggestions | Get code improvements |
| Ctrl+1-6 | Switch Tabs | Navigate between Chat, Idea, Code, Walkthrough, Test Cases, Memories |

Action Menu

| Shortcut | Action | Description |
| --- | --- | --- |
| Ctrl+\ | Open Menu | Show action menu with selected text |
| Tab | Quick Chat | Switch to quick AI chat mode |
| Alt+[Key] | Trigger Action | Execute specific action (e.g., Fix Grammar) |

Window Controls

| Shortcut | Action | Description |
| --- | --- | --- |
| Ctrl+Arrow | Move Window | Reposition floating windows |
| Esc | Back/Close | Close panels or go back |
| Enter | Accept | Accept result and paste |
Keyboard shortcuts are currently hardcoded. Future versions may support customization.
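Global shortcuts like these are typically registered through Electron's globalShortcut module, whose register call returns false when another application already owns the accelerator. A sketch with the registration function injected so the wiring is testable (handler names are hypothetical):

```typescript
// Sketch: wire accelerators to handlers through an injected register function
// (in the real app this would be Electron's globalShortcut.register).
type Handler = () => void;

function registerShortcuts(
  register: (accelerator: string, callback: Handler) => boolean,
  handlers: Record<string, Handler>,
): string[] {
  const failed: string[] = [];
  for (const [accelerator, handler] of Object.entries(handlers)) {
    // register returns false when the accelerator is taken by another app
    if (!register(accelerator, handler)) failed.push(accelerator);
  }
  return failed;
}
```

This is why the troubleshooting advice below starts with checking whether another app is using the same shortcuts: a conflicting registration fails quietly rather than raising an error.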

System Tray

Tabby runs in the system tray for quick access. Right-click the tray icon for:
  • Show Actions Menu - Open AI action menu
  • Brain Panel - View memory dashboard
  • Settings - Application preferences (future)
  • Quit - Exit application

Text Output Modes

Tabby supports two modes for AI-generated text output:

Paste Mode (Default)

Standard clipboard paste. AI responses are copied to clipboard and pasted normally.
  • Speed: Instant
  • Detection: Standard paste operation
  • Use Case: General use, quick insertions

Typewriter Mode

AI types character-by-character, simulating human typing.
  • Speed: Character-by-character
  • Detection: Appears as natural typing rather than a paste
  • Use Case: Situations requiring human-like input
  • Shortcut: Ctrl+Shift+X to stop
Typewriter mode is slower but appears as natural typing. Use responsibly.
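The difference between the modes comes down to how text reaches the target window. A minimal sketch of the typewriter loop (all names are illustrative; the real implementation sends OS-level keystrokes and binds the stop signal to Ctrl+Shift+X):

```typescript
// Sketch: emit text one character at a time, as typewriter mode does.
// The keystroke sink and stop signal are injected for illustration.
async function typewrite(
  text: string,
  emit: (ch: string) => void,
  delayMs = 50,
  shouldStop: () => boolean = () => false,
): Promise<boolean> {
  for (const ch of text) {
    if (shouldStop()) return false; // interrupted mid-stream
    emit(ch);
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return true; // finished the full text
}
```

Paste mode, by contrast, is a single clipboard write followed by one paste keystroke, which is why it is effectively instant.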

Service Ports

Default ports for Tabby services:
| Service | Port | URL |
| --- | --- | --- |
| Frontend (Electron) | 3000 | http://localhost:3000 |
| Next.js Backend | 3001 | http://localhost:3001 |
| Memory API (Python) | 8000 | http://localhost:8000 |
| Windows MCP Server | 8001 | http://localhost:8001 |
| Supabase API | 54321 | http://127.0.0.1:54321 |
| Supabase Studio | 54323 | http://localhost:54323 |
| Supabase DB | 54322 | postgresql://postgres@127.0.0.1:54322/postgres |
Port conflicts can be resolved by modifying the respective service configurations.
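To diagnose a conflict before editing any configuration, you can probe whether a port is free by briefly binding to it. A sketch (the helper name is illustrative):

```typescript
import { createServer } from "node:net";

// Sketch: a port is considered free if we can briefly bind to it.
function isPortFree(port: number, host = "127.0.0.1"): Promise<boolean> {
  return new Promise((resolve) => {
    const server = createServer();
    server.once("error", () => resolve(false)); // e.g. EADDRINUSE
    server.once("listening", () => server.close(() => resolve(true)));
    server.listen(port, host);
  });
}

// Usage: if (!(await isPortFree(3001))) console.warn("Port 3001 is in use");
```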

Email Configuration (Optional)

Tabby supports email notifications via Resend:
nextjs-backend/.env.local
RESEND_API_KEY="re_..."
RESEND_DOMAIN="your-domain.com"
To obtain credentials:
  1. Visit Resend
  2. Create an account or sign in
  3. Navigate to API Keys
  4. Create a new API key
  5. Add a verified domain (or use Resend’s test domain)
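Resend delivers mail via a JSON POST to its /emails endpoint. A sketch of the payload construction (the helper and the notifications@ sender address are hypothetical; only the field names follow Resend's public API):

```typescript
// Sketch: assemble the payload for Resend's send-email endpoint.
// Field names follow Resend's public API; the sender address is illustrative.
interface EmailPayload {
  from: string;
  to: string[];
  subject: string;
  html: string;
}

function buildEmailPayload(
  domain: string,
  to: string,
  subject: string,
  html: string,
): EmailPayload {
  return {
    from: `Tabby <notifications@${domain}>`, // must be on your verified domain
    to: [to],
    subject,
    html,
  };
}

// Usage: await fetch("https://api.resend.com/emails", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.RESEND_API_KEY}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(buildEmailPayload("your-domain.com", "to@example.com", "Hi", "<p>Hi</p>")),
// });
```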

Windows MCP Integration (Optional)

Tabby supports Windows desktop automation via the Windows MCP server:
cd frontend
pnpm run windows-mcp
The server runs on http://localhost:8001.
Windows MCP requires Python and uvx (provided by the uv package). Install via: pip install uv

Built-in Actions

Tabby includes pre-configured AI actions accessible via Ctrl+\:

Text Transformation

  • Fix Grammar - Correct grammatical errors
  • Shorten - Make text more concise
  • Expand - Add more detail and context

Tone Adjustment

  • Professional - Formal business tone
  • Casual - Friendly, relaxed tone
  • Friendly - Warm and approachable

Content Generation

  • Email Writer - Compose professional emails
  • Custom Prompts - Create your own actions
Custom actions and prompts will be configurable in future releases.

Copilot Tabs

When using coding interview mode (Alt+X), switch between tabs:
| Tab | Shortcut | Content |
| --- | --- | --- |
| Chat | Ctrl+1 | Free-form conversation with context |
| Idea | Ctrl+2 | Problem breakdown, observations, approach |
| Code | Ctrl+3 | Clean, commented implementation |
| Walkthrough | Ctrl+4 | Step-by-step solution explanation |
| Test Cases | Ctrl+5 | Edge cases with input/output/reasoning |
| Memories | Ctrl+6 | Retrieved memories about your preferences |

AI Suggestions

Tabby provides two modes for AI suggestions:

Hotkey Mode (Default)

Manual trigger via Ctrl+Space:
  • No automatic monitoring
  • Privacy-friendly
  • On-demand completions

Auto Mode (Clipboard Watcher)

Automatic suggestions based on clipboard:
  • Monitors clipboard for context
  • Automatic completions
  • More proactive assistance
Auto mode monitors clipboard activity. Use with awareness of privacy implications.
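A clipboard watcher of this kind usually amounts to a polling loop that fires when the contents change. A sketch with the clipboard reader injected (in the real app this would be Electron's clipboard.readText; the function name is illustrative):

```typescript
// Sketch of a clipboard watcher: poll the reader and invoke a callback
// whenever the contents change. Returns a function that stops the watcher.
function watchClipboard(
  read: () => string,
  onChange: (text: string) => void,
  intervalMs = 500,
): () => void {
  let last = read();
  const timer = setInterval(() => {
    const current = read();
    if (current !== last) {
      last = current;
      onChange(current);
    }
  }, intervalMs);
  return () => clearInterval(timer); // call to stop watching
}
```

Stopping the watcher when switching back to hotkey mode is what makes hotkey mode the privacy-friendly default: nothing is read until you ask.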

Production Deployment

Frontend (Windows Executable)

cd frontend
npm run dist
Output: frontend/dist/Tabby-Setup.exe

Next.js Backend (Vercel)

Deploys automatically from the main branch.

Memory Backend (Azure)

Deploys automatically via GitHub Actions.
Production deployments require setting up GitHub secrets for credentials and API keys.

Troubleshooting

Shortcuts not working:
  • Check if another app is using the same shortcuts
  • Restart the Electron application
  • Verify the app is running (check system tray)
  • Run as administrator if on Windows

Services not starting:
  • Verify Docker Desktop is running (for Supabase)
  • Check for port conflicts (3000, 3001, 8000)
  • Review environment variables in .env.local files
  • Check console logs for specific error messages

AI not responding:
  • Verify API keys are set in nextjs-backend/.env.local
  • Check that the Next.js backend is running on port 3001
  • Review browser console for network errors
  • Ensure the AI provider has credits/quota available

Memory features not working:
  • Verify the memory backend is running on port 8000
  • Check MEMORY_API_URL in frontend and backend configs
  • Ensure Supabase is running (npx supabase status)
  • Review memory backend logs for errors

Best Practices

Development Workflow

  1. Start Docker Desktop first
  2. Run npx supabase start and note credentials
  3. Configure all .env.local files with Supabase keys
  4. Start services in order: Memory backend → Next.js backend → Frontend
  5. Use system tray for quick access to features
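The startup order in step 4 matters because each service depends on the one before it. A sketch of waiting for a dependency's readiness probe before launching the next service (the probe is injected; the health endpoint in the comment is hypothetical):

```typescript
// Sketch: poll a readiness probe until it succeeds or attempts run out,
// so a dependent service starts only after its dependency is up.
async function waitUntilReady(
  probe: () => Promise<boolean>, // e.g. () => fetch("http://localhost:8000/health").then(r => r.ok)
  attempts = 20,
  delayMs = 500,
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    try {
      if (await probe()) return true;
    } catch {
      // service not accepting connections yet; keep waiting
    }
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return false; // gave up: dependency never became ready
}
```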

Security

  • Keep .env files out of version control
  • Use .env.example as templates
  • Rotate API keys regularly
  • Use local Supabase for development
  • Secure production deployments with proper authentication

Performance

  • Use Groq for fast inference when speed is critical
  • Reserve powerful models for complex tasks
  • Monitor memory usage in Brain Panel
  • Clean up old memories periodically
  • Optimize vector store indexes for your data size

Next Steps

AI Providers

Configure OpenAI, Groq, and other AI providers

Memory Backend

Set up persistent memory with Mem0
