CodeFire is designed to give your AI coding agent persistent memory and context awareness. This guide covers common workflows and tips for effective collaboration.

Core Workflows

Code Review Workflow

Use the task launcher to spin up a code review agent.
1. Open the Project
   Click the project in CodeFire’s home view to open the project window.
2. Click Code Review
   In the Task Launcher panel (Dashboard tab), click the Code Review preset.
3. Review Generated
   CodeFire opens a new terminal tab and starts a Claude Code session with the prompt:

   Review the recent code changes in this project. Look for bugs,
   security issues, and suggest improvements. Focus on the most
   recently modified files.

4. Monitor Session
   Switch to the Sessions tab to watch:
   • Token usage (input/output/cache)
   • Cost tracking (real-time)
   • Files touched
   • Tools invoked
5. Review Results
   Once the agent finishes:
   • Check the Activity Feed for a summary
   • Review suggested changes in the terminal
   • The session is automatically saved to history

Pro tip: Create a task called “Code Review” in the planner to track review findings. The agent can add notes to the task via MCP.

Debugging Workflow

When you encounter a bug, use the Debug preset to investigate.

1. Create a Task
   In the Planner tab, create a task:
   • Title: “Fix login redirect bug”
   • Description: Paste error logs or reproduction steps
   • Priority: High
2. Launch Debug Agent
   Click the Debug preset in the Task Launcher. The agent sees the task via MCP and knows what to investigate.
3. Agent Investigates
   The agent:
   • Reads the task description (via codefire_tasks_list)
   • Searches the codebase for relevant files
   • Analyzes error-prone patterns
   • Suggests fixes
4. Update Task
   As you implement fixes:
   • Move the task to In Progress
   • Add notes with findings
   • Move it to Done when resolved
5. Review Session
   Check the Sessions tab to see:
   • Total cost of the debugging session
   • Which files the agent examined
   • Tools used (e.g., grep, read, edit)

Pro tip: Pin notes about tricky bugs so future agents (or you) can reference them.

Writing Tests Workflow

Generate test coverage for critical business logic.

1. Identify Untested Code
   Run your test coverage tool:

   npm run test:coverage

   Note which files have low coverage.
2. Launch Test Agent
   Click the Write Tests preset. The agent is prompted with:

   Analyze the codebase and write tests for any untested or
   under-tested code. Focus on critical business logic and edge cases.

3. Guide the Agent
   If needed, provide additional context:

   Focus on src/auth/login.ts — it has 0% coverage.

   The agent will:
   • Read the file
   • Identify edge cases
   • Generate test cases
   • Write tests using your project’s test framework
4. Run Tests
   After the agent finishes:

   npm test

   Verify all tests pass.
5. Track Coverage
   Create a task: “Increase test coverage to 80%”. Update it as you add tests.

Pro tip: Use the Sessions tab to see which files the agent tested. Add a note for future reference.

Refactoring Workflow

Improve code quality without changing behavior.

1. Create a Refactoring Task
   In the Planner, create:
   • Title: “Refactor user service”
   • Description: “Extract database logic into repository pattern”
2. Launch Refactor Agent
   Click the Refactor preset. The agent sees your task and understands the goal.
3. Review Changes
   The agent will:
   • Identify duplication
   • Suggest design improvements
   • Refactor code incrementally
   • Run tests after each change
4. Verify No Regressions
   Run tests and ensure behavior is unchanged:

   npm test

5. Commit Changes
   Commit the refactor incrementally:

   git add .
   git commit -m "Refactor: Extract user repository"

   CodeFire tracks the commit in the GitHub tab.

Pro tip: Use the Sessions tab to estimate refactoring time and cost for future projects.

Using the Task Launcher

The Task Launcher provides one-click access to common workflows.

Available Presets

Preset         | Icon | Use Case
Code Review    | 👁️   | Review recent changes for bugs and improvements
Write Tests    |      | Generate test coverage for untested code
Debug          | 🐛   | Investigate bugs and error-prone patterns
Refactor       | 🔄   | Improve code quality and reduce duplication
Documentation  | 📄   | Add or improve documentation and comments
Security Audit | 🔒   | Check for vulnerabilities and security issues

Custom Prompts

For one-off tasks, use the Custom prompt field. Examples:
• “Add dark mode support to the settings page”
• “Optimize the database query in user-service.ts”
• “Write a migration script to add a new column to users table”
Press Enter or click the arrow to launch.

Managing Sessions and Costs

CodeFire tracks every Claude Code session with detailed cost and usage data.

Viewing Session History

1. Click the Sessions tab
2. Browse all past sessions for the project
3. Click a session to see:
   • Token usage (input, output, cache)
   • Cost breakdown (by token type)
   • Files changed
   • Tools invoked
   • Message count

Live Session Monitoring

When a Claude Code session is active:
1. Switch to the Sessions tab
2. CodeFire displays a Live Activity Feed:
   • Real-time token usage
   • Running cost tracker
   • Files being read/written
   • Tools invoked (bash, read, edit, etc.)

Cost Tracking

CodeFire calculates costs based on Anthropic’s pricing:
• Input tokens: $3 per million
• Output tokens: $15 per million
• Cache creation: $3.75 per million
• Cache read: $0.30 per million

Per-project costs: The Dashboard tab shows total cost across all sessions for the project.
Global costs: The home view shows total cost across all projects.
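Given those rates, a session’s cost is a per-million multiply-and-sum over the four token counts. A minimal sketch with hypothetical token counts (the numbers are illustrative, not from a real session):

```shell
# Hypothetical token counts for one session (illustrative values)
input=120000; output=8000; cache_write=40000; cache_read=300000

# Rates per million tokens: input $3, output $15, cache creation $3.75, cache read $0.30
awk -v i="$input" -v o="$output" -v cw="$cache_write" -v cr="$cache_read" \
  'BEGIN { printf "$%.4f\n", (i*3 + o*15 + cw*3.75 + cr*0.30) / 1e6 }'
# → $0.7200
```

Note that cache reads dominate the token count here but barely register in the cost, which is why prompt caching (below) pays off.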

Reducing Costs

• Use prompt caching: Claude Code automatically caches project context. Subsequent sessions reuse cached tokens (cheaper).
• Be specific: Provide clear, focused prompts to reduce unnecessary tool invocations.
• Monitor token usage: If a session is using excessive tokens, cancel it (Ctrl+C) and refine your prompt.
• Batch tasks: Instead of launching multiple sessions, combine related tasks into one prompt:

  1. Fix the login bug in auth.ts
  2. Add tests for the login flow
  3. Update the README with new auth docs

Dev Tools

CodeFire auto-detects your project’s package manager and provides quick-launch buttons.

Auto-Detected Commands

CodeFire detects:
• npm (package.json + package-lock.json)
• yarn (package.json + yarn.lock)
• pnpm (package.json + pnpm-lock.yaml)
• bun (package.json + bun.lockb)
• pip (requirements.txt)
• poetry (pyproject.toml)
• cargo (Cargo.toml)
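Since detection is driven entirely by which manifest and lockfile are present, it can be sketched as a simple file check. This is a simplified sketch; the precedence order when multiple lockfiles coexist is an assumption, not CodeFire’s documented behavior:

```shell
# Sketch of lockfile-based package-manager detection.
# Precedence when several lockfiles exist is an assumption.
detect_pm() {
  dir="$1"
  if   [ -f "$dir/package-lock.json" ]; then echo npm
  elif [ -f "$dir/yarn.lock" ];        then echo yarn
  elif [ -f "$dir/pnpm-lock.yaml" ];   then echo pnpm
  elif [ -f "$dir/bun.lockb" ];        then echo bun
  elif [ -f "$dir/requirements.txt" ]; then echo pip
  elif [ -f "$dir/pyproject.toml" ];   then echo poetry
  elif [ -f "$dir/Cargo.toml" ];       then echo cargo
  else echo unknown
  fi
}

detect_pm .   # prints the detected manager for the current directory
```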

Quick-Launch Buttons

In the Dashboard tab, you’ll see buttons for:
• Dev — Start dev server (npm run dev, cargo run, etc.)
• Build — Build the project (npm run build, cargo build)
• Test — Run tests (npm test, pytest, etc.)
• Lint — Run linter (npm run lint, cargo clippy)
Click any button to launch the command in a new terminal tab.
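Each button resolves to a command for the detected manager. The mapping can be sketched as a lookup table; the table below is illustrative (only the npm and cargo rows from the examples above), not CodeFire’s actual command table:

```shell
# Sketch: map (package manager, action) to a quick-launch command.
# The table is illustrative, covering only the npm and cargo examples.
quick_cmd() {
  case "$1:$2" in
    npm:dev)     echo "npm run dev" ;;
    npm:build)   echo "npm run build" ;;
    npm:test)    echo "npm test" ;;
    npm:lint)    echo "npm run lint" ;;
    cargo:dev)   echo "cargo run" ;;
    cargo:build) echo "cargo build" ;;
    cargo:test)  echo "cargo test" ;;
    cargo:lint)  echo "cargo clippy" ;;
    *)           echo "unknown" ;;
  esac
}

quick_cmd npm dev   # → npm run dev
```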

Tips for Effective AI Collaboration

1. Use Tasks as Context

Create tasks before launching agents. The agent can read tasks via MCP and understand what you’re working on. Example:
• Task: “Add user profile page”
• Agent prompt: “Implement the user profile page task”
• The agent reads the task, sees the requirements, and implements the feature.

2. Pin Important Notes

Use the Notes tab to store:
• Architecture decisions
• Tricky bug fixes
• API design patterns
• Deployment checklists
Pinned notes are prioritized in context when the agent searches.

3. Review Session History

Before starting a new session, check Sessions to see:
• What the previous session did
• Which files were changed
• Whether tests were run
This avoids duplicate work.

4. Use the MCP Server

The agent can:
• List tasks (codefire_tasks_list)
• Create tasks (codefire_tasks_create)
• Update task status (codefire_tasks_update)
• Search notes (codefire_notes_search)
• Read the codebase profile (codefire_codebase_profile)
Encourage the agent to use these tools:

Before implementing, check the existing tasks to see if
this feature is already planned.
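Because these are exposed as standard MCP tools, any MCP client invokes them with a JSON-RPC tools/call request. A sketch of what a codefire_tasks_list call might look like on the wire, assuming the standard MCP framing (the empty argument object is an assumption about the tool’s schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "codefire_tasks_list",
    "arguments": {}
  }
}
```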
    

5. Monitor Live Sessions

Keep the Sessions tab visible during long-running tasks. If token usage spikes unexpectedly:
• The agent may be stuck in a loop
• The prompt may be too broad
• Cancel and refine

6. Batch Related Tasks

Instead of launching separate sessions for:
• Write feature
• Write tests
• Update docs
Combine them into one session:

1. Implement user profile page
2. Add tests for profile page
3. Update README with profile page docs

This reduces overhead and improves context reuse.

7. Use Custom Prompts for Specific Goals

Presets are great for common tasks, but custom prompts give you full control.

Specific file:

Optimize the search query in src/services/search.ts

Specific framework:

Refactor the React components to use TypeScript strict mode

Specific constraint:

Add error handling to the API routes without changing the response format

Advanced Workflows

Multi-Session Projects

For large features, break work into multiple sessions:

Session 1: Research

Analyze the codebase and propose an architecture for the
user notification system.

Session 2: Implementation

Implement the notification system based on the architecture
from the previous session.

Session 3: Testing

Write comprehensive tests for the notification system.

CodeFire tracks all sessions, so you can reference past work.

Email-to-Task-to-Agent

1. Email arrives — Client requests a feature via email
2. Auto-task created — CodeFire creates a task from the email (via Gmail integration)
3. Launch agent — Click the task, then launch a custom prompt:

   Implement the feature described in task #42

4. Agent completes — Task is moved to Done, client is notified

This creates a fully automated workflow from request to completion.

PR Review Workflow

1. PR created — Developer opens a PR on GitHub
2. CodeFire detects — GitHub tab shows the PR
3. Launch review agent — Custom prompt:

   Review PR #123 and suggest improvements

4. Agent reviews — Comments on code quality, tests, etc.
5. Merge — Once approved, merge via the gh CLI

Next Steps

CLI Integration
Set up your AI coding CLI

MCP Server
Learn about MCP tools
