Cody offers a rich set of features that help you write better code faster. This page covers each capability in detail.

Chat

Chat lets you ask questions, generate code, and apply edits — all with your codebase as context.

How chat works

Cody answers questions by searching your codebase and retrieving relevant context before generating a response. It uses Sourcegraph’s native search, keyword search, and code graph analysis to find the most relevant files and symbols. When you open a new chat, Cody automatically includes context chips for your current file and repository. You can edit or remove these chips, or add new ones using @-mentions.

Adding context with @-mentions

Type @ in the chat input to add context from specific sources:
Mention         What it includes
@filename       Full contents of a file
@filepath:1-50  A specific line range from a file
@#symbolName    Definition of a function, class, or type
@repo-name      Context from a remote repository
@https://...    Content from a web URL
When both a repository and files are @-mentioned, Cody searches the repository while prioritizing the explicitly mentioned files.
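For example, a single chat message can combine several mention types (the repository, file path, and symbol name here are hypothetical):

```
@github.com/my-org/my-repo @src/auth/session.ts @#validateToken
Why does session renewal fail when the token has already expired?
```

Cody would search my-org/my-repo for relevant context while prioritizing session.ts and the validateToken definition.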

Image upload

Cody supports uploading images to chat when using a vision-capable model. You can drag and drop an image, use the attach button, or paste from the clipboard. Models with vision support include Claude 3.7 Sonnet, Claude Sonnet 4, Claude Opus 4 and 4.1, and GPT-4o.
Gemini models support vision, but Cody clients do not currently support image uploads to Gemini.

LLM selection

Enterprise users can choose a different LLM for chat using the model selector in the chat panel. Claude 3.5 Sonnet is the default. Supported models include Claude, GPT-4o, Gemini, DeepSeek, and more — available based on your plan and configuration.

Smart Apply

When Cody provides a code suggestion, click Apply to use Smart Apply. Cody analyzes your open file, finds the right location for the code, and creates a diff for you to review. You can also click Execute for terminal command suggestions to run them directly in your terminal. Smart Apply uses a Qwen 2.5 Coder model for low-latency edits (or Claude Sonnet for Enterprise users not on Cody Gateway).

Writing code

Ask Cody to write code in the chat panel. For each code suggestion, you can:
  • Copy the code to your clipboard
  • Insert it at your cursor position
  • Save it to a new file
  • Apply it with Smart Apply

Rerunning with different context

If a response is not helpful, you can rerun the prompt with different context:
  • Public knowledge only: Cody uses only its base training, not your codebase
  • Current file only: Re-runs the prompt with just the current file as context
  • Add context: Opens the @-mention picker to explicitly add more sources

Auto-edit

Auto-edit suggests code changes based on cursor movement and recent edits. Unlike autocomplete, auto-edit can both insert and delete text, and can propose changes before and after the cursor position.
Auto-edit requires Sourcegraph v6.0+ and is available for Enterprise accounts on Cody Gateway. Fireworks must be enabled as a provider.

Auto-edit vs. autocomplete

Feature            Autocomplete           Auto-edit
Text modification  Insert only            Insert and delete
Cursor position    After cursor only      Before and after cursor
Trigger            Every character typed  Context and recent edits

What auto-edit can do

  • Function signature refactoring: Suggests adding error returns, new parameters, or changing return types when you modify a function signature.
  • Call site updates: When you change a function’s signature, auto-edit detects call sites and suggests matching updates.
  • Test file maintenance: Recognizes repetitive patterns across test files and suggests consistent updates to assertions, mocks, and test data.
  • Parameter refactoring: Helps with adding, removing, or reorganizing function parameters across the codebase.
  • Type system modifications: Suggests consistent changes to variable declarations, function parameters, and return types when you update an interface or type definition.
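As a concrete illustration of signature refactoring and call-site updates (a hypothetical sketch; the function and type names are not from Cody's docs), suppose you change a function to also return an error. Auto-edit can propose matching edits to the body and each call site:

```typescript
// Original signature: function getUser(id: string): User
// After you edit the signature to also return a possible error,
// auto-edit can suggest updating the body and every call site to match.
type User = { id: string; name: string };

function getUser(id: string): [User | null, Error | null] {
  if (!id) return [null, new Error("missing id")];
  return [{ id, name: "Ada" }, null];
}

// Call site updated to destructure the new tuple return:
const [user, err] = getUser("42");
if (err) {
  console.error(err.message);
}
```

Because auto-edit can delete text as well as insert it, the old single-value destructuring at the call site can be rewritten in place rather than only appended to.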

Enabling auto-edit

Auto-edit is enabled by default for Enterprise users. To switch back to autocomplete, open the Cody extension settings and change Suggestion mode to autocomplete.
To configure Fireworks access, add fireworks::* to the allowed providers in your site configuration:
"modelConfiguration": {
  "sourcegraph": {
    "modelFilters": {
      "statusFilter": ["beta", "stable"],
      "allow": [
        "anthropic::*",
        "openai::*",
        "google::*",
        "fireworks::*"
      ]
    }
  }
}

Disabling auto-edit

Site admins can disable auto-edit for their organization by setting the feature flag cody-autoedit-experiment-enabled-flag to disabled in the site configuration. This reverts all users to autocomplete as the default suggestion mode.
Self-hosted customers without Cody Gateway access get autocomplete as the default suggestion mode and cannot opt in to auto-edit.

Autocomplete

Cody provides real-time single-line and multi-line code completions as you type.

How autocomplete works

When you start typing, Cody analyzes the code around your cursor and predicts what you are trying to write. Suggestions appear as dimmed gray text. Press Tab or Enter to accept a suggestion, or Esc to dismiss it.
The default autocomplete model for Enterprise users is DeepSeek V2, which is optimized for responsiveness and accuracy. Key improvements over the previous model:
  • P75 latency reduced by 350 ms
  • Multi-line completions are more accurately aligned with surrounding context
  • Average completion acceptance rate (CAR) improved by more than 4%
Autocomplete supports a wide range of languages because it uses LLMs trained on broad data.
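For example (an illustrative sketch, not the output of any specific model), typing a function signature is often enough for autocomplete to propose the entire body:

```typescript
// You type the signature and opening brace; the dimmed gray suggestion
// (accepted with Tab) completes the body.
function isEven(n: number): boolean {
  return n % 2 === 0; // <- suggested completion
}
```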

Enabling autocomplete

Autocomplete is enabled by default in all IDE extensions; a checkbox in the extension settings lets you confirm or change this. JetBrains IDEs include additional settings to customize the colors and styles of autocomplete suggestions.

Configuring autocomplete for Enterprise

Only site admins can change autocomplete model configuration.
To use a custom model for autocomplete:
  1. Go to Site admin > Configuration > Site configuration
  2. Edit the modelConfiguration section to specify your preferred autocomplete model
  3. Click Save
Supported models include DeepSeek V2, Claude, and others configurable by your site admin.
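For example, a site admin could pin the completion model under modelConfiguration (a sketch; the exact model reference strings depend on your Sourcegraph version and configured providers):

```json
"modelConfiguration": {
  "defaultModels": {
    "codeCompletion": "fireworks::v1::deepseek-coder-v2-lite-base"
  }
}
```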

Prompts

Prompts are reusable templates that automate common coding tasks. They are stored in the Prompt Library and accessible from the Cody chat panel in any supported editor.

Built-in prompts

Cody includes four core prompts available to all users:
Prompt               What it does
document-code        Generates inline documentation for the selected code
explain-code         Provides a plain-language explanation of the selected code
find-code-smells     Identifies potential issues, anti-patterns, or code quality concerns
generate-unit-tests  Generates unit tests for the selected function or class

Creating a custom prompt

1. Open the Prompt Library

Go to Tools > Prompt Library in your Sourcegraph Enterprise instance, or access it from the Cody panel in your IDE.
2. Create a new prompt

Click Create new prompt and fill in the following:
  • Owner: Your user account or an organization
  • Prompt name: A unique, descriptive name
  • Description: A short summary of what the prompt does
  • Prompt template: The full prompt instructions
  • Visibility: Public or Private
  • Mode: Chat only, or Edit code (for prompts that modify files)
3. Add dynamic context

In the prompt template, use @-mentions to include specific or dynamic context:
  • @currentSelection — the user’s currently selected code
  • @currentFile — the currently open file
  • @currentRepository — the current repository
  • @currentDirectory — the current working directory
Type @ to open the context picker, or click the dynamic context buttons below the editor.
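For instance, a custom review prompt's template (a hypothetical example) might combine static instructions with dynamic mentions:

```
Review @currentSelection for error handling and naming issues.
Follow the conventions used elsewhere in @currentRepository,
and suggest concrete edits for the selected code.
```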
4. Publish the prompt

Once ready, click Create prompt. The prompt appears in the Prompt Library and in the Cody chat panel for the appropriate users.
Prompts in Edit code mode can only be run from the IDE editor, not from the Prompt Library web UI.

Prompt visibility

Visibility             Who can use it
Private                Only you
Owned by organization  All members of that organization
Public                 All users (only site admins can set this)

Draft prompts

Mark a prompt as draft to work on it before publishing. Draft prompts are only visible to you.

Prompt tags

Prompt tags are an Enterprise feature and can only be created by site admins.
Site admins can create tags to categorize prompts. Users can assign tags to their prompts while creating or editing them, and tags appear as filters in the Prompt Library sidebar.
Site admins can also promote a prompt to highlight it for their organization. Promoted prompts appear at the top of the prompt list in Cody chat and are marked with a special icon.
Promoted prompts are supported in VS Code and Cody Web.

Debug code

Cody is optimized to help you identify and fix errors quickly.

Ask Cody to fix an error

In VS Code and JetBrains, when the IDE detects an error in your code:
  1. A red underline and a lightbulb icon appear next to the error
  2. Click the lightbulb and select Ask Cody to fix
  3. Cody analyzes the error and proposes a fix as a diff
  4. Review the diff and click Accept to apply it or Undo to discard

Debug with chat

You can also paste an error message or stack trace directly into the Cody chat panel and ask for help:
  • “Why is this throwing a TypeError: Cannot read properties of undefined?”
  • “Explain this stack trace and suggest a fix”
  • “What’s causing this infinite loop?”
Cody uses context from your current file and repository to give specific, actionable answers.

Inline edit for targeted fixes

Select the problematic code, press Opt+K / Alt+K, and describe the fix you want. Cody generates a diff that you can accept, reject, or iterate on.

Context filters

Context Filters let admins control which repositories Cody can use as context in its requests to third-party LLMs.
Context Filters are available for Cody Enterprise users. VS Code requires extension v1.20.0 or later; JetBrains requires plugin v6.0.0 or later.

How context filters work

In your Sourcegraph site configuration, define include and exclude rules that filter the list of repositories Cody can access:
  • Repositories in the exclude list cannot be used as context. Attempting to run prompts on excluded repositories shows an error.
  • Cody chat still works normally on excluded repositories — only context fetching is blocked.

Example configuration

"cody.contextFilters": {
  "include": [
    { "repoNamePattern": "^github.com/my-org/.*" }
  ],
  "exclude": [
    { "repoNamePattern": "^github.com/my-org/internal-secrets$" }
  ]
}
This configuration allows all repositories under my-org as context sources, except for internal-secrets.
