HAI Build Code Generator harnesses cutting-edge AI capabilities to transform how you write, edit, and manage code. Built on Cline’s powerful foundation, it integrates with multiple LLM providers to deliver context-aware, intelligent code generation and editing.
AI-powered coding in action

How It Works

The AI engine operates through a sophisticated multi-step workflow:
  1. Context Collection: Gathers workspace files, open tabs, and project structure
  2. System Prompt Generation: Constructs detailed instructions based on your task and environment
  3. LLM Communication: Sends requests to configured AI models with rich context
  4. Tool Execution: Processes model responses and executes file operations, commands, and searches
  5. Iterative Refinement: Continues the loop until task completion
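The loop above can be sketched as follows; the types, function names, and stub logic are illustrative, not HAI Build's actual internals:

```typescript
// Hypothetical sketch of the multi-step workflow; all names are illustrative.
type ToolCall = { name: string; input: string };
type ModelResponse = { toolCalls: ToolCall[]; done: boolean };

// Stub for context collection (workspace files, open tabs, project structure).
function collectContext(): string[] {
  return ["open tabs", "workspace files", "project structure"];
}

// Stub for LLM communication: pretend the model finishes after two turns.
function callModel(prompt: string, step: number): ModelResponse {
  return { toolCalls: [{ name: "read_file", input: "src/app.ts" }], done: step >= 2 };
}

function runTask(task: string): number {
  const context = collectContext();
  let prompt = `${task}\nContext: ${context.join(", ")}`; // system prompt generation
  let steps = 0;
  while (true) {
    steps++;
    const response = callModel(prompt, steps); // LLM communication
    for (const call of response.toolCalls) {
      prompt += `\nTool ${call.name} ran on ${call.input}`; // tool results feed back in
    }
    if (response.done) break; // iterative refinement until task completion
  }
  return steps;
}
```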

Supported LLM Providers

HAI Build supports a wide range of AI providers for maximum flexibility:
  • Anthropic: Claude 4.5 Sonnet, Claude 3 Opus, Claude Haiku
  • OpenAI: GPT-4, GPT-4 Turbo, GPT-3.5
  • Google: Gemini 2.0, Gemini Pro, Gemini Flash
  • AWS Bedrock: Claude, Llama, Command models
  • GCP Vertex AI: Full model catalog
For detailed provider configuration, see the LLM Providers Configuration guide.

Code Generation Workflows

Plan and Act Mode

HAI Build features a dual-mode system for handling complex tasks.
Plan Mode:
  • Creates detailed implementation plans before writing code
  • Uses reasoning models for strategic thinking
  • Generates comprehensive task breakdowns
  • Perfect for complex features and refactoring
Act Mode:
  • Executes code changes directly
  • Fast iteration on well-defined tasks
  • Ideal for bug fixes and small features
1. Configure Models

Set separate models for Plan and Act modes in Settings → API Configuration.
```json
{
  "planModeApiProvider": "anthropic",
  "planModeModelId": "claude-4.5-sonnet-20250514",
  "actModeApiProvider": "anthropic",
  "actModeModelId": "claude-4-sonnet-20250514"
}
```
2. Start Planning

Use the /deep-planning command to create a detailed implementation plan.
3. Execute

Switch to Act mode to implement the plan step-by-step.

Native Tool Calling

HAI Build leverages native function calling for precise tool execution:
  • Parallel Execution: Models can call multiple tools simultaneously
  • Structured Output: Tools receive properly formatted parameters
  • Error Handling: Automatic retry with error context
  • Context Preservation: Tool responses integrate seamlessly into conversation
Supported in:
  • Claude 4.5+ (Anthropic)
  • GPT-4 and GPT-4 Turbo (OpenAI)
  • Gemini 2.0+ (Google)
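As a sketch, a tool exposed via native function calling pairs a JSON schema with a handler; the exact schema field names vary slightly by provider, and the tool shown here is simplified:

```typescript
// Illustrative tool schema in the JSON shape most function-calling APIs accept.
const replaceInFileTool = {
  name: "replace_in_file",
  description: "Apply targeted SEARCH/REPLACE edits to a file",
  parameters: {
    type: "object",
    properties: {
      path: { type: "string", description: "File to edit" },
      search: { type: "string" },
      replace: { type: "string" },
    },
    required: ["path", "search", "replace"],
  },
};

// Because the model returns structured arguments, dispatch is a simple lookup
// rather than fragile text parsing.
type ToolHandler = (args: Record<string, string>) => string;
const handlers: Record<string, ToolHandler> = {
  replace_in_file: (args) => `edited ${args.path}`, // stubbed file operation
};

function dispatch(name: string, args: Record<string, string>): string {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args); // a failure here would be retried with error context
}
```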

Intelligent Context Management

Automatic Context Window Handling

The AI engine automatically manages context limits:
When approaching token limits, HAI Build:
  1. Summarizes older conversation history
  2. Preserves critical file contents
  3. Maintains task continuity across resets
  4. Executes PreCompact hooks for custom context injection
File Context Tracking

HAI Build tracks which files are in the AI’s context:
  • Shows context status in file explorer
  • Automatically adds relevant files
  • Warns when removing files from context
  • Provides context usage metrics (see src/core/context/context-tracking/FileContextTracker.ts:9)
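A rough sketch of the compaction step, assuming a simple characters-per-token heuristic and a `pinned` flag for critical content (both are illustrative, not HAI Build's actual accounting):

```typescript
// Illustrative message shape; `pinned` marks content that must survive compaction.
interface Message { role: string; text: string; pinned?: boolean }

// Common rough heuristic: ~4 characters per token.
function estimateTokens(messages: Message[]): number {
  return messages.reduce((sum, m) => sum + Math.ceil(m.text.length / 4), 0);
}

// When over the limit, fold older unpinned history into a summary while
// preserving pinned (critical) content and the most recent turns.
function compact(messages: Message[], limit: number, keepRecent = 2): Message[] {
  if (estimateTokens(messages) <= limit) return messages;
  const recent = messages.slice(-keepRecent);
  const older = messages.slice(0, -keepRecent);
  const pinned = older.filter((m) => m.pinned);
  const summary: Message = {
    role: "system",
    text: `Summary of ${older.length - pinned.length} earlier messages`,
  };
  return [summary, ...pinned, ...recent];
}
```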

Environment Context

HAI Build automatically includes:
  • Operating system and platform details
  • Available CLI tools (git, npm, docker, etc.)
  • Workspace structure and multi-root support
  • Active terminal sessions and shells
See implementation: src/core/context/context-tracking/EnvironmentContextTracker.ts:8

Advanced Features

Custom Instructions

Tailor AI behavior with custom rules:

Cline Rules

Create .clinerules files to define coding standards, patterns, and preferences.
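A minimal, hypothetical `.clinerules` file might look like:

```
# Project coding standards
- Use TypeScript strict mode; avoid `any`.
- Prefer named exports over default exports.
- Every new API route needs an integration test.
- Follow the existing error-handling pattern in src/core/api.
```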

External Rules

Import rules from .cursorrules, .windsurfrules, or .agentrules.

System Prompt Customization

The system prompt is built from modular components:
```
// Core sections (src/core/prompts/system-prompt/components/)
- capabilities: Available tools and actions
- editing_files: File operation guidelines
- mcp: Model Context Protocol servers
- task_progress: Focus chain integration
```
Each component can be customized via prompt variants.
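As an illustration, assembling a prompt variant from modular components amounts to selecting and ordering sections; the component names mirror the list above, while the contents and function are placeholders:

```typescript
// Placeholder component bodies keyed by the section names listed above.
const components: Record<string, string> = {
  capabilities: "You can read, write, and search files...",
  editing_files: "Prefer replace_in_file for targeted edits...",
  mcp: "The following MCP servers are available...",
  task_progress: "Track progress with the focus chain...",
};

// A prompt variant is an ordered selection of components, so swapping
// variants changes which sections appear and in what order.
function buildSystemPrompt(variant: string[]): string {
  return variant
    .map((name) => {
      const body = components[name];
      if (body === undefined) throw new Error(`Unknown component: ${name}`);
      return `## ${name}\n${body}`;
    })
    .join("\n\n");
}
```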

Reasoning Effort Control

For OpenAI o1/o3 models, control reasoning depth:
```json
{
  "planModeReasoningEffort": "high",
  "actModeReasoningEffort": "medium"
}
```
Options: low, medium, high

Code Editing Intelligence

Two-Tool Approach

HAI Build uses specialized tools for different editing scenarios.
write_to_file: Complete file creation or replacement
  • New file scaffolding
  • Boilerplate generation
  • Major refactoring requiring full rewrites
replace_in_file: Surgical, targeted edits
  • Function updates
  • Variable renames
  • Localized changes
  • Multiple SEARCH/REPLACE blocks in one operation
The AI automatically detects auto-formatting in your editor and adapts subsequent edits to match the formatted output.
See: src/core/prompts/system-prompt/components/editing_files.ts:1
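Conceptually, applying a single SEARCH/REPLACE block is an exact-match substitution; this simplified sketch omits the multi-block handling and matching fallbacks of the real tool:

```typescript
// Apply one SEARCH/REPLACE block: find the exact SEARCH text and
// substitute the REPLACE text once.
function applySearchReplace(content: string, search: string, replace: string): string {
  const index = content.indexOf(search);
  if (index === -1) {
    // Rejecting is safer than misapplying the edit somewhere else.
    throw new Error("SEARCH text not found; edit rejected");
  }
  return content.slice(0, index) + replace + content.slice(index + search.length);
}
```

With multiple blocks in one operation, each block would be applied in sequence against the output of the previous one.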

Multi-File Diff Generation

For complex changes across multiple files:
  • Preview all changes before applying
  • Accept or reject changes file-by-file
  • Smart conflict detection and resolution
  • Integration with VS Code diff viewer

Performance Optimization

Streaming Responses

All LLM interactions use streaming for:
  • Real-time feedback as the AI “thinks”
  • Partial message display in the UI
  • Early cancellation of problematic responses
  • Lower perceived latency
Implementation: src/core/task/StreamResponseHandler.ts:1
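The consumption side can be sketched as an accumulator over incoming chunks; the callback and cancellation hook shown here are illustrative:

```typescript
// Consume a stream of text chunks: render each partial state as it arrives
// and allow the caller to cancel mid-stream.
function consumeStream(
  chunks: Iterable<string>,
  onPartial: (text: string) => void,
  shouldCancel: (text: string) => boolean = () => false
): { text: string; cancelled: boolean } {
  let text = "";
  for (const chunk of chunks) {
    text += chunk;
    onPartial(text); // real-time partial message display
    if (shouldCancel(text)) {
      return { text, cancelled: true }; // early cancellation of a bad response
    }
  }
  return { text, cancelled: false };
}
```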

Request Retry Logic

Automatic retry with exponential backoff:
  • Handles rate limits gracefully
  • Recovers from transient network errors
  • Displays retry status in the UI
  • Configurable max attempts and delays
See: src/core/api/retry.ts:1
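A minimal sketch of this pattern; the base delay, cap, and attempt count are illustrative defaults, not HAI Build's actual configuration:

```typescript
// Exponential backoff: 1s, 2s, 4s, 8s, ... capped at capMs.
function backoffDelays(maxAttempts: number, baseMs = 1000, capMs = 30000): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    delays.push(Math.min(capMs, baseMs * 2 ** attempt));
  }
  return delays;
}

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 4): Promise<T> {
  let lastError: unknown;
  const delays = backoffDelays(maxAttempts);
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn(); // success: no retry needed
    } catch (err) {
      lastError = err; // e.g. a rate limit or transient network error
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, delays[attempt]));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

A production version would typically also add random jitter to the delays and retry only on errors known to be transient.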

Best Practices

1. Be Specific

Provide clear, detailed task descriptions. The more context you give, the better the AI performs.
2. Use Attachments

Attach relevant files, images, or documents to provide visual context.
3. Leverage Memory Bank

Store project-specific knowledge for consistent behavior across sessions.
4. Monitor Context

Keep an eye on context usage. Use Focus Chain for long-running tasks.
5. Review Changes

Always review AI-generated code before committing, especially with auto-approve enabled.
Combine AI-powered coding with Experts for domain-specific code generation following your team’s best practices.

Troubleshooting

If the AI is not responding or the provider returns errors:
  • Check API key validity in Settings
  • Verify network connectivity
  • Review provider status pages
  • Check context window limits
If output quality is poor or off-target:
  • Add custom instructions via .clinerules
  • Use Plan mode for complex tasks
  • Provide more context in your prompt
  • Try a more capable model
If you are running out of context:
  • Enable the Auto Compact feature
  • Use Focus Chain to maintain progress
  • Break the task into smaller steps
  • Remove unnecessary files from context

Next Steps

Task Management

Integrate AI-generated tasks from Specif AI

Experts

Use domain experts for specialized code generation

MCP Integration

Extend AI capabilities with Model Context Protocol

Inline Editing

Make quick AI-assisted edits in your code
