Overview

The parallel-agents skill enables coordinating multiple specialized agents through Antigravity’s native agent system. It provides patterns for orchestrating agents to handle complex tasks requiring multiple expertise domains or comprehensive analysis from multiple perspectives.

What This Skill Provides

  • Native agent invocation: Using Antigravity’s built-in Agent Tool
  • Orchestration patterns: Proven patterns for multi-agent coordination
  • Sequential and parallel execution: When to run agents in sequence vs parallel
  • Context passing: Sharing findings between agents
  • Synthesis protocols: Combining results from multiple agents
  • 17 specialized agents: Available agents and their expertise domains

When to Use Orchestration

Good for:
  • Complex tasks requiring multiple expertise domains
  • Code analysis from security, performance, and quality perspectives
  • Comprehensive reviews (architecture + security + testing)
  • Feature implementation needing backend + frontend + database work
Not for:
  • Simple, single-domain tasks
  • Quick fixes or small changes
  • Tasks where one agent suffices

Native Agent Invocation

Single Agent

Use the security-auditor agent to review authentication

Sequential Chain

First, use the explorer-agent to discover project structure.
Then, use the backend-specialist to review API endpoints.
Finally, use the test-engineer to identify test gaps.

With Context Passing

Use the frontend-specialist to analyze React components.
Based on those findings, have the test-engineer generate component tests.

Resume Previous Work

Resume agent [agentId] and continue with additional requirements.
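In a script, the invocation styles above can be sketched with a hypothetical `run_agent` helper. In practice Antigravity's Agent Tool is driven by natural-language prompts, so the function name and signature here are illustrative only:

```python
# Hypothetical stand-in for Antigravity's Agent Tool; real invocations are
# natural-language prompts, not a Python API.
def run_agent(name: str, task: str, context: str = "") -> str:
    # A real implementation would dispatch the task to the named agent;
    # here we just echo what was requested.
    prompt = f"{context}\n{task}".strip()
    return f"[{name}] completed: {prompt}"

# Single agent
report = run_agent("security-auditor", "Review authentication")

# Sequential chain with context passing: the explorer's findings become
# context for the backend review.
structure = run_agent("explorer-agent", "Discover project structure")
api_review = run_agent("backend-specialist", "Review API endpoints",
                       context=structure)
```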

Orchestration Patterns

Pattern 1: Comprehensive Analysis

Agents: explorer-agent → [domain-agents] → synthesis

1. explorer-agent: Map codebase structure
2. security-auditor: Security posture
3. backend-specialist: API quality
4. frontend-specialist: UI/UX patterns
5. test-engineer: Test coverage
6. Synthesize all findings
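The six steps above can be sketched as a pipeline in which every domain agent receives the explorer's codebase map as context. The `run_agent` callable and agent tasks are illustrative assumptions, not a real API:

```python
def orchestrate_comprehensive(run_agent):
    """Pattern 1: explorer first, then domain agents, then synthesis.

    run_agent(name, task, context) is a hypothetical callable that invokes
    a named agent and returns its findings as text.
    """
    # Step 1: map the codebase with no prior context.
    codebase_map = run_agent("explorer-agent", "Map codebase structure", "")

    # Steps 2-5: each domain agent analyzes with the map as shared context.
    domain_tasks = {
        "security-auditor": "Assess security posture",
        "backend-specialist": "Review API quality",
        "frontend-specialist": "Review UI/UX patterns",
        "test-engineer": "Assess test coverage",
    }
    findings = {
        name: run_agent(name, task, codebase_map)
        for name, task in domain_tasks.items()
    }

    # Step 6: combine all findings into one synthesis.
    return "\n".join(f"{name}: {result}" for name, result in findings.items())
```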

Pattern 2: Feature Review

Agents: affected-domain-agents → test-engineer

1. Identify affected domains (backend? frontend? both?)
2. Invoke relevant domain agents
3. test-engineer verifies changes
4. Synthesize recommendations

Pattern 3: Security Audit

Agents: security-auditor → penetration-tester → synthesis

1. security-auditor: Configuration and code review
2. penetration-tester: Active vulnerability testing
3. Synthesize with prioritized remediation

Available Agents

| Agent | Expertise | Trigger Phrases |
|-------|-----------|-----------------|
| orchestrator | Coordination | "comprehensive", "multi-perspective" |
| security-auditor | Security | "security", "auth", "vulnerabilities" |
| penetration-tester | Security Testing | "pentest", "red team", "exploit" |
| backend-specialist | Backend | "API", "server", "Node.js", "Express" |
| frontend-specialist | Frontend | "React", "UI", "components", "Next.js" |
| test-engineer | Testing | "tests", "coverage", "TDD" |
| devops-engineer | DevOps | "deploy", "CI/CD", "infrastructure" |
| database-architect | Database | "schema", "Prisma", "migrations" |
| mobile-developer | Mobile | "React Native", "Flutter", "mobile" |
| api-designer | API Design | "REST", "GraphQL", "OpenAPI" |
| debugger | Debugging | "bug", "error", "not working" |
| explorer-agent | Discovery | "explore", "map", "structure" |
| documentation-writer | Documentation | "write docs", "create README" |
| performance-optimizer | Performance | "slow", "optimize", "profiling" |
| project-planner | Planning | "plan", "roadmap", "milestones" |
| seo-specialist | SEO | "SEO", "meta tags", "search ranking" |
| game-developer | Game Development | "game", "Unity", "Godot", "Phaser" |

Antigravity Built-in Agents

These work alongside custom agents:
| Agent | Model | Purpose |
|-------|-------|---------|
| Explore | Haiku | Fast read-only codebase search |
| Plan | Sonnet | Research during plan mode |
| General-purpose | Sonnet | Complex multi-step modifications |
Use Explore for quick searches, custom agents for domain expertise.

Synthesis Protocol

After all agents complete, synthesize:
## Orchestration Synthesis

### Task Summary
[What was accomplished]

### Agent Contributions
| Agent | Finding |
|-------|--------|
| security-auditor | Found X |
| backend-specialist | Identified Y |

### Consolidated Recommendations
1. **Critical**: [Issue from Agent A]
2. **Important**: [Issue from Agent B]
3. **Nice-to-have**: [Enhancement from Agent C]

### Action Items
- [ ] Fix critical security issue
- [ ] Refactor API endpoint
- [ ] Add missing tests
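The synthesis template above can also be rendered programmatically. A minimal sketch, assuming findings arrive as `(agent, finding)` pairs and recommendations as `(priority, issue)` pairs:

```python
def synthesize(summary, contributions, recommendations):
    """Render the orchestration synthesis report as Markdown.

    contributions: list of (agent, finding) pairs.
    recommendations: list of (priority, issue) pairs, ordered by severity.
    """
    lines = ["## Orchestration Synthesis", "", "### Task Summary", summary, ""]
    # Agent contributions as a Markdown table.
    lines += ["### Agent Contributions",
              "| Agent | Finding |",
              "|-------|---------|"]
    lines += [f"| {agent} | {finding} |" for agent, finding in contributions]
    # Numbered, prioritized recommendations.
    lines += ["", "### Consolidated Recommendations"]
    lines += [f"{i}. **{priority}**: {issue}"
              for i, (priority, issue) in enumerate(recommendations, start=1)]
    return "\n".join(lines)
```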

Best Practices

  1. Choose relevant agents - invoke only those of the 17 specialized agents whose expertise the task needs
  2. Logical order - Discovery → Analysis → Implementation → Testing
  3. Share context - Pass relevant findings to subsequent agents
  4. Single synthesis - One unified report, not separate outputs
  5. Verify changes - Always include test-engineer for code modifications

Key Benefits

  • Single session - All agents share context
  • AI-controlled - Claude orchestrates autonomously
  • Native integration - Works with built-in Explore, Plan agents
  • Resume support - Can continue previous agent work
  • Context passing - Findings flow between agents

Use Cases

  • Comprehensive codebase analysis
  • Multi-domain feature implementation
  • Security audits with multiple perspectives
  • Full-stack application building
  • Code quality reviews from multiple angles
  • Performance optimization across layers
  • Migration planning and execution

Which Agents Use This Skill

  • orchestrator - Primary orchestrator agent that coordinates other agents
  • Other agents can invoke specialist agents when needed

Sequential vs Parallel Execution

Sequential (dependencies)

1. database-architect (must complete first)
2. backend-specialist (needs schema)
3. frontend-specialist (needs API)
4. test-engineer (needs all code)

Parallel (independent)

After core implementation:
├─ security-auditor (code review)
├─ performance-optimizer (bundle analysis)
└─ test-engineer (test generation)

[All run simultaneously]
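The fan-out above can be sketched with Python's standard `concurrent.futures`; the `run_agent` callable is a hypothetical stand-in for invoking a named agent:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(run_agent, tasks):
    """Run independent agents simultaneously and collect their results.

    tasks: dict mapping agent name -> task description.
    Returns a dict mapping agent name -> that agent's findings.
    """
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        # Submit every independent agent at once...
        futures = {name: pool.submit(run_agent, name, task)
                   for name, task in tasks.items()}
        # ...then gather results as each one completes.
        return {name: future.result() for name, future in futures.items()}
```

For example, after core implementation you might fan out `{"security-auditor": "code review", "performance-optimizer": "bundle analysis", "test-engineer": "test generation"}` in one call.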

Context Passing Techniques

  1. Explicit context: “Based on the security findings…”
  2. File references: “Review the files identified by explorer-agent”
  3. Shared artifacts: Reference files created by previous agents
  4. Issue tracking: Pass list of issues to next agent
  5. Decision logs: Share architectural decisions

Error Handling in Orchestration

  • If an agent fails, log error and continue with other agents
  • Critical dependencies should block subsequent agents
  • Provide partial synthesis even if some agents fail
  • Document which agents completed successfully
  • Offer to retry failed agents with different parameters
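The rules above can be sketched as a wrapper that logs each failure, continues with the remaining agents, and reports which agents completed so a partial synthesis is still possible (names are hypothetical):

```python
def run_with_error_handling(run_agent, tasks):
    """Invoke each agent in turn, logging failures instead of aborting.

    Returns (results, failed): findings from agents that completed, and
    the names of agents that raised, so a partial synthesis can still be
    produced and failed agents retried with different parameters.
    """
    results, failed = {}, []
    for name, task in tasks.items():
        try:
            results[name] = run_agent(name, task)
        except Exception as exc:  # log the error and continue
            print(f"Agent {name} failed: {exc}")
            failed.append(name)
    return results, failed
```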