What Are Hands?
Hands are OpenFang’s core innovation — pre-built autonomous capability packages that run independently, on schedules, without you having to prompt them. This is not a chatbot. A Hand wakes up at 6 AM, researches your competitors, builds a knowledge graph, scores the findings, and delivers a report to your Telegram before you’ve had coffee.

Hands vs Agents
| Aspect | Traditional Agents | Hands |
|---|---|---|
| Interaction | You chat with them | They work for you |
| Activation | You spawn manually | You activate from marketplace |
| Operation | Reactive (wait for messages) | Autonomous (run on schedules) |
| Configuration | Write manifest by hand | Settings UI with validation |
| Purpose | General-purpose assistant | Domain-complete capability package |
| Expertise | Depends on your prompt | 500+ word operational playbook + SKILL.md |
| Guardrails | You implement | Built-in approval gates |
Think of Hands as autonomous employees and traditional agents as chatbot assistants.
HAND.toml Format
Every Hand is defined by a `HAND.toml` manifest.
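A minimal sketch of what a manifest could look like, built from the field names in the reference below — the values and the exact nested-table layout are illustrative assumptions, not copied from a shipped Hand:

```toml
# Illustrative HAND.toml sketch; values and table layout are assumptions.
id = "clip"
name = "Clip"
description = "Turns YouTube videos into captioned vertical shorts"
category = "content"
icon = "🎬"
tools = ["shell", "memory_store"]
skills = []        # empty = all skills allowed
mcp_servers = []

[agent]
# Agent manifest template (system prompt, model, schedule, ...)

[[settings]]
# Configurable settings (see the Settings System section)

[[requires]]
# Requirements (binaries, env vars) checked at activation
```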
HAND.toml Fields Reference
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | string | ✅ | Unique hand identifier |
| `name` | string | ✅ | Human-readable name |
| `description` | string | ✅ | What this Hand does |
| `category` | HandCategory | ✅ | content / security / productivity / development / communication / data |
| `icon` | string | — | Icon emoji |
| `tools` | Vec<String> | — | Tools the agent needs |
| `skills` | Vec<String> | — | Skill allowlist (empty = all) |
| `mcp_servers` | Vec<String> | — | MCP server allowlist |
| `requires` | Vec<HandRequirement> | — | Requirements to satisfy |
| `settings` | Vec<HandSetting> | — | Configurable settings |
| `agent` | HandAgentConfig | ✅ | Agent manifest template |
| `dashboard` | HandDashboard | — | Metrics schema |
The 7 Bundled Hands
OpenFang ships with 7 production-ready Hands.

Clip — YouTube to Vertical Shorts
What it does: Takes a YouTube URL, downloads it, identifies the best moments, cuts them into vertical shorts with captions and thumbnails, optionally adds AI voice-over, and publishes to Telegram/WhatsApp.

8-phase pipeline:
- Download video with yt-dlp
- Transcribe audio (5 STT backends: Groq/OpenAI/Deepgram/AssemblyAI/Local)
- Identify highlights using LLM analysis of transcript
- Extract clips with FFmpeg (timestamps + vertical crop)
- Add captions (burned into video)
- Generate thumbnails (first frame extraction)
- (Optional) Add AI voice-over with ElevenLabs/OpenAI TTS
- Deliver to configured destination
openfang hand activate clip
Lead — Autonomous Lead Generation
What it does: Runs daily. Discovers prospects matching your ICP (Ideal Customer Profile), enriches them with web research, scores 0-100, deduplicates against your existing database, and delivers qualified leads in CSV/JSON/Markdown.

Pipeline:
- Load ICP profile from memory (or prompt you to define it)
- Search for prospects (LinkedIn, company directories, industry databases)
- Enrich each prospect (website, tech stack, funding, team size, etc.)
- Score against ICP criteria (0-100 scale)
- Deduplicate against existing leads (memory store)
- Rank by score
- Export top N leads
- Store enrichment data for future runs
openfang hand activate lead
Collector — OSINT Intelligence
What it does: You give it a target (company, person, topic). It monitors continuously — change detection, sentiment tracking, knowledge graph construction, and critical alerts when something important shifts.

Pipeline:
- Initial deep scan (web scraping, social media, public records)
- Build knowledge graph (entities + relations)
- Continuous monitoring (checks every N hours)
- Change detection (diff against previous state)
- Sentiment analysis (positive/negative/neutral trend)
- Critical alert triggers (keywords, thresholds)
- Report generation (daily/weekly summaries)
openfang hand activate collector
Predictor — Superforecasting Engine
What it does: Collects signals from multiple sources, builds calibrated reasoning chains, makes predictions with confidence intervals, and tracks its own accuracy using Brier scores.

Method:
- Frame prediction question clearly
- Gather base rates (historical data)
- Collect current signals (news, trends, indicators)
- Build reasoning chains (pro/con arguments)
- Assign confidence intervals (e.g., 60-75% likely)
- (Optional) Run contrarian mode (deliberately argue against consensus)
- Track outcome when prediction resolves
- Update Brier score (calibration metric)
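The Brier score used in the last step is a standard calibration metric: the mean squared difference between forecast probabilities and resolved outcomes, where 0.0 is perfect and always guessing 50% scores 0.25. A minimal sketch in Rust (not OpenFang's actual implementation):

```rust
/// Brier score: mean squared error between forecast probabilities
/// (0.0..=1.0) and resolved outcomes (true = it happened).
fn brier_score(forecasts: &[(f64, bool)]) -> f64 {
    let sum: f64 = forecasts
        .iter()
        .map(|&(p, happened)| {
            let outcome = if happened { 1.0 } else { 0.0 };
            (p - outcome).powi(2)
        })
        .sum();
    sum / forecasts.len() as f64
}

fn main() {
    // Two resolved predictions: "70% likely" happened, "60% likely" did not.
    let score = brier_score(&[(0.7, true), (0.6, false)]);
    println!("{score:.3}"); // prints 0.225 — lower is better
}
```

A Hand tracking this over time can tell you not just what it predicted, but how well calibrated those predictions were.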
openfang hand activate predictor
Researcher — Deep Autonomous Research
What it does: Deep autonomous research. Cross-references multiple sources, evaluates credibility using CRAAP criteria (Currency, Relevance, Authority, Accuracy, Purpose), generates cited reports with APA formatting, and supports multiple languages.

CRAAP credibility checks:
- Currency: How recent is the information?
- Relevance: How relevant to your question?
- Authority: Who is the author? What are their credentials?
- Accuracy: Can it be verified? Are sources cited?
- Purpose: Why does this information exist? Bias?
openfang hand activate researcher
Twitter — Autonomous Account Manager
What it does: Autonomous Twitter/X account manager. Creates content in 7 rotating formats, schedules posts for optimal engagement, responds to mentions, tracks performance metrics.

7 content formats:
- Thread (deep dive)
- Quote tweet (commentary)
- Poll (engagement)
- Image + caption (visual)
- Video clip (from YouTube via Clip Hand)
- Link + summary (curation)
- Short take (opinion)
openfang hand activate twitter
Browser — Web Automation
What it does: Web automation agent. Navigates sites, fills forms, clicks buttons, handles multi-step workflows. Uses a Playwright bridge with session persistence.

Mandatory purchase approval gate: it will never spend your money without explicit confirmation.

Capabilities:
- Navigate to URLs
- Fill forms (text inputs, dropdowns, checkboxes)
- Click buttons and links
- Handle multi-step workflows (login → search → filter → extract)
- Screenshot capture
- Session persistence (cookies, localStorage)
openfang hand activate browser

Lifecycle Management
Activate a Hand
When you activate a Hand, four things happen:

- Requirement checks run:
  - Binary existence (e.g., `ffmpeg` on PATH)
  - Environment variables (e.g., `GROQ_API_KEY` set)
  - API key validity (optional ping to provider)
- Settings resolution:
  - Your config is merged with defaults
  - Generates a prompt block (appended to the system prompt)
  - Collects required env vars for the subprocess
- Agent spawn:
  - Kernel spawns the agent with a HandDefinition → AgentManifest conversion
  - Capabilities granted based on the `tools` list
  - Skills and MCP servers filtered by allowlists
  - Workspace created at `~/.openfang/workspaces/{hand_id}-{instance_id}/`
- Instance registered:
  - `HandInstance` saved to registry
  - Status set to `Active`
  - Dashboard metrics wired to memory keys
  - Background loop starts (if schedule mode is continuous/periodic/proactive)

Code reference: `crates/openfang-hands/src/registry.rs:activate_hand()`
Pause a Hand
Status becomes Paused. The background loop stops and state is persisted.
Resume a Hand
Status returns to Active and the background loop restarts.
Deactivate a Hand
Check Hand Status

Status output shows:
- Instance ID
- Status (Active/Paused/Error/Inactive)
- Agent ID
- Agent name
- Activated at timestamp
- Dashboard metrics (if any)
Settings System
Hands can declare configurable settings.

Setting Types
Select

Dropdown menu with predefined options.

Resolves to:
- Prompt block: `- Speech-to-Text Provider: Groq Whisper (groq)`
- Env vars: `["GROQ_API_KEY"]` (passed to subprocess)
Toggle

Boolean on/off switch.

Resolves to:
- Prompt block: `- Enable AI Voice-Over: Enabled` (if true)
Text
Free-form text input.

Resolves to:
- Prompt block: `- Custom Outro Text: Thanks for watching!`

Resolution Example

Resolving a full configuration produces the combined prompt block plus the list of env vars passed to the subprocess, e.g. `["GROQ_API_KEY", "CUSTOM_OUTRO"]`.
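As a sketch, the setting declarations behind these examples might look like this — the field names (`key`, `label`, `setting_type`, `env_var`, and so on) are assumptions for illustration, not confirmed against the OpenFang source:

```toml
# Hypothetical HandSetting declarations; field names are illustrative.
[[settings]]
key = "stt_provider"
label = "Speech-to-Text Provider"
setting_type = "select"
options = ["groq", "openai", "deepgram", "assemblyai", "local"]
default = "groq"
env_var = "GROQ_API_KEY"   # required when "groq" is selected

[[settings]]
key = "voiceover"
label = "Enable AI Voice-Over"
setting_type = "toggle"
default = false

[[settings]]
key = "custom_outro"
label = "Custom Outro Text"
setting_type = "text"
default = "Thanks for watching!"
env_var = "CUSTOM_OUTRO"
```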
Code reference: crates/openfang-hands/src/lib.rs:resolve_settings()
Dashboard Metrics
Hands can declare metrics displayed on the dashboard. Metric values are written via the `memory_store` tool and rendered according to their declared format:
- `"number"` → `42`
- `"duration"` → `2h 15m 30s`
- `"bytes"` → `1.2 GB`
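A sketch of what such a metric declaration could look like in `HAND.toml` — the field names are assumptions, not taken from the OpenFang source:

```toml
# Hypothetical HandDashboard metric declarations; field names are illustrative.
[[dashboard.metrics]]
key = "clips_generated"
label = "Clips Generated"
format = "number"

[[dashboard.metrics]]
key = "total_render_time"
label = "Total Render Time"
format = "duration"
```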
Building Your Own Hand
- Create `HAND.toml`
- Define metadata and requirements
- Write the system prompt (a 500+ word operational playbook)
- Add dashboard metrics
- Test locally
- Publish to FangHub
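For the requirements step, the declarations might look like this sketch — the `HandRequirement` field names (`kind`, `value`) are assumptions for illustration:

```toml
# Hypothetical HandRequirement entries; field names are illustrative.
[[requires]]
kind = "binary"
value = "ffmpeg"          # must be on PATH at activation

[[requires]]
kind = "env_var"
value = "GROQ_API_KEY"    # must be set before activation
```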
Next Steps
- Activate Your First Hand: step-by-step guide to activating the Clip Hand
- Build a Custom Hand: create your own autonomous capability package
- Memory System: learn how Hands store and recall data
- Security Model: understand approval gates and guardrails
