Stoneforge uses a sophisticated dual-storage model that balances query performance with git-friendliness and data durability.
The Dual Storage Principle
Core Principle: SQLite is the cache; JSONL is the source of truth.
This architecture provides:
SQLite: Performance
Fast indexed queries
Full-text search (FTS5)
Complex JOINs
Materialized views
Ephemeral - can be rebuilt
JSONL: Durability
Git-tracked source of truth
Append-only (no overwrites)
Human-readable diffs
Mergeable across branches
Durable - never deleted
Storage Architecture
File Structure
.stoneforge/
├── stoneforge.db # SQLite cache
└── sync/
├── elements.jsonl # Element source of truth
└── dependencies.jsonl # Dependency relationships
Data Flow Patterns
Write Path
When you create or update an element:
Write to SQLite
Data is immediately written to the SQLite database:
const task = await api.create({
  type: 'task',
  title: 'Implement feature X',
  priority: 2,
  createdBy: directorId,
});
// Task is now queryable from SQLite
Benefits:
Immediate availability for queries
Fast indexed lookups
Complex filtering and sorting
Mark as Dirty
Element is flagged for export:
INSERT INTO dirty_elements (element_id, marked_at)
VALUES ('el-abc123', '2026-03-02T10:00:00.000Z');
This tracks which elements need to be exported to JSONL.
Export to JSONL
Dirty elements are appended to JSONL:
# Automatic export on write
sf sync export
# Or configure auto-export
# (happens automatically in most operations)
JSONL entry:
{"id":"el-abc123","type":"task","title":"Implement feature X","status":"open","priority":2,"complexity":3,"taskType":"task","createdAt":"2026-03-02T10:00:00.000Z","updatedAt":"2026-03-02T10:00:00.000Z","createdBy":"en-director1","tags":[],"metadata":{}}
The write path is optimized for performance - SQLite writes are immediate, JSONL exports are batched.
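The write-then-batch-export flow above can be sketched in miniature. The class and member names here are illustrative, not Stoneforge's actual internals: an in-memory map stands in for SQLite, a set stands in for the dirty_elements table, and a string array stands in for elements.jsonl.

```typescript
interface Element { id: string; [key: string]: unknown }

class DualStore {
  private cache = new Map<string, Element>(); // stands in for SQLite
  private dirty = new Set<string>();          // stands in for dirty_elements
  public jsonl: string[] = [];                // stands in for elements.jsonl

  // Write path step 1 + 2: immediate cache write, then mark dirty
  write(element: Element): void {
    this.cache.set(element.id, element); // immediately queryable
    this.dirty.add(element.id);          // flagged for export
  }

  // Write path step 3: batch-append dirty elements to JSONL
  flush(): void {
    for (const id of this.dirty) {
      const el = this.cache.get(id);
      if (el) this.jsonl.push(JSON.stringify(el)); // append-only
    }
    this.dirty.clear();
  }
}
```

A second flush with no intervening writes appends nothing, since the dirty set was cleared.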
Read Path
All queries go directly to SQLite:
// Fast indexed queries - no JSONL involved
const readyTasks = await api.ready();
const task = await api.get(taskId);
const blockedTasks = await api.blocked();
SQLite provides:
Indexed Lookups
Full-Text Search
Materialized Views
Complex Joins
-- Primary key index
SELECT * FROM elements WHERE id = 'el-abc123';
-- Status index
SELECT * FROM elements
WHERE type = 'task' AND status = 'open';
-- Assignee index
SELECT * FROM elements
WHERE assignee = 'en-worker1';
// FTS5-powered search
const results = await api.searchDocumentsFTS('authentication', {
  hardCap: 10,
});
Uses SQLite FTS5 with BM25 ranking:
SELECT * FROM documents_fts
WHERE documents_fts MATCH 'authentication'
ORDER BY rank;
The blocked cache is a materialized view:
CREATE TABLE blocked_cache (
  element_id TEXT PRIMARY KEY,
  is_blocked INTEGER NOT NULL,
  updated_at TEXT NOT NULL
);
Recomputed when dependencies change:
await api.addDependency({
  blockerId: taskAId,
  blockedId: taskBId,
  type: 'blocks',
  createdBy: actorId,
});
// blocked_cache updated automatically
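What the recomputation does can be sketched minimally, assuming a simplified dependency shape: an element is blocked when at least one of its blockers is still open. The real cache update is internal to the API; this helper is purely illustrative.

```typescript
// Simplified dependency record (illustrative)
interface Dep { blockerId: string; blockedId: string }

// Recompute the set of blocked element ids from dependencies and
// the set of still-open blocker ids.
function computeBlocked(deps: Dep[], openIds: Set<string>): Set<string> {
  const blocked = new Set<string>();
  for (const d of deps) {
    if (openIds.has(d.blockerId)) blocked.add(d.blockedId); // open blocker => blocked
  }
  return blocked;
}
```

Once a blocker closes, dropping its id from the open set unblocks its dependents on the next recomputation.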
// Get tasks with their dependencies
const tasksWithDeps = await api.getTasksWithDependencies();
SQL join:
SELECT
  e.*,
  d.blocker_id,
  d.type AS dep_type
FROM elements e
LEFT JOIN dependencies d ON e.id = d.blocked_id
WHERE e.type = 'task';
Sync and Merge Path
When branches diverge, JSONL files can be merged:
Branch Divergence
Two agents work in parallel branches:
# Branch A: Worker 1 creates task-1
git checkout -b feature/auth
# .stoneforge/sync/elements.jsonl updated
# Branch B: Worker 2 creates task-2
git checkout -b feature/api
# .stoneforge/sync/elements.jsonl updated
JSONL Divergence
Each branch has different JSONL content:
Branch A: {"id":"el-task1","title":"Add auth", ...}
Branch B: {"id":"el-task2","title":"Add API", ...}
Git Merge
When branches merge, JSONL files are automatically merged by git:
git checkout main
git merge feature/auth
git merge feature/api
Merged JSONL:
{"id":"el-task1","title":"Add auth", ...}
{"id":"el-task2","title":"Add API", ...}
SQLite Rebuild
After the merge, rebuild SQLite from JSONL:
sf sync import
The SQLite cache is cleared and repopulated from the JSONL source of truth.
Always import after a git merge so SQLite reflects the merged state.
Conflict Resolution
When two branches modify the same element, Stoneforge resolves conflicts automatically:
Conflict Detection
Stoneforge uses content hashing to detect conflicts:
// Each element has a content hash
const hash = hashElement(element);
// Stored in metadata
element.metadata._contentHash = hash;
During import, if the same element ID appears with different hashes, a conflict exists.
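A plausible sketch of such a content hash follows, assuming a canonical JSON serialization with sorted top-level keys so the same logical content always hashes identically. The exact fields and canonicalization Stoneforge uses are not specified here; `hashElement` below is an illustrative stand-in, not the real implementation.

```typescript
import { createHash } from 'node:crypto';

// Hash an element over a canonical serialization: sorting top-level keys
// makes the hash independent of property insertion order.
function hashElement(element: Record<string, unknown>): string {
  const canonical = JSON.stringify(
    Object.fromEntries(
      Object.entries(element).sort(([a], [b]) => a.localeCompare(b))
    )
  );
  return createHash('sha256').update(canonical).digest('hex');
}
```

Two serializations of the same content produce the same hash, so only genuine divergence registers as a conflict.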
Resolution Strategy
Last-Write-Wins
Terminal Status Priority
Manual Resolution
Default strategy - newer timestamp wins:
if (elementA.updatedAt > elementB.updatedAt) {
  // Keep element A
  return elementA;
} else {
  // Keep element B
  return elementB;
}
Example:
// Branch A: updated at 10:00
{"id":"el-task1","status":"open","updatedAt":"2026-03-02T10:00:00.000Z"}
// Branch B: updated at 10:05
{"id":"el-task1","status":"in_progress","updatedAt":"2026-03-02T10:05:00.000Z"}
// Result: Branch B wins
{"id":"el-task1","status":"in_progress","updatedAt":"2026-03-02T10:05:00.000Z"}
Special rule - closed/tombstone always wins:
if (elementA.status === 'closed' || elementA.status === 'tombstone') {
  // Keep element A regardless of timestamp
  return elementA;
}
Example:
// Branch A: closed at 10:00
{"id":"el-task1","status":"closed","updatedAt":"2026-03-02T10:00:00.000Z"}
// Branch B: reopened at 10:05
{"id":"el-task1","status":"open","updatedAt":"2026-03-02T10:05:00.000Z"}
// Result: Branch A wins (closed status preserved)
{"id":"el-task1","status":"closed","updatedAt":"2026-03-02T10:00:00.000Z"}
For complex conflicts, manual resolution may be needed:
# Import with conflict detection
sf sync import --detect-conflicts
# Review conflicts
sf sync conflicts
# Resolve manually
sf sync resolve <element-id> --keep <version>
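The two automatic rules can be combined into a single resolver: a terminal status (closed/tombstone) wins regardless of timestamp; otherwise the newer `updatedAt` wins. This is a sketch with a simplified Element shape, not the actual Stoneforge implementation. ISO-8601 timestamps in the same format compare correctly as plain strings.

```typescript
// Simplified element shape for illustration
interface Element { id: string; status: string; updatedAt: string }

const TERMINAL = new Set(['closed', 'tombstone']);

function resolve(a: Element, b: Element): Element {
  const aTerminal = TERMINAL.has(a.status);
  const bTerminal = TERMINAL.has(b.status);
  // Terminal status priority: a terminal side beats a non-terminal one
  if (aTerminal !== bTerminal) return aTerminal ? a : b;
  // Last-write-wins: ISO timestamps are lexicographically comparable
  return a.updatedAt >= b.updatedAt ? a : b;
}
```

When both sides are terminal (e.g. closed on both branches), the resolver falls through to last-write-wins.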
JSONL File Format
Each line is a complete JSON object:
{"id":"el-abc123","type":"task","title":"Implement feature X","status":"open","priority":2,"complexity":3,"taskType":"task","createdAt":"2026-03-02T10:00:00.000Z","updatedAt":"2026-03-02T10:00:00.000Z","createdBy":"en-director1","tags":[],"metadata":{}}
{"blockedId":"el-task2","blockerId":"el-task1","type":"blocks","createdAt":"2026-03-02T10:00:00.000Z","createdBy":"en-director1","metadata":{}}
Update Semantics
JSONL is append-only - updates append a new version:
{"id":"el-task1","status":"open","updatedAt":"2026-03-02T10:00:00.000Z", ...}
{"id":"el-task1","status":"in_progress","updatedAt":"2026-03-02T10:05:00.000Z", ...}
{"id":"el-task1","status":"review","updatedAt":"2026-03-02T10:10:00.000Z", ...}
During import, later entries override earlier ones for the same ID.
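The last-entry-wins import semantics can be sketched as a fold over the file's lines. `foldJsonl` is an illustrative helper, not the real importer: it parses each line and keeps only the latest entry per id.

```typescript
// Minimal entry shape: every JSONL line carries at least an id
type Entry = { id: string; [key: string]: unknown };

// Fold append-only JSONL lines into a last-write-wins map:
// later lines for the same id override earlier ones.
function foldJsonl(lines: string[]): Map<string, Entry> {
  const latest = new Map<string, Entry>();
  for (const line of lines) {
    if (!line.trim()) continue; // skip blank lines
    const entry = JSON.parse(line) as Entry;
    latest.set(entry.id, entry); // later entry wins
  }
  return latest;
}
```

Replaying the file from the top therefore always converges on the most recent version of each element.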
SQLite Schema
Core Tables
Elements Table
Dependencies Table
Blocked Cache
Dirty Tracking
CREATE TABLE elements (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,
  data TEXT NOT NULL,  -- JSON blob
  created_at TEXT NOT NULL,
  updated_at TEXT NOT NULL,
  created_by TEXT NOT NULL,
  -- Indexed columns for fast queries
  status TEXT,
  priority INTEGER,
  assignee TEXT,
  owner TEXT,
  deleted_at TEXT
);
CREATE INDEX idx_elements_type ON elements (type);
CREATE INDEX idx_elements_status ON elements (status);
CREATE INDEX idx_elements_assignee ON elements (assignee);
CREATE INDEX idx_elements_created_at ON elements (created_at);
FTS5 Tables
Full-text search for documents:
CREATE VIRTUAL TABLE documents_fts USING fts5(
  element_id UNINDEXED,
  title,
  content,
  tags,
  category,
  content='elements',
  content_rowid='rowid',
  tokenize='porter'
);
FTS5 uses Porter stemming for better search quality (e.g., “running” matches “run”).
Cross-Runtime Support
Stoneforge supports multiple JavaScript runtimes with different SQLite backends:
Bun
Node.js
Browser
Auto-Detection
Native SQLite via bun:sqlite:
import { Database } from 'bun:sqlite';
import { createStorage } from '@stoneforge/storage';
const storage = createStorage('.stoneforge/stoneforge.db', {
  backend: 'bun',
});
Performance:
Fastest option (native binding)
Zero npm dependencies
Built into Bun runtime
better-sqlite3 package:
import Database from 'better-sqlite3';
import { createStorage } from '@stoneforge/storage';
const storage = createStorage('.stoneforge/stoneforge.db', {
  backend: 'node',
});
Performance:
Production-ready
Synchronous API
Native binding (requires compilation)
sql.js (WASM):
import initSqlJs from 'sql.js';
import { createStorage } from '@stoneforge/storage';
const SQL = await initSqlJs();
const storage = createStorage(':memory:', {
  backend: 'browser',
  sqlJs: SQL,
});
Performance:
Pure JavaScript (no native deps)
Runs in browser
Slower than native
Automatic runtime detection:
import { createStorage } from '@stoneforge/storage';
// Automatically selects the best backend
const storage = createStorage('.stoneforge/stoneforge.db');
// Detection order:
// 1. Bun (bun:sqlite available)
// 2. Node (better-sqlite3 available)
// 3. Browser (sql.js available)
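The detection order in the comments above amounts to a simple priority check. This sketch models backend availability with plain booleans rather than real runtime probing; the actual logic lives inside `createStorage` and is not shown here.

```typescript
type Backend = 'bun' | 'node' | 'browser';

// First available backend wins, in priority order: bun > node > browser.
function detectBackend(env: {
  hasBun: boolean;
  hasBetterSqlite3: boolean;
  hasSqlJs: boolean;
}): Backend {
  if (env.hasBun) return 'bun';
  if (env.hasBetterSqlite3) return 'node';
  if (env.hasSqlJs) return 'browser';
  throw new Error('No SQLite backend available');
}
```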
Performance Characteristics
Read operations:
| Operation | Complexity | Notes |
| --- | --- | --- |
| Get by ID | O(1) | Primary key index |
| List by status | O(log n) | Status index |
| Ready tasks | O(n) | Filtered scan with blocked cache |
| FTS search | O(log n) | FTS5 BM25 ranking |
| Dependency traversal | O(d) | d = dependency depth |
Write operations:
| Operation | Complexity | Notes |
| --- | --- | --- |
| Create element | O(1) | Single INSERT |
| Update element | O(1) | Single UPDATE |
| Add dependency | O(d) | d = affected tasks for cache update |
| Delete element | O(1) | Soft delete (UPDATE) |
Storage Size
SQLite (approximate sizes):
1,000 tasks: ~500 KB
10,000 tasks: ~5 MB
100,000 tasks: ~50 MB
Includes indexes and FTS tables.
JSONL (approximate sizes):
1,000 tasks: ~300 KB
10,000 tasks: ~3 MB
100,000 tasks: ~30 MB
Compact JSON, no pretty-printing.
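For rough capacity planning, the figures above imply roughly 500 bytes per task in SQLite (indexes and FTS included) and roughly 300 bytes per task in JSONL. These constants are illustrative approximations derived from the listed sizes, not measured guarantees:

```typescript
// Estimate storage footprint in KB from a task count, using the
// approximate per-task averages implied by the size tables above.
function estimateStorageKB(taskCount: number): { sqliteKB: number; jsonlKB: number } {
  const SQLITE_BYTES_PER_TASK = 500; // approximation
  const JSONL_BYTES_PER_TASK = 300;  // approximation
  return {
    sqliteKB: (taskCount * SQLITE_BYTES_PER_TASK) / 1000,
    jsonlKB: (taskCount * JSONL_BYTES_PER_TASK) / 1000,
  };
}
```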
Backup and Recovery
Backup Strategy
Since JSONL is the source of truth and git-tracked:
# Backup is automatic with git
git add .stoneforge/sync/
git commit -m "Backup Stoneforge data"
git push
# SQLite is ephemeral and can be deleted
rm .stoneforge/stoneforge.db
Recovery
Clone Repository
git clone <repo-url>
cd <repo>
Initialize Stoneforge
Creates empty SQLite database.
Import from JSONL
sf sync import
Rebuilds SQLite from the JSONL source of truth.
Verify
sf stats
# Should show correct counts
SQLite can always be rebuilt from JSONL. Never back up only SQLite.
Best Practices
1. Always sync after git operations
git pull
sf sync import # Rebuild SQLite from updated JSONL
git merge feature-branch
sf sync import # Rebuild after merge
2. Commit JSONL regularly
# After significant work
git add .stoneforge/sync/
git commit -m "Update task state"
git push
3. Never edit JSONL manually
JSONL files are machine-generated. Use the API:
// Correct
await api.update(taskId, { status: 'closed' });
// Incorrect
// Don't edit .stoneforge/sync/elements.jsonl directly
4. Rebuild SQLite when in doubt
# If SQLite seems out of sync
rm .stoneforge/stoneforge.db
sf sync import
5. Use transactions for bulk operations
await storage.transaction(() => {
  for (const task of tasks) {
    api.create(task);
  }
});
// All-or-nothing atomicity
Next Steps
Architecture See how storage fits into overall system
Task Management Learn about task data and queries
Dependencies Understand dependency storage
Workflows Work with workflow state