How Gitflare uses Cloudflare Durable Objects for stateful repository storage
Durable Objects are the core storage primitive in Gitflare, providing stateful, strongly consistent storage for Git repositories. Each repository gets its own Durable Object instance with dedicated SQLite storage.
Gitflare creates a Durable Object for each repository, identified by the full repository name (e.g., username/repo):
```typescript
export function getRepoDOStub(fullRepoName: string) {
  const stub = (env.REPO as DurableObjectNamespace<RepoBase>).getByName(
    fullRepoName
  );
  stub.setFullName(fullRepoName);
  return stub;
}
```
This design provides:

- **Isolation**: Each repository has dedicated resources and storage
- **Consistency**: All operations for a repository are serialized through one instance
- **Scalability**: Adding repositories doesn’t affect existing ones
- **Simplicity**: No need for distributed locking or consensus protocols
Cloudflare automatically handles the creation, migration, and location of Durable Objects. You just call getByName() and the platform takes care of the rest.
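As a sketch of how a Worker might route an incoming Git Smart HTTP request to the right Durable Object: the `parseRepoPath` helper below is hypothetical, not from Gitflare's source, and assumes paths like `/username/repo/info/refs`.

```typescript
// Hypothetical helper: extract "username/repo" from a Smart HTTP request
// path such as /alice/project/info/refs or /alice/project.git/git-upload-pack.
function parseRepoPath(pathname: string): string | null {
  const match = pathname.match(/^\/([^/]+)\/([^/]+?)(?:\.git)?(?:\/|$)/);
  return match ? `${match[1]}/${match[2]}` : null;
}

// In a Worker fetch handler, the parsed name keys the Durable Object lookup:
//
//   const fullRepoName = parseRepoPath(new URL(request.url).pathname);
//   if (!fullRepoName) return new Response("not found", { status: 404 });
//   const stub = getRepoDOStub(fullRepoName);
//   return stub.fetch(request);
```

Because `getByName()` is deterministic, every request for the same `username/repo` reaches the same instance.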
Each layer has a specific responsibility:

- **Git Service**: High-level Git operations using isomorphic-git
- **IsoGitFs**: Adapts DOFS to the Node.js fs.promises API that isomorphic-git expects
- **DOFS**: Provides POSIX-like filesystem operations backed by SQLite
- **SQLite Storage**: Cloudflare’s persistent, transactional storage
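A minimal sketch of the adapter idea in the IsoGitFs layer: wrap a flat key-value store (an in-memory `Map` stands in for DOFS-over-SQLite here) behind the small `fs.promises` surface that isomorphic-git calls. The names `KVBackend` and `makeFsAdapter` are illustrative, not Gitflare's actual API.

```typescript
// Stand-in for the DOFS/SQLite layer: a flat path -> bytes store.
interface KVBackend {
  get(path: string): Uint8Array | undefined;
  put(path: string, data: Uint8Array): void;
  delete(path: string): boolean;
  keys(): string[];
}

// Expose the subset of fs.promises that isomorphic-git relies on.
function makeFsAdapter(backend: KVBackend) {
  return {
    promises: {
      async readFile(path: string): Promise<Uint8Array> {
        const data = backend.get(path);
        if (data === undefined) {
          throw Object.assign(new Error(`ENOENT: ${path}`), { code: "ENOENT" });
        }
        return data;
      },
      async writeFile(path: string, data: Uint8Array): Promise<void> {
        backend.put(path, data);
      },
      async unlink(path: string): Promise<void> {
        if (!backend.delete(path)) {
          throw Object.assign(new Error(`ENOENT: ${path}`), { code: "ENOENT" });
        }
      },
      async readdir(dir: string): Promise<string[]> {
        // Directories are implicit: list the first path segment under the prefix.
        const prefix = dir.endsWith("/") ? dir : dir + "/";
        const names = new Set<string>();
        for (const key of backend.keys()) {
          if (key.startsWith(prefix)) {
            names.add(key.slice(prefix.length).split("/")[0]);
          }
        }
        return [...names];
      },
    },
  };
}
```

The real layer would also implement `stat`, `mkdir`, and the other calls isomorphic-git's fs interface specifies, but the shape is the same: translate each POSIX-style call into key-value operations on the storage below.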
The in-memory cache is lost when the Durable Object is evicted or migrated, but this is transparent to clients: it just means a cache miss and a reload from SQLite.
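The read-through pattern described above can be sketched as follows; the `Store` interface is a stand-in for the SQLite-backed layer, and the class name is illustrative:

```typescript
// Stand-in for the durable SQLite-backed layer.
interface Store {
  load(key: string): Uint8Array | undefined;
}

// In-memory cache in front of durable storage. If the Durable Object is
// evicted, the Map is simply empty on the next request and entries reload
// transparently from the persistent layer.
class ReadThroughCache {
  private cache = new Map<string, Uint8Array>();
  constructor(private store: Store) {}

  get(key: string): Uint8Array | undefined {
    const hit = this.cache.get(key);
    if (hit !== undefined) return hit;      // warm: served from memory
    const value = this.store.load(key);     // cold: reload from storage
    if (value !== undefined) this.cache.set(key, value);
    return value;
  }
}
```

Nothing in this path needs invalidation logic for eviction: correctness comes from SQLite, and the cache only saves repeated reads within an instance's lifetime.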
Durable Objects provide single-threaded execution for each instance:
- All requests to the same Durable Object are serialized
- No need for locks or mutexes within a Durable Object
- Requests to different Durable Objects run in parallel
This simplifies implementation significantly:
```typescript
async receivePack(data: Uint8Array) {
  // No locking needed—this is the only request running
  // for this repository at this moment
  const { commands, packfile } = parseReceivePackRequest(data);

  // Write packfile
  await this.isoGitFs.promises.writeFile(packFilePath, packfile);

  // Index it
  await this.git.indexPack(packFilePath);

  // Update refs
  const results = await this.git.applyRefUpdates(commands, atomic);

  return buildReportStatus(results, true);
}
```
You can write straightforward sequential code without worrying about race conditions.
While execution within a Durable Object is single-threaded, individual requests complete quickly, so for most repositories a single instance provides sufficient throughput.
Cloudflare imposes some limits on Durable Objects:

| Resource | Limit |
| --- | --- |
| Storage | 5 GB per Durable Object |
| CPU time | 30 seconds per request |
| Memory | 128 MB per instance |
| Request size | 100 MB |
Gitflare is designed to work within these limits:
- **Storage**: 5 GB is sufficient for most repositories. Very large repos might need multiple Durable Objects (a future enhancement)
- **CPU time**: Packfile generation is optimized to complete quickly
- **Memory**: DOFS streams large files from SQLite rather than loading them into memory
- **Request size**: Large pushes might need to be split (handled automatically by the Git client)
For repositories approaching the 5 GB limit, you can implement a strategy that offloads old packfiles to R2 (Cloudflare’s object storage) while keeping recent objects in the Durable Object.
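One way such a tiering policy might look, as a hypothetical sketch: pick the least-recently-accessed packfiles to move out once local storage crosses a budget. The `PackfileInfo` shape and the policy itself are illustrative assumptions, not Gitflare's implementation.

```typescript
// Illustrative metadata about a packfile stored in the Durable Object.
interface PackfileInfo {
  name: string;
  sizeBytes: number;
  lastAccessed: number; // epoch milliseconds
}

// Choose which packfiles to offload so local usage drops under the budget,
// evicting least-recently-accessed packfiles first.
function selectPackfilesToOffload(
  packs: PackfileInfo[],
  maxLocalBytes: number
): PackfileInfo[] {
  const total = packs.reduce((sum, p) => sum + p.sizeBytes, 0);
  if (total <= maxLocalBytes) return [];

  const byAge = [...packs].sort((a, b) => a.lastAccessed - b.lastAccessed);
  const offload: PackfileInfo[] = [];
  let remaining = total;
  for (const pack of byAge) {
    if (remaining <= maxLocalBytes) break;
    offload.push(pack);
    remaining -= pack.sizeBytes;
  }
  return offload;
}
```

The selected packfiles would then be copied to an R2 bucket binding (e.g. `env.BUCKET.put(...)`) and deleted locally, with reads falling back to R2 on a local miss.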