How It Works
This page provides a detailed walkthrough of rs-tunnel’s internal mechanisms, from tunnel creation to automatic cleanup of stale sessions.
Tunnel Creation Flow
When a user runs rs-tunnel up --port 3000 --url my-app, here’s the complete flow:
Step-by-Step Process
CLI validates authentication
// apps/cli/src/commands/up.ts
const tokens = await loadTokens();
if (!tokens?.accessToken) {
  console.error('Not logged in. Run: rs-tunnel login');
  process.exit(1);
}
If the access token has expired, the CLI automatically refreshes it using the refresh token.
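As a concrete sketch of that check, an expiry test on the JWT's standard `exp` claim might look like the following (the helper name and clock-skew allowance are illustrative, not taken from the rs-tunnel codebase):

```typescript
// Hypothetical sketch of the expiry check that precedes a refresh. Only the
// standard JWT `exp` claim is assumed; names here are illustrative.
export function isAccessTokenExpired(accessToken: string, skewSec = 30): boolean {
  const [, payloadB64] = accessToken.split('.');
  const payload = JSON.parse(
    Buffer.from(payloadB64, 'base64url').toString('utf8'),
  ) as { exp: number };
  // Treat tokens expiring within `skewSec` as already expired, so a refresh
  // happens before an in-flight request can fail with a 401.
  return payload.exp * 1000 <= Date.now() + skewSec * 1000;
}
```

A token well inside its lifetime passes the check; one past (or within 30 seconds of) its `exp` triggers a refresh.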
CLI sends tunnel creation request
POST /v1/tunnels
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Content-Type: application/json

{
  "port": 3000,
  "requestedSlug": "my-app"
}
The request is validated against tunnelCreateRequestSchema from @ripeseed/shared.
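The schema itself is not reproduced here; as a rough hand-rolled equivalent of the checks it implies (field names come from the request body above, everything else is illustrative):

```typescript
// Illustrative stand-in for tunnelCreateRequestSchema; the real schema lives
// in @ripeseed/shared and is not copied here.
interface TunnelCreateRequest {
  port: number;
  requestedSlug?: string;
}

export function parseTunnelCreateRequest(body: unknown): TunnelCreateRequest {
  const b = body as Partial<TunnelCreateRequest> | null;
  if (typeof b?.port !== 'number' || !Number.isInteger(b.port)) {
    throw new Error('port must be an integer');
  }
  if (b.requestedSlug !== undefined && typeof b.requestedSlug !== 'string') {
    throw new Error('requestedSlug must be a string');
  }
  return { port: b.port, requestedSlug: b.requestedSlug };
}
```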
API validates port number
// apps/api/src/services/tunnel.service.ts:31-33
if (input.port < 1 || input.port > 65535) {
  throw new AppError(400, 'INVALID_PORT', 'Port must be between 1 and 65535.');
}
API enforces tunnel quota
// apps/api/src/services/tunnel.service.ts:35-36
const activeCount = await this.repository.countActiveTunnels(input.userId);
assertWithinTunnelLimit(activeCount, this.env.MAX_ACTIVE_TUNNELS);

If the user has 5 active tunnels and MAX_ACTIVE_TUNNELS=5, this throws:

AppError(429, 'TUNNEL_QUOTA_EXCEEDED',
  'Cannot create tunnel: 5/5 active tunnels. Stop an existing tunnel first.')
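Based on that error, assertWithinTunnelLimit plausibly reduces to something like the sketch below (the AppError shape mirrors errors shown elsewhere in this doc; the exact implementation may differ):

```typescript
// Hedged reconstruction of assertWithinTunnelLimit, inferred from the error
// shown above; not copied from the codebase.
class AppError extends Error {
  constructor(public status: number, public code: string, message: string) {
    super(message);
  }
}

export function assertWithinTunnelLimit(activeCount: number, maxActive: number): void {
  if (activeCount >= maxActive) {
    throw new AppError(
      429,
      'TUNNEL_QUOTA_EXCEEDED',
      `Cannot create tunnel: ${activeCount}/${maxActive} active tunnels. Stop an existing tunnel first.`,
    );
  }
}
```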
API reserves a slug
// apps/api/src/services/tunnel.service.ts:232-252
private async reserveSlug(requestedSlug?: string): Promise<string> {
  if (requestedSlug) {
    const normalized = validateRequestedSlug(requestedSlug);
    const existing = await this.repository.findActiveTunnelBySlug(normalized);
    if (existing) {
      throw new AppError(409, 'TUNNEL_SLUG_CONFLICT',
        'Requested URL slug is already in use.');
    }
    return normalized;
  }
  // Generate a random slug, retrying on collisions
  for (let attempt = 0; attempt < 10; attempt += 1) {
    const candidate = generateRandomSlug();
    const existing = await this.repository.findActiveTunnelBySlug(candidate);
    if (!existing) return candidate;
  }
  throw new AppError(503, 'SLUG_EXHAUSTED',
    'Unable to reserve a unique tunnel slug.');
}
Slug validation rules (from apps/api/src/utils/slug.ts):
Must match /^[a-z0-9]([a-z0-9-]{0,30}[a-z0-9])?$/
1-32 characters
Lowercase alphanumeric + hyphens
Start and end with alphanumeric
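These rules can be exercised with a reconstructed version of the validator (inferred from the rules above; the real helper in apps/api/src/utils/slug.ts may normalize differently):

```typescript
// Sketch reconstructed from the documented slug rules, not copied from
// apps/api/src/utils/slug.ts.
const SLUG_PATTERN = /^[a-z0-9]([a-z0-9-]{0,30}[a-z0-9])?$/;

export function validateRequestedSlug(raw: string): string {
  const normalized = raw.trim().toLowerCase();
  if (!SLUG_PATTERN.test(normalized)) {
    throw new Error(
      'Slug must be 1-32 characters, lowercase alphanumeric or hyphens, starting and ending with an alphanumeric.',
    );
  }
  return normalized;
}
```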
API creates database tunnel record
// apps/api/src/services/tunnel.service.ts:41-46
const dbTunnel = await this.repository.createTunnel({
  userId: input.userId,
  slug,
  hostname: `${slug}.${this.env.CLOUDFLARE_BASE_DOMAIN}`,
  requestedPort: input.port,
});

Initial tunnel state:

{
  id: 'uuid',
  userId: 'user-uuid',
  slug: 'my-app',
  hostname: 'my-app.tunnel.company.com',
  requestedPort: 3000,
  status: 'creating',   // Initial state
  cfTunnelId: null,
  cfDnsRecordId: null,
  createdAt: '2024-03-05T12:00:00Z'
}
API creates Cloudflare tunnel
// apps/api/src/services/tunnel.service.ts:52-54
const tunnelName = `rs-${slug}-${Date.now()}`;
const cfTunnel = await this.cloudflareService.createTunnel(tunnelName);
cfTunnelId = cfTunnel.id;

Cloudflare API call:

POST https://api.cloudflare.com/client/v4/accounts/{account_id}/cfd_tunnel
Authorization: Bearer <CLOUDFLARE_API_TOKEN>

{
  "name": "rs-my-app-1709640000000",
  "tunnel_secret": "<base64-encoded-secret>"
}
API configures tunnel ingress rules
// apps/api/src/services/tunnel.service.ts:56-60
await this.cloudflareService.configureTunnel({
  tunnelId: cfTunnel.id,
  hostname,
  port: input.port,
});

Cloudflare configuration:

ingress:
  - hostname: my-app.tunnel.company.com
    service: http://localhost:3000
  - service: http_status:404
API creates DNS CNAME record
// apps/api/src/services/tunnel.service.ts:62
cfDnsRecordId = await this.cloudflareService.createDnsRecord(hostname, cfTunnel.id);

Cloudflare API call:

POST https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records

{
  "type": "CNAME",
  "name": "my-app",
  "content": "{tunnel-id}.cfargotunnel.com",
  "proxied": true
}
DNS changes propagate globally within seconds due to Cloudflare’s edge network.
API retrieves tunnel token
// apps/api/src/services/tunnel.service.ts:63
const cloudflaredToken = await this.cloudflareService.getTunnelToken(cfTunnel.id);
This token is tunnel-specific and allows cloudflared to connect.
API activates tunnel and creates lease
// apps/api/src/services/tunnel.service.ts:65-76
await this.repository.activateTunnel({
  tunnelId: dbTunnel.id,
  cfTunnelId: cfTunnel.id,
  cfDnsRecordId,
});

const now = new Date();
await this.repository.upsertLease(
  dbTunnel.id,
  now,
  createLeaseExpiry(now, this.env.LEASE_TIMEOUT_SEC),
);

Updated tunnel state:

{
  status: 'active',     // Changed from 'creating'
  cfTunnelId: 'uuid',
  cfDnsRecordId: 'uuid'
}

Lease record:

{
  tunnelId: 'uuid',
  lastHeartbeatAt: '2024-03-05T12:00:00Z',
  expiresAt: '2024-03-05T12:01:00Z'   // +60 seconds
}
API logs audit event
// apps/api/src/services/tunnel.service.ts:78-86
await this.repository.createAuditLog({
  userId: input.userId,
  action: 'tunnel.created',
  metadata: {
    tunnelId: dbTunnel.id,
    slug,
    hostname,
  },
});
API returns tunnel details
// apps/api/src/services/tunnel.service.ts:88-93
return {
  tunnelId: dbTunnel.id,
  hostname,
  cloudflaredToken,
  heartbeatIntervalSec: 20,
};
Response validated against tunnelCreateResponseSchema.
CLI spawns cloudflared process
// apps/cli/src/commands/up.ts
const cloudflared = spawn('cloudflared', [
  'tunnel',
  'run',
  '--token', cloudflaredToken,
  '--url', 'http://localhost:3000',
]);
cloudflared establishes an outbound HTTPS connection to Cloudflare's edge; no inbound ports are needed.
CLI starts heartbeat loop
const heartbeatInterval = setInterval(async () => {
  try {
    await fetch(`${apiUrl}/v1/tunnels/${tunnelId}/heartbeat`, {
      method: 'POST',
      headers: { Authorization: `Bearer ${accessToken}` }
    });
  } catch (error) {
    console.error('Heartbeat failed:', error);
  }
}, 20000);   // Every 20 seconds
CLI renders dashboard
Terminal output:

rs-tunnel

Account:    [email protected]
Version:    0.1.2
Region:     SFO
Latency:    12ms

Forwarding: https://my-app.tunnel.company.com -> http://localhost:3000

Connections   ttl   opn   rt1    rt5    p50    p90
              156   2     45ms   38ms   52ms   89ms

HTTP Requests
12:34:56 GET  /           200 OK       23ms
12:35:01 POST /api/users  201 Created  145ms
Rollback on Failure
If any step fails after DNS/tunnel creation, the API performs cleanup:
// apps/api/src/services/tunnel.service.ts:94-109
catch (error) {
  await this.repository.markTunnelFailed(
    dbTunnel.id,
    error instanceof Error ? error.message : 'Unknown error'
  );
  if (cfDnsRecordId) {
    await this.cloudflareService.deleteDnsRecord(cfDnsRecordId)
      .catch((cleanupError) => {
        logger.error('Failed DNS rollback', cleanupError);
      });
  }
  if (cfTunnelId) {
    await this.cloudflareService.deleteTunnel(cfTunnelId)
      .catch((cleanupError) => {
        logger.error('Failed tunnel rollback', cleanupError);
      });
  }
  throw error;
}
If rollback cleanup itself fails, the tunnel record remains in the failed state, but orphaned Cloudflare resources may need manual cleanup.
Heartbeat Mechanism
The heartbeat system ensures tunnels are automatically cleaned up when clients disconnect.
Heartbeat Loop (CLI)
// Simplified from apps/cli/src/commands/up.ts
const HEARTBEAT_INTERVAL = 20000;   // 20 seconds

const heartbeat = async () => {
  const response = await fetch(
    `${apiUrl}/v1/tunnels/${tunnelId}/heartbeat`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${accessToken}`,
        'Content-Type': 'application/json'
      }
    }
  );
  if (!response.ok) {
    throw new Error(`Heartbeat failed: ${response.status}`);
  }
  const data = await response.json();
  return data.expiresAt;   // When this lease expires
};

const intervalHandle = setInterval(() => {
  heartbeat().catch(error => {
    console.error('Heartbeat error:', error);
    // CLI continues running, will retry on next interval
  });
}, HEARTBEAT_INTERVAL);

// Clean up on exit
process.on('SIGINT', () => {
  clearInterval(intervalHandle);
  // Also call stop tunnel API
});
Heartbeat Handler (API)
// apps/api/src/services/tunnel.service.ts:139-153
async heartbeat(input: {
  userId: string;
  tunnelIdentifier: string;
}): Promise<{ expiresAt: string }> {
  const tunnel = await this.repository.findTunnelForUser(
    input.userId,
    input.tunnelIdentifier
  );
  if (!tunnel || !ACTIVE_STATES.has(tunnel.status)) {
    throw new AppError(404, 'TUNNEL_NOT_FOUND',
      'Tunnel was not found for this user.');
  }
  const now = new Date();
  const expiresAt = createLeaseExpiry(now, this.env.LEASE_TIMEOUT_SEC);
  await this.repository.upsertLease(tunnel.id, now, expiresAt);
  return { expiresAt: expiresAt.toISOString() };
}
Lease Expiry Calculation
// apps/api/src/utils/lease.ts
export function createLeaseExpiry(
  lastHeartbeat: Date,
  timeoutSeconds: number
): Date {
  return new Date(lastHeartbeat.getTime() + timeoutSeconds * 1000);
}
With default LEASE_TIMEOUT_SEC=60:
Heartbeat at 12:00:00 → Lease expires at 12:01:00
Heartbeat at 12:00:20 → Lease expires at 12:01:20
If no heartbeat by 12:01:00, tunnel is stale
The 20-second heartbeat interval with 60-second timeout provides a 3x safety margin for network issues.
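The margin can be made concrete: with a 20-second interval and a 60-second lease, the CLI can miss roughly two consecutive heartbeats before the lease lapses. A small helper (illustrative, not from the codebase):

```typescript
// Illustrative helper (not from the codebase): how many consecutive
// heartbeats can fail before the lease expires.
export function maxMissableHeartbeats(timeoutSec: number, intervalSec: number): number {
  // The lease expires timeoutSec after the last successful heartbeat, so
  // floor(timeout/interval) send attempts fit inside one lease window; the
  // last of those must succeed to renew it.
  return Math.floor(timeoutSec / intervalSec) - 1;
}
```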
DNS Lifecycle Management
DNS records are tightly coupled to tunnel lifecycle:
DNS Record Creation
Done during tunnel creation (see Step 8 above):
// Cloudflare API call
POST /zones/{zone_id}/dns_records

{
  "type": "CNAME",
  "name": "my-app",
  "content": "{tunnel-id}.cfargotunnel.com",
  "proxied": true,
  "ttl": 1   // Automatic when proxied=true
}
Why CNAME?
Points to Cloudflare’s tunnel infrastructure
proxied: true enables SSL, DDoS protection, caching
TTL is managed automatically by Cloudflare
DNS Record Deletion
Triggered during tunnel stop (manual or cleanup):
// apps/api/src/services/tunnel.service.ts:181-183
if (tunnel.cfDnsRecordId) {
  await this.cloudflareService.deleteDnsRecord(tunnel.cfDnsRecordId);
}

// Cloudflare API call
DELETE /zones/{zone_id}/dns_records/{record_id}
Propagation:
Removal takes effect immediately at Cloudflare's edge
DNS resolvers may cache the record for up to the TTL (typically 1-5 minutes)
Users hitting the hostname after deletion receive a Cloudflare 404/502
DNS Conflict Prevention
Unique index ensures no duplicate active hostnames:
-- apps/api/src/db/schema.ts:85-87
CREATE UNIQUE INDEX tunnels_hostname_idx
  ON tunnels(hostname)
  WHERE status != 'stopped';
Attempting to create my-app while it is already active yields:

AppError(409, 'TUNNEL_SLUG_CONFLICT', 'Requested URL slug is already in use.')
After a tunnel is stopped, the slug becomes available immediately for new tunnels.
Cleanup & Reaper Worker
The reaper worker runs as a background process within the API runtime.
Reaper Initialization
// apps/api/src/index.ts (simplified)
import { ReaperWorker } from './workers/reaper.worker.js';
import { CleanupService } from './services/cleanup.service.js';

const cleanupService = new CleanupService(repository, tunnelService);
const reaper = new ReaperWorker(cleanupService, env.REAPER_INTERVAL_SEC);
reaper.start();

process.on('SIGTERM', () => {
  reaper.stop();
  // Graceful shutdown...
});
Reaper Tick Loop
// apps/api/src/workers/reaper.worker.ts:12-24
start(): void {
  if (this.intervalHandle) return;
  this.intervalHandle = setInterval(() => {
    this.tick().catch((error) => {
      logger.error('Reaper tick failed', error);
    });
  }, this.intervalSec * 1000);
  void this.tick();   // Run immediately on start
}

private async tick(): Promise<void> {
  await this.cleanupService.sweepStaleLeases();
  await this.cleanupService.processQueuedJobs();
}
With REAPER_INTERVAL_SEC=30, the reaper runs:
Immediately on API startup
Every 30 seconds thereafter
Stale Lease Sweep
// apps/api/src/services/cleanup.service.ts:12-17
async sweepStaleLeases(): Promise<void> {
  const staleTunnelIds = await this.repository.findStaleTunnelIds(new Date());
  await Promise.all(
    staleTunnelIds.map((tunnelId) =>
      this.repository.enqueueCleanupJob(tunnelId, 'stale_lease')
    )
  );
}

-- apps/api/src/db/repository.ts (simplified query)
SELECT t.id
FROM tunnels t
INNER JOIN tunnel_leases l ON t.id = l.tunnel_id
WHERE t.status IN ('active', 'stopping')
  AND l.expires_at < NOW()
Stale tunnels are queued, not stopped immediately. This allows batching and retry logic.
Cleanup Job Processing
// apps/api/src/services/cleanup.service.ts:19-48
async processQueuedJobs(): Promise<void> {
  const now = new Date();
  const jobs = await this.repository.claimDueJobs(now, 25);
  for (const job of jobs) {
    try {
      await this.tunnelService.stopTunnelById(
        job.tunnelId,
        `cleanup: ${job.reason}`
      );
      await this.repository.markCleanupJobDone(job.id);
    } catch (error) {
      const attemptCount = job.attemptCount + 1;
      const backoffSeconds = calculateCleanupBackoffSeconds(attemptCount);
      const nextAttemptAt = addSeconds(now, backoffSeconds);
      const message = error instanceof Error
        ? error.message
        : 'Unknown cleanup failure';
      await this.repository.markCleanupJobFailed({
        jobId: job.id,
        attemptCount,
        nextAttemptAt,
        message,
      });
      logger.error('Cleanup job failed', {
        jobId: job.id,
        tunnelId: job.tunnelId,
        attemptCount,
        message,
      });
    }
  }
}
Job claiming:

SELECT * FROM cleanup_jobs
WHERE status = 'queued'
  AND next_attempt_at <= NOW()
ORDER BY next_attempt_at ASC
LIMIT 25
FOR UPDATE SKIP LOCKED
FOR UPDATE SKIP LOCKED prevents multiple reaper instances from claiming the same jobs.
Exponential Backoff
// apps/api/src/utils/time.ts
export function calculateCleanupBackoffSeconds(attemptCount: number): number {
  return Math.min(30 * Math.pow(2, attemptCount - 1), 3600);
}
Backoff schedule:
Attempt 1: 30 seconds
Attempt 2: 60 seconds
Attempt 3: 120 seconds
Attempt 4: 240 seconds
Attempt 5: 480 seconds
Attempt 6: 960 seconds
Attempt 7: 1920 seconds
Attempt 8+: 3600 seconds (1-hour cap)
Exponential backoff prevents hammering the Cloudflare API when tunnels have active connections or hit transient errors.
Handling Active Connections
Cloudflare returns 409 Conflict when deleting tunnels with active connections:
// apps/api/src/services/tunnel.service.ts:185-202
if (tunnel.cfTunnelId) {
  const result = await this.cloudflareService.deleteTunnelWithRetry(
    tunnel.cfTunnelId
  );
  if (!result.success) {
    const cleanupReason = result.reason === 'active_connections'
      ? 'active_connections'
      : 'deletion_failed';
    await this.repository.enqueueCleanupJob(tunnel.id, cleanupReason);
    if (result.reason === 'active_connections') {
      logger.info('Tunnel has active connections, will retry', {
        tunnelId: tunnel.id,
        cfTunnelId: tunnel.cfTunnelId,
      });
      throw new AppError(
        503,
        'TUNNEL_STOP_PENDING_ACTIVE_CONNECTIONS',
        'Tunnel has active connections and will be stopped once they drain.'
      );
    }
    // ... handle other errors
  }
}
The job will be retried after 30s, 60s, etc., until connections drain.
Cleanup Job States
type CleanupJobStatus =
  | 'queued'       // Waiting for next attempt
  | 'processing'   // Currently being handled
  | 'done'         // Successfully completed
  | 'failed';      // Errored, will retry

Jobs transition:

queued → processing (when claimed)
processing → done (on success)
processing → failed → queued (on error; increments attemptCount)
Cleanup jobs are never deleted. They remain in done or failed state for audit purposes.
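The transition table above can be encoded as a small guard, useful for example as a sanity check in repository code (illustrative, not from the codebase):

```typescript
// Illustrative guard (not from the codebase) encoding the legal cleanup-job
// transitions described above.
type CleanupJobStatus = 'queued' | 'processing' | 'done' | 'failed';

const LEGAL_TRANSITIONS: Record<CleanupJobStatus, CleanupJobStatus[]> = {
  queued: ['processing'],
  processing: ['done', 'failed'],
  failed: ['queued'],   // Failed jobs are re-queued for retry
  done: [],             // Terminal: kept for audit, never deleted
};

export function canTransition(from: CleanupJobStatus, to: CleanupJobStatus): boolean {
  return LEGAL_TRANSITIONS[from].includes(to);
}
```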
Complete Stop Flow
When a user runs rs-tunnel stop my-app or presses Ctrl+C:
CLI sends stop request
DELETE /v1/tunnels/:tunnelIdentifier
Authorization: Bearer <jwt>
API validates ownership
const tunnel = await this.repository.findTunnelForUser(
  userId,
  tunnelIdentifier   // Can be an ID or a hostname
);
if (!tunnel) {
  throw new AppError(404, 'TUNNEL_NOT_FOUND');
}
API marks tunnel as stopping
UPDATE tunnels
SET status = 'stopping', updated_at = NOW()
WHERE id = $1
API deletes DNS record
if (tunnel.cfDnsRecordId) {
  await this.cloudflareService.deleteDnsRecord(tunnel.cfDnsRecordId);
}
API deletes Cloudflare tunnel
if (tunnel.cfTunnelId) {
  const result = await this.cloudflareService.deleteTunnelWithRetry(
    tunnel.cfTunnelId
  );
  if (!result.success) {
    // Enqueue a cleanup job for retry
    await this.repository.enqueueCleanupJob(tunnel.id, result.reason);
    throw new AppError(503, 'TUNNEL_STOP_PENDING_ACTIVE_CONNECTIONS');
  }
}
API deletes lease and marks stopped
await this.repository.deleteLease(tunnel.id);
await this.repository.markTunnelStopped(tunnel.id);

Final tunnel state:

{
  status: 'stopped',
  stoppedAt: '2024-03-05T12:30:00Z'
}
API logs audit event
await this.repository.createAuditLog({
  userId: tunnel.userId,
  action: 'tunnel.stopped',
  metadata: {
    tunnelId: tunnel.id,
    reason: 'user_requested'
  },
});
CLI terminates cloudflared
cloudflaredProcess.kill('SIGTERM');
clearInterval(heartbeatInterval);
If the CLI crashes without calling stop, the tunnel becomes stale after 60 seconds and the reaper cleans it up.
Edge Cases & Safeguards
1. Network Interruption During Creation
Problem: DNS record created, but tunnel creation fails
Safeguard: Rollback logic in the try/catch block deletes both the DNS record and the tunnel

catch (error) {
  await markTunnelFailed();
  if (cfDnsRecordId) await deleteDnsRecord(cfDnsRecordId);
  if (cfTunnelId) await deleteTunnel(cfTunnelId);
  throw error;
}
2. CLI Crashes Without Stopping
Problem : Tunnel keeps running in Cloudflare but CLI is gone
Safeguard : Heartbeat stops → lease expires → reaper enqueues cleanup → tunnel deleted
Timeline:
12:00:00 - CLI crashes
12:00:20 - Missed heartbeat
12:01:00 - Lease expires
12:01:30 - Reaper sweep detects stale tunnel
12:01:30 - Cleanup job queued
12:01:30 - Job processed, tunnel deleted
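The timeline above is a favorable case; in the worst case (crash right after a successful heartbeat, lease expiring just after a reaper tick) detection takes the full lease timeout plus one reaper interval. A back-of-envelope helper (illustrative, not from the codebase):

```typescript
// Illustrative upper bound on the delay between a CLI crash and the reaper
// queueing cleanup, using the defaults described in this doc.
export function worstCaseStaleDetectionSec(
  leaseTimeoutSec: number,
  reaperIntervalSec: number,
): number {
  // The lease must first run out, then the next reaper tick must fire.
  return leaseTimeoutSec + reaperIntervalSec;
}
```

With LEASE_TIMEOUT_SEC=60 and REAPER_INTERVAL_SEC=30, cleanup is queued at most about 90 seconds after the crash.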
3. Cloudflare API Timeout
Problem : DNS deletion succeeds but tunnel deletion times out
Safeguard : Cleanup job enqueued with exponential backoff
if (!result.success) {
  await enqueueCleanupJob(tunnel.id, 'deletion_failed');
  throw new AppError(502, 'TUNNEL_CLOUDFLARE_DELETION_FAILED');
}
4. Duplicate Slug Collision
Problem: Two users try to reserve the same slug simultaneously
Safeguard: Database unique constraint

CREATE UNIQUE INDEX tunnels_hostname_idx
  ON tunnels(hostname)
  WHERE status != 'stopped';

The second request fails with:

AppError(409, 'TUNNEL_SLUG_CONFLICT', 'Requested URL slug is already in use.')
5. Reaper Process Dies
Problem : API crashes during reaper operation
Safeguard :
Cleanup jobs remain in queued state with next_attempt_at timestamp
Next reaper startup immediately processes overdue jobs
FOR UPDATE SKIP LOCKED prevents duplicate processing if multiple API instances start
6. Quota Race Condition
Problem : User creates 6th tunnel while at 5/5 limit
Safeguard: Atomic count query in a transaction

const activeCount = await repository.countActiveTunnels(userId);
assertWithinTunnelLimit(activeCount, MAX_ACTIVE_TUNNELS);
// If another creation slips in between the count and the INSERT, the INSERT
// will still succeed, but the next request will be blocked.

Not perfect, but acceptable since the limit is enforced on every request.
For strict enforcement, a database-level constraint could be used: CREATE TRIGGER enforce_tunnel_quota ...
This is currently deferred to keep the implementation simple.
Database Indexes
Hot query paths have indexes:
-- Tunnel ownership lookup
CREATE INDEX tunnels_user_status_idx ON tunnels(user_id, status);

-- Slug conflict check
CREATE INDEX tunnels_slug_status_idx ON tunnels(slug, status);

-- Stale lease sweep
CREATE INDEX tunnel_leases_expires_at_idx ON tunnel_leases(expires_at);

-- Cleanup job claiming
CREATE INDEX cleanup_jobs_status_next_attempt_idx
  ON cleanup_jobs(status, next_attempt_at);
API Endpoint Latencies (typical)
POST /v1/tunnels - 500-1500ms (Cloudflare API calls)
POST /v1/tunnels/:id/heartbeat - 5-20ms (simple DB update)
DELETE /v1/tunnels/:id - 300-1000ms (Cloudflare API calls)
GET /v1/tunnels - 10-50ms (DB query only)
Reaper Worker Load
Stale lease sweep: O(active_tunnels) - typically under 10ms
Cleanup job processing: O(jobs) - capped at 25 jobs per tick
With 100 active tunnels and 10% stale rate: ~10 jobs/minute
For deployments with >1000 concurrent tunnels, consider increasing REAPER_INTERVAL_SEC to reduce DB load.
Observability
Audit Logs
Every tunnel creation and stop is logged:
{
  userId: 'uuid',
  action: 'tunnel.created',
  metadata: {
    tunnelId: 'uuid',
    slug: 'my-app',
    hostname: 'my-app.tunnel.company.com'
  },
  createdAt: '2024-03-05T12:00:00Z'
}
Structured Logging
// apps/api/src/lib/logger.ts
logger.info('Tunnel created', {
  tunnelId,
  userId,
  hostname,
  duration: Date.now() - startTime
});

logger.error('Cleanup job failed', {
  jobId,
  tunnelId,
  attemptCount,
  message
});
Metrics Endpoints (Future Enhancement)
Proposed Prometheus-compatible metrics:
tunnels_active_total{user="[email protected]"}
tunnels_created_total
tunnels_stopped_total{reason="user_requested|stale_lease|cleanup"}
cleanup_jobs_pending
cleanup_jobs_failed_total{reason="active_connections|deletion_failed"}
Next Steps
Installation Set up rs-tunnel for local development or production
Configuration Environment variables and behavior tuning
API Reference Complete REST API documentation
Troubleshooting Common issues and debugging techniques