Overview
rs-tunnel provides real-time monitoring through heartbeats, telemetry ingestion, and live metrics tracking. The system monitors tunnel health through a lease-based mechanism that automatically detects and cleans up stale tunnels.
Heartbeat Mechanism
The CLI sends periodic heartbeats to maintain an active lease for each tunnel.
Heartbeat Interval
HEARTBEAT_INTERVAL_SEC: z.coerce.number().int().positive().default(20)
By default, the CLI sends a heartbeat request to the API every 20 seconds to indicate the tunnel is still active.
The heartbeat interval is returned when creating a tunnel and should be used by the CLI to schedule periodic lease renewals.
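A minimal CLI-side renewal loop might look like the sketch below. It assumes a hypothetical `sendHeartbeat` helper wrapping the heartbeat endpoint and the `heartbeatIntervalSec` value returned at tunnel creation; neither name is from the source.

```typescript
// Sketch: schedule periodic lease renewals from the interval returned
// when the tunnel was created. `sendHeartbeat` is a hypothetical helper.
type HeartbeatFn = (tunnelId: string) => Promise<{ expiresAt: string }>;

function startHeartbeatLoop(
  tunnelId: string,
  heartbeatIntervalSec: number,
  sendHeartbeat: HeartbeatFn,
): () => void {
  const timer = setInterval(() => {
    sendHeartbeat(tunnelId).catch((err) => {
      // A single failed heartbeat is not fatal; the lease only expires
      // after LEASE_TIMEOUT_SEC passes with no successful renewal.
      console.error('heartbeat failed', err);
    });
  }, heartbeatIntervalSec * 1000);
  // Caller invokes the returned function to stop renewing on shutdown.
  return () => clearInterval(timer);
}
```

The returned stop function should be called when the tunnel is torn down so the lease is allowed to lapse.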
Heartbeat Endpoint
From apps/api/src/services/tunnel.service.ts:139-153:
async heartbeat(input: { userId: string; tunnelIdentifier: string }): Promise<{ expiresAt: string }> {
  const tunnel = await this.repository.findTunnelForUser(input.userId, input.tunnelIdentifier);
  if (!tunnel || !ACTIVE_STATES.has(tunnel.status)) {
    throw new AppError(404, 'TUNNEL_NOT_FOUND', 'Tunnel was not found for this user.');
  }

  const now = new Date();
  const expiresAt = createLeaseExpiry(now, this.env.LEASE_TIMEOUT_SEC);
  await this.repository.upsertLease(tunnel.id, now, expiresAt);

  return {
    expiresAt: expiresAt.toISOString(),
  };
}
Lease Timeout
LEASE_TIMEOUT_SEC: z.coerce.number().int().positive().default(60)
Tunnel leases expire after 60 seconds of inactivity (default). If the CLI fails to send a heartbeat within this window, the tunnel is marked as stale and will be cleaned up by the reaper worker.
If a tunnel lease expires, the reaper will automatically delete the tunnel and its DNS record. Ensure heartbeats are sent reliably.
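The reaper's staleness test reduces to a timestamp comparison. The reaper's actual query is not shown in this document; a standalone sketch of the check:

```typescript
// A lease is stale once its expiry has passed; the reaper deletes the
// tunnel and its DNS record for any stale lease it finds.
function isLeaseStale(expiresAt: Date, now: Date = new Date()): boolean {
  return expiresAt.getTime() <= now.getTime();
}
```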
Lease Creation
From apps/api/src/utils/lease.ts:3-5:
export function createLeaseExpiry(now: Date, timeoutSec: number): Date {
  return addSeconds(now, timeoutSec);
}
Leases are automatically created when:
- A tunnel is first created
- A heartbeat is received (lease is renewed)
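For example, with the default LEASE_TIMEOUT_SEC of 60, a heartbeat received at 12:00:00 yields a lease expiring at 12:01:00. The inline version below is a self-contained equivalent of `createLeaseExpiry` for illustration (the real one delegates to a date helper):

```typescript
// Self-contained equivalent of createLeaseExpiry: now + timeoutSec seconds.
function createLeaseExpiry(now: Date, timeoutSec: number): Date {
  return new Date(now.getTime() + timeoutSec * 1000);
}

const heartbeatAt = new Date('2024-05-01T12:00:00.000Z');
const expiry = createLeaseExpiry(heartbeatAt, 60);
// expiry.toISOString() === '2024-05-01T12:01:00.000Z'
```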
Checking Tunnel Status
You can check tunnel status and lease information using the list tunnels endpoint.
List Active Tunnels
From apps/api/src/services/tunnel.service.ts:113-137:
async listTunnels(userId: string, options?: { includeInactive?: boolean }): Promise<TunnelSummary[]> {
  const includeInactive = options?.includeInactive ?? false;
  const rows = await this.repository.listUserTunnelsWithLease(userId, { includeInactive });

  return rows.map(({ tunnel, lease }) => {
    const leaseSummary: TunnelLeaseSummary = lease
      ? {
          lastHeartbeatAt: lease.lastHeartbeatAt.toISOString(),
          expiresAt: lease.expiresAt.toISOString(),
        }
      : null;

    return {
      id: tunnel.id,
      hostname: tunnel.hostname,
      slug: tunnel.slug,
      status: tunnel.status,
      requestedPort: tunnel.requestedPort,
      createdAt: tunnel.createdAt.toISOString(),
      lease: leaseSummary,
      stoppedAt: tunnel.stoppedAt ? tunnel.stoppedAt.toISOString() : null,
      lastError: tunnel.lastError ?? null,
    };
  });
}
The response includes lease information:
- lastHeartbeatAt: Timestamp of the last heartbeat received
- expiresAt: When the lease expires if no further heartbeat is received
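An illustrative entry from the list response, with field names following the `TunnelSummary` shape above; all values are fabricated for illustration only:

```typescript
// Hypothetical example of one listTunnels entry (values are illustrative).
const exampleSummary = {
  id: 'tun_123',
  hostname: 'demo.example.com',
  slug: 'demo',
  status: 'active',
  requestedPort: 3000,
  createdAt: '2024-05-01T11:58:00.000Z',
  lease: {
    lastHeartbeatAt: '2024-05-01T12:00:00.000Z',
    expiresAt: '2024-05-01T12:01:00.000Z', // 60 s after the last heartbeat
  },
  stoppedAt: null,
  lastError: null,
};
```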
Telemetry Ingestion
The CLI sends telemetry data to the API to track connection metrics and request logs.
Telemetry Rate Limit
From apps/api/src/routes/telemetry.ts:20-24:
config: {
  rateLimit: {
    max: 1200,
    timeWindow: '1 minute',
  },
}
Telemetry ingestion is rate-limited to 1200 requests per minute per user.
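1200 requests per minute averages out to 20 telemetry posts per second per user, so a CLI that batches events into roughly one post per second stays well under the limit. A sketch of a simple batcher; the class and field names are hypothetical, not from the source:

```typescript
// Buffer request events and flush them in one telemetry post, keeping
// the request rate well under the 1200/min limit.
type RequestEvent = { method: string; path: string; durationMs: number };

class TelemetryBatcher {
  private buffer: RequestEvent[] = [];

  constructor(private flushFn: (events: RequestEvent[]) => Promise<void>) {}

  record(event: RequestEvent): void {
    this.buffer.push(event);
  }

  // Called on a timer (e.g. once per second); no-op when nothing buffered.
  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const events = this.buffer;
    this.buffer = [];
    await this.flushFn(events);
  }
}
```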
Ingestion Endpoint
POST /tunnels/:id/telemetry
From apps/api/src/services/telemetry.service.ts:97-156:
async ingestTelemetry(input: {
  userId: string;
  tunnelIdentifier: string;
  region?: string | null;
  metrics: TunnelTelemetryMetrics;
  requests: TunnelTelemetryRequestEvent[];
}): Promise<void> {
  const tunnel = await this.repository.findTunnelForUser(input.userId, input.tunnelIdentifier);
  if (!tunnel) {
    throw new AppError(404, 'TUNNEL_NOT_FOUND', 'Tunnel was not found for this user.');
  }

  const now = new Date();
  const nowMs = now.getTime();
  const region = normalizeRegion(input.region);
  const metrics = normalizeMetrics(input.metrics);

  await this.repository.upsertLiveTelemetry({
    tunnelId: tunnel.id,
    receivedAt: now,
    region,
    ...metrics,
  });

  const lastPointAtMs = this.lastMetricsPointAtMs.get(tunnel.id) ?? 0;
  if (nowMs - lastPointAtMs >= METRICS_DOWNSAMPLE_MS) {
    await this.repository.insertMetricsPoint({
      tunnelId: tunnel.id,
      capturedAt: now,
      ...metrics,
    });
    this.lastMetricsPointAtMs.set(tunnel.id, nowMs);
  }

  // ... request log insertion
}
Metrics are downsampled to one point every 10 seconds to reduce database load.
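The downsampling gate above can be isolated: a point is persisted only when at least METRICS_DOWNSAMPLE_MS has elapsed since the last persisted point for that tunnel. Assuming a value of 10 000 ms, consistent with the 10-second interval described above:

```typescript
const METRICS_DOWNSAMPLE_MS = 10_000; // assumed: one point per 10 s

// True when enough time has passed to persist another metrics point.
function shouldPersistPoint(lastPointAtMs: number, nowMs: number): boolean {
  return nowMs - lastPointAtMs >= METRICS_DOWNSAMPLE_MS;
}
```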
Dashboard Metrics
The CLI displays an ngrok-style dashboard with real-time metrics.
Metrics Definitions
From apps/cli/src/lib/tunnel-stats.ts:11-18:
export type TunnelStatsSnapshot = {
  ttl: number; // Total connections
  opn: number; // Open connections
  rt1Ms: number | null; // Average response time (1 minute)
  rt5Ms: number | null; // Average response time (5 minutes)
  p50Ms: number | null; // 50th percentile latency (5 minutes)
  p90Ms: number | null; // 90th percentile latency (5 minutes)
};
| Metric | Description | Time Window |
| --- | --- | --- |
| ttl | Total connections since tunnel started | All time |
| opn | Currently open connections | Current |
| rt1 | Average response time | Last 1 minute |
| rt5 | Average response time | Last 5 minutes |
| p50 | 50th percentile latency (median) | Last 5 minutes |
| p90 | 90th percentile latency | Last 5 minutes |
Calculating Metrics
From apps/cli/src/lib/tunnel-stats.ts:60-89:
getSnapshot(nowEpochMs: number = Date.now()): TunnelStatsSnapshot {
  this.prune(nowEpochMs);

  const oneMinuteCutoff = nowEpochMs - ONE_MINUTE_MS;
  const durationsIn1m: number[] = [];
  const durationsIn5m: number[] = [];

  for (let i = this.sampleStartIndex; i < this.latencySamples.length; i += 1) {
    const sample = this.latencySamples[i];
    if (!sample) {
      continue;
    }
    durationsIn5m.push(sample.durationMs);
    if (sample.timestampEpochMs >= oneMinuteCutoff) {
      durationsIn1m.push(sample.durationMs);
    }
  }

  const sortedIn5m = [...durationsIn5m].sort((a, b) => a - b);

  return {
    ttl: this.totalConnections,
    opn: this.openConnections,
    rt1Ms: average(durationsIn1m),
    rt5Ms: average(durationsIn5m),
    p50Ms: percentile(sortedIn5m, 0.5),
    p90Ms: percentile(sortedIn5m, 0.9),
  };
}
Metrics are calculated from the local proxy, not from Cloudflare. They represent the latency between the proxy and your local service.
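The `average` and `percentile` helpers referenced in getSnapshot are not shown in the excerpt. A minimal sketch consistent with how they are called, returning null for empty input and taking a pre-sorted array for percentile (the real implementations may differ):

```typescript
// Mean of the samples, or null when there are none.
function average(values: number[]): number | null {
  if (values.length === 0) return null;
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Nearest-rank percentile over an ascending-sorted array, or null when
// empty. `p` is a fraction in [0, 1], e.g. 0.5 for the median.
function percentile(sorted: number[], p: number): number | null {
  if (sorted.length === 0) return null;
  const index = Math.min(sorted.length - 1, Math.max(0, Math.ceil(p * sorted.length) - 1));
  return sorted[index];
}
```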
Data Retention
From apps/api/src/services/telemetry.service.ts:14-16:
const REQUEST_RETENTION_MS = 24 * 60 * 60 * 1000; // 24 hours
const METRICS_RETENTION_MS = 7 * 24 * 60 * 60 * 1000; // 7 days
- Request logs: Retained for 24 hours
- Metrics points: Retained for 7 days
- Live telemetry: Latest snapshot only (replaced on each ingestion)
Old telemetry data is automatically pruned every 10 minutes to keep database size manageable.
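The pruner's cutoff computation follows directly from the retention constants; the actual pruning query is not shown here, but a sketch of the cutoffs it would delete before:

```typescript
const REQUEST_RETENTION_MS = 24 * 60 * 60 * 1000; // 24 hours
const METRICS_RETENTION_MS = 7 * 24 * 60 * 60 * 1000; // 7 days

// Rows older than these cutoffs are eligible for deletion.
function retentionCutoffs(now: Date): { requestsBefore: Date; metricsBefore: Date } {
  return {
    requestsBefore: new Date(now.getTime() - REQUEST_RETENTION_MS),
    metricsBefore: new Date(now.getTime() - METRICS_RETENTION_MS),
  };
}
```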
Retrieving Telemetry
Live Telemetry
Returns the latest telemetry snapshot for all active tunnels.
Metrics History
GET /tunnels/:id/metrics?from=<ISO8601>&to=<ISO8601>
Returns up to 5000 historical metrics points for a specific tunnel.
Request Logs
GET /tunnels/:id/requests?after=<ISO8601>&limit=<number>
Returns recent request logs (max 500 per request).
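A hypothetical client call against the metrics-history endpoint; the base URL, bearer-token auth, and function name are placeholders, not confirmed details of the rs-tunnel API:

```typescript
// Fetch historical metrics points for a tunnel over a time range.
// `apiBase` and `token` are placeholders for real deployment values.
async function fetchMetricsHistory(
  apiBase: string,
  token: string,
  tunnelId: string,
  from: Date,
  to: Date,
): Promise<unknown[]> {
  const params = new URLSearchParams({
    from: from.toISOString(),
    to: to.toISOString(),
  });
  const res = await fetch(`${apiBase}/tunnels/${tunnelId}/metrics?${params}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`metrics request failed: ${res.status}`);
  return res.json();
}
```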