5Stack uses Cloudflare Workers for serverless edge computing. The project includes a Backblaze B2 proxy worker for secure S3-compatible object storage access.

Overview

Cloudflare Workers provide:
  • Edge Computing: Run code at the edge, close to users
  • Low Latency: Sub-millisecond response times
  • Serverless: No infrastructure management required
  • Scalability: Automatic scaling to handle traffic

Project Structure

Workers are located in the cloudflare-workers/ directory:
cloudflare-workers/
└── backblaze-proxy/
    └── index.ts

Wrangler Configuration

The wrangler.jsonc file configures worker deployment:
wrangler.jsonc
{
  "$schema": "node_modules/wrangler/config-schema.json",
  "name": "5stack",
  "main": "./cloudflare-workers/backblaze-proxy/index.ts",
  "compatibility_date": "2025-04-29",
  "observability": {
    "enabled": true
  },
  "vars": { 
    "BUCKET_NAME": "5stack"
  }
}
name (string): Worker name displayed in the Cloudflare dashboard
main (string): Entry point file for the worker code
compatibility_date (string): Cloudflare Workers runtime version date
observability.enabled (boolean): Enable observability features (logs, traces, metrics)
vars (object): Public environment variables available to the worker

Backblaze B2 Proxy Worker

The Backblaze proxy worker provides secure, authenticated access to S3-compatible object storage (Backblaze B2) for downloading CS2 demo files.

Worker Architecture

cloudflare-workers/backblaze-proxy/index.ts
import { AwsClient } from "aws4fetch";

export default {
  async fetch(
    request: Request,
    env: {
      S3_ACCESS_KEY: string;
      S3_SECRET: string;
      BUCKET_NAME: string;
      S3_ENDPOINT: string;
    },
  ) {
    // Worker logic
  },
};

Features

1. Request Validation

Only accepts GET and HEAD requests:
if (!["GET", "HEAD"].includes(request.method)) {
  return new Response(null, {
    status: 405,
    statusText: "Method Not Allowed",
  });
}
2. Header Filtering

Filters out problematic headers before signing:
const UNSIGNABLE_HEADERS = [
  "x-forwarded-proto",
  "x-real-ip",
  "accept-encoding",
  "if-match",
  "if-modified-since",
  "if-none-match",
  "if-range",
  "if-unmodified-since",
];

function filterHeaders(headers: Headers, env: any) {
  return new Headers(
    Array.from(headers.entries()).filter(
      (pair) =>
        !(
          UNSIGNABLE_HEADERS.includes(pair[0]) ||
          pair[0].startsWith("cf-") ||
          ("ALLOWED_HEADERS" in env &&
            !env["ALLOWED_HEADERS"].includes(pair[0]))
        ),
    ),
  );
}
3. AWS Signature v4

Signs requests using AWS Signature Version 4:
const signedRequest = await new AwsClient({
  accessKeyId: env.S3_ACCESS_KEY,
  secretAccessKey: env.S3_SECRET,
  service: "s3",
}).sign(`https://${env.BUCKET_NAME}.${env.S3_ENDPOINT}/${file}`, {
  method: request.method,
  headers: filterHeaders(request.headers, env),
});
4. Response Proxying

Forwards the signed request and returns the response with download headers:
const response = await fetch(signedRequest.url, {
  method: signedRequest.method,
  headers: signedRequest.headers,
});

const headers = new Headers(response.headers);
headers.set("Content-Disposition", `attachment; filename="${file}"`);

return new Response(response.body, {
  headers,
  status: response.status,
  statusText: response.statusText,
});

Usage

Request a file from the proxy worker:
curl "https://demos.5stack.gg/?file=demos/match-123.dem"
The worker:
  1. Extracts the file parameter
  2. Signs the request with S3 credentials
  3. Fetches from Backblaze B2
  4. Returns with download headers
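Steps 1 and 3 can be sketched as a pure helper that maps the incoming proxy URL to the upstream Backblaze object URL; signing (step 2) is then applied to that URL by aws4fetch. The name `upstreamUrl` is hypothetical, not part of the worker's published source:

```typescript
// Hypothetical helper: extract the `file` query parameter and build the
// upstream Backblaze B2 object URL that the worker signs and fetches.
function upstreamUrl(
  requestUrl: string,
  env: { BUCKET_NAME: string; S3_ENDPOINT: string },
): string | null {
  const file = new URL(requestUrl).searchParams.get("file");
  if (!file) return null; // no file parameter: nothing to proxy
  return `https://${env.BUCKET_NAME}.${env.S3_ENDPOINT}/${file}`;
}

// upstreamUrl("https://demos.5stack.gg/?file=demos/match-123.dem",
//   { BUCKET_NAME: "5stack", S3_ENDPOINT: "s3.us-west-004.backblazeb2.com" })
// → "https://5stack.s3.us-west-004.backblazeb2.com/demos/match-123.dem"
```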

Environment Variables

The worker requires these secrets (not in wrangler.jsonc):
S3_ACCESS_KEY (string, required): Backblaze B2 application key ID
S3_SECRET (string, required): Backblaze B2 application key secret
S3_ENDPOINT (string, required): Backblaze B2 S3 endpoint (e.g., s3.us-west-004.backblazeb2.com)
BUCKET_NAME (string): S3 bucket name (defined in wrangler.jsonc)

Development

Prerequisites

Install dependencies, which include the Wrangler CLI:
yarn install
Wrangler is included as a dev dependency in package.json.

Local Development

Run the worker locally:
yarn wrangler dev
This starts a local development server with hot reload.

Testing Locally

Test the worker with curl:
curl "http://localhost:8787/?file=test.txt"

Setting Secrets Locally

Create a .dev.vars file for local secrets:
.dev.vars
S3_ACCESS_KEY=your_access_key
S3_SECRET=your_secret_key
S3_ENDPOINT=s3.us-west-004.backblazeb2.com
Never commit .dev.vars to version control. Add it to .gitignore.

Deployment

Authentication

Log in to Cloudflare:
yarn wrangler login
This opens a browser for authentication.

Setting Production Secrets

Set secrets for production:
yarn wrangler secret put S3_ACCESS_KEY
yarn wrangler secret put S3_SECRET
yarn wrangler secret put S3_ENDPOINT
You’ll be prompted to enter each value securely.

Deploying the Worker

Deploy to Cloudflare:
yarn wrangler deploy
Output:
Total Upload: XX.XX KiB / gzip: XX.XX KiB
Uploaded 5stack (X.XX sec)
Published 5stack (X.XX sec)
  https://5stack.your-subdomain.workers.dev
Current Deployment ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Custom Domain

Add a custom route in the Cloudflare dashboard:
  1. Go to Workers & Pages > 5stack > Settings > Triggers
  2. Add a route: demos.5stack.gg/*
  3. Select your zone: 5stack.gg
Or configure in wrangler.jsonc:
{
  "routes": [
    {
      "pattern": "demos.5stack.gg/*",
      "zone_name": "5stack.gg"
    }
  ]
}

Monitoring

Observability

Observability is enabled in wrangler.jsonc:
"observability": {
  "enabled": true
}
This provides:
  • Real-time logs
  • Request traces
  • Performance metrics

Viewing Logs

Stream worker logs:
yarn wrangler tail
Filter by status:
yarn wrangler tail --status error

Cloudflare Dashboard

View metrics in the Cloudflare dashboard:
  • Requests: Total requests over time
  • Errors: Error rate and types
  • Latency: Response time percentiles
  • CPU Time: Worker execution time

Advanced Configuration

Smart Placement

Enable smart placement for optimal performance:
wrangler.jsonc
{
  "placement": { "mode": "smart" }
}
Smart placement automatically runs your worker near your backend services.

Service Bindings

Connect multiple workers:
{
  "services": [
    { "binding": "MY_SERVICE", "service": "my-service" }
  ]
}

KV Storage

Add key-value storage:
{
  "kv_namespaces": [
    { "binding": "MY_KV", "id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" }
  ]
}
Create a KV namespace:
yarn wrangler kv:namespace create "MY_KV"

Durable Objects

Add stateful coordination:
{
  "durable_objects": {
    "bindings": [
      {
        "name": "MY_DURABLE_OBJECT",
        "class_name": "MyDurableObject",
        "script_name": "my-worker"
      }
    ]
  }
}

Dependencies

The worker uses the aws4fetch package for AWS request signing:
package.json
"dependencies": {
  "aws4fetch": "^1.0.20"
}
Cloudflare Workers type definitions:
package.json
"devDependencies": {
  "@cloudflare/workers-types": "^4.20250508.0"
}

Troubleshooting

Authentication Errors

Common causes:
  • Incorrect S3_ACCESS_KEY or S3_SECRET
  • Expired credentials
  • Wrong bucket permissions
Solutions:
  • Verify the credentials in the Backblaze dashboard
  • Check that the bucket permissions include S3 read access
  • Regenerate the application keys if needed
File Not Found

Common causes:
  • File doesn’t exist
  • Incorrect bucket name
  • Wrong endpoint
Solutions:
  • Verify the file exists in the Backblaze bucket
  • Check that BUCKET_NAME matches your bucket
  • Ensure S3_ENDPOINT is correct for your region
CPU Time Limits

Workers have CPU time limits. Optimize by:
  • Minimizing request processing
  • Using streaming responses for large files
  • Caching results when possible
Check CPU usage in the Cloudflare dashboard.
Deployment Failures

Common issues:
  • Not logged in: Run yarn wrangler login
  • Invalid configuration: Check wrangler.jsonc syntax
  • Missing dependencies: Run yarn install
View detailed errors:
yarn wrangler deploy --verbose

Security Best Practices

Follow these security guidelines for worker deployments:
  1. Never expose secrets in code
    • Use wrangler secret put for sensitive values
    • Don’t include secrets in wrangler.jsonc
  2. Validate all inputs
    • Check file parameter exists
    • Sanitize file paths to prevent directory traversal
  3. Limit allowed methods
    • Only allow necessary HTTP methods (GET, HEAD)
  4. Use CORS headers carefully
    • Restrict origins in production
    • Don’t use wildcard * for credentials
  5. Implement rate limiting
    • Use Cloudflare rate limiting rules
    • Prevent abuse and excessive costs
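Item 2 (input validation) could be sketched as follows. The helper name `isSafeFilePath` is hypothetical, not code that ships with the worker:

```typescript
// Hypothetical validation helper: reject missing values, malformed
// percent-encoding, absolute paths, and directory traversal ("..")
// before the file parameter is used to build the upstream URL.
function isSafeFilePath(file: string | null): boolean {
  if (!file) return false;
  let decoded: string;
  try {
    decoded = decodeURIComponent(file);
  } catch {
    return false; // malformed percent-encoding
  }
  return !decoded.startsWith("/") && !decoded.split("/").includes("..");
}

// isSafeFilePath("demos/match-123.dem") → true
// isSafeFilePath("../secrets")          → false
// isSafeFilePath("%2e%2e/secrets")      → false (decodes to "../secrets")
```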

Cost Optimization

Cloudflare Workers pricing:
  • Free tier: 100,000 requests/day
  • Paid plan: $5/month for 10 million requests
  • Additional requests: $0.50 per million
Optimization strategies:
  1. Cache responses - Use Cloudflare cache API
  2. Minimize CPU time - Optimize code execution
  3. Use HEAD requests - Check file existence without downloading
  4. Implement compression - Reduce bandwidth usage

Next Steps

  • Wrangler Documentation: Official Wrangler CLI documentation
  • Workers Runtime: Cloudflare Workers runtime APIs
  • Environment Variables: Configure environment variables
  • Docker Deployment: Deploy the main application
