LiveCodes is designed for performance, but proper configuration and deployment practices can significantly improve speed and user experience.

Performance Overview

LiveCodes performance can be optimized across multiple dimensions:
┌─────────────────────────────────────────────────────┐
│         Performance Optimization Areas              │
│                                                     │
│  1. Build Optimization                              │
│     └─ Bundle size, tree shaking, compression      │
│                                                     │
│  2. Runtime Performance                             │
│     └─ Compilation caching, lazy loading          │
│                                                     │
│  3. Network Optimization                            │
│     └─ CDN, caching headers, compression          │
│                                                     │
│  4. Server Optimization                             │
│     └─ Resource limits, horizontal scaling        │
│                                                     │
│  5. Client Optimization                             │
│     └─ Code splitting, defer/async loading        │
└─────────────────────────────────────────────────────┘

Build Optimization

Bundle Size Reduction

Production build:
NODE_ENV=production npm run build:app
This enables:
  • Minification (Terser)
  • Tree shaking (removes unused code)
  • Dead code elimination
  • CSS minification

Memory Allocation

Increase Node.js memory for large builds:
NODE_OPTIONS=--max-old-space-size=8192 npm run build
Recommendation by project size:
Project Size            | Memory | Build Time
Small (< 100 languages) | 4 GB   | ~2 min
Medium (100-200)        | 6 GB   | ~4 min
Large (200+)            | 8 GB+  | ~6 min
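The thresholds above can be scripted in a build helper. A minimal sketch, assuming the cut-offs from the table; `heapSizeMB` is an illustrative helper, not part of LiveCodes:

```typescript
// Pick a --max-old-space-size value (in MB) from the number of
// enabled languages, following the table above.
const heapSizeMB = (languageCount: number): number => {
  if (languageCount < 100) return 4096; // Small: 4 GB
  if (languageCount <= 200) return 6144; // Medium: 6 GB
  return 8192; // Large: 8 GB+
};

// e.g. when spawning the build from a script:
// spawn('npm', ['run', 'build'], {
//   env: { ...process.env, NODE_OPTIONS: `--max-old-space-size=${heapSizeMB(150)}` },
// });
```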

Selective Language Loading

Build only needed languages:
// custom-build.config.ts
export const languages = [
  'html',
  'css',
  'javascript',
  'typescript',
  'jsx',
  'tsx',
  'markdown',
  // ... only languages you need
];
Reduces bundle size by ~70% for typical use cases.

Asset Optimization

Images:
# Optimize images before build
find src/livecodes/assets -name '*.png' -exec optipng -o7 {} \;
find src/livecodes/assets -name '*.svg' -exec svgo {} \;
Fonts:
  • Use woff2 format (best compression)
  • Subset fonts to required characters
  • Lazy load non-critical fonts

Runtime Performance

Compilation Caching

LiveCodes automatically caches compiled code.

Location: src/livecodes/compiler/create-compiler.ts
const cache: {
  [key in Language]?: {
    content: string;
    compiled: string;
    info: string;
    processors: string;
    languageSettings: string;
  };
} = {};

const compile = async (content, language, config, options) => {
  // Check cache
  if (
    !options?.forceCompile &&
    cache[language]?.content === content &&
    cache[language]?.processors === enabledProcessors &&
    cache[language]?.languageSettings === languageSettings &&
    cache[language]?.compiled
  ) {
    return {
      code: cache[language]?.compiled || '',
      info: JSON.parse(cache[language]?.info || '{}'),
    };
  }
  
  // Compile and cache
  const compiled = await compiler(content, { config, language });
  cache[language] = { content, compiled, ... };
  
  return compiled;
};
Cache invalidation:
  • Content change
  • Processor change
  • Language settings change
  • Manual clearCache() call
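The invalidation rules above boil down to comparing each component of the cache key. A condensed sketch (the names `cacheEntry`, `shouldRecompile`, and `clearCache` here are illustrative, not the LiveCodes internals):

```typescript
interface CacheEntry {
  content: string;
  processors: string;
  languageSettings: string;
  compiled: string;
}

let cacheEntry: CacheEntry | undefined;

// Recompile when any key component changes (content, processors,
// language settings) or when the cache has been cleared.
const shouldRecompile = (
  content: string,
  processors: string,
  languageSettings: string,
): boolean =>
  !cacheEntry ||
  cacheEntry.content !== content ||
  cacheEntry.processors !== processors ||
  cacheEntry.languageSettings !== languageSettings;

const clearCache = () => {
  cacheEntry = undefined;
};
```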

Lazy Loading Compilers

Compilers load on-demand:
const load = (languages: Language[], config: Config) =>
  Promise.allSettled(
    languages.map((language) => {
      const languageCompiler = compilers[language];
      
      if (languageCompiler && !languageCompiler.fn) {
        // Load compiler only when needed
        return loadCompiler(language);
      }
      
      return Promise.resolve();
    })
  );
Benefits:
  • Faster initial load
  • Reduced memory usage
  • Parallel loading

Module Resolution Optimization

CDN Selection:
// Check CDN health and use fastest
const checkCDNs = async (testModule, preferredCDN) => {
  const cdns = [preferredCDN, ...modulesService.cdnLists.npm];
  
  for (const cdn of cdns) {
    try {
      const start = performance.now();
      const res = await fetch(modulesService.getUrl(testModule, cdn), {
        method: 'HEAD',
      });
      const latency = performance.now() - start;
      
      if (res.ok && latency < 500) {
        return cdn; // Use if fast and available
      }
    } catch {
      // Try next CDN
    }
  }
  
  return modulesService.cdnLists.npm[0]; // Fallback
};

Worker Offloading

Offload heavy operations to Web Workers:
// Compile in worker for non-blocking UI
const compileInWorker = (code: string, language: string) =>
  new Promise((resolve, reject) => {
    const worker = new Worker('compiler.worker.js');

    worker.onmessage = (e) => {
      resolve(e.data.compiled);
      worker.terminate();
    };

    // Reject (and clean up) if the worker throws, so callers are not
    // left with a promise that never settles
    worker.onerror = (err) => {
      reject(err);
      worker.terminate();
    };

    worker.postMessage({ code, language });
  });

Network Optimization

CDN Configuration

Use CDN for static assets:
# docker-compose.yml
environment:
  - CDN_URL=https://cdn.example.com
Benefits:
  • Geographic distribution
  • Parallel downloads (different domains)
  • Browser cache reuse across sites (note: modern browsers partition the HTTP cache per top-level site, which limits this benefit)

Cache Headers

Optimal caching strategy (automatically configured):

Immutable assets (/assets/*):
Cache-Control: public, max-age=31536000, s-maxage=31536000, immutable
Versioned bundles (/livecodes/*):
Cache-Control: public, max-age=31536000, s-maxage=31536000, immutable
Assets with hash (/livecodes/assets/*):
Cache-Control: public, max-age=14400, must-revalidate
HTML pages:
Cache-Control: no-cache, must-revalidate
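For deployments that serve these files from a Node server instead of Caddy, the same policy can be expressed as a path-to-header mapping. A sketch mirroring the rules above; `cacheControlFor` is an illustrative helper, not part of LiveCodes:

```typescript
// Map a request path to the Cache-Control policy listed above.
// Note: the more specific /livecodes/assets/ prefix must be checked
// before the broader /livecodes/ prefix.
const cacheControlFor = (path: string): string => {
  if (path.startsWith('/livecodes/assets/')) {
    // Hashed assets: short max-age with revalidation
    return 'public, max-age=14400, must-revalidate';
  }
  if (path.startsWith('/assets/') || path.startsWith('/livecodes/')) {
    // Immutable, versioned bundles: cache for a year
    return 'public, max-age=31536000, s-maxage=31536000, immutable';
  }
  // HTML pages: always revalidate
  return 'no-cache, must-revalidate';
};
```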

Compression

Enable in Caddy (automatic):
encode gzip zstd
Manual gzip compression:
# Pre-compress assets
find build -type f \( -name '*.js' -o -name '*.css' -o -name '*.html' \) -exec gzip -k -9 {} \;
Compression ratios:
File Type  | Original | Gzip  | Brotli | Savings
JavaScript | 100 KB   | 25 KB | 22 KB  | 75-78%
CSS        | 50 KB    | 10 KB | 8 KB   | 80-84%
HTML       | 20 KB    | 5 KB  | 4 KB   | 75-80%
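Ratios like these are easy to check locally with Node's built-in zlib module. A sketch (the sample input is arbitrary, repetitive text chosen to compress well; real assets will vary):

```typescript
import { gzipSync, brotliCompressSync } from 'node:zlib';

// Compare gzip and Brotli output sizes for a repetitive, text-like input
const sample = 'const x = 1;\n'.repeat(5000);
const original = Buffer.byteLength(sample);
const gzip = gzipSync(sample).length;
const brotli = brotliCompressSync(sample).length;

// Percentage saved relative to the uncompressed size
const savings = (compressed: number) =>
  Math.round((1 - compressed / original) * 100);

console.log(
  `original=${original}B gzip=${gzip}B (${savings(gzip)}%) brotli=${brotli}B (${savings(brotli)}%)`,
);
```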

HTTP/2 & HTTP/3

Caddy automatically enables HTTP/2.

Benefits:
  • Multiplexing (parallel requests)
  • Header compression
  • Server push (optional)
  • Reduced latency
Enable HTTP/3 (experimental):
servers {
  protocol {
    experimental_http3
  }
}

Server Optimization

Resource Limits

Docker Compose:
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 8G
        reservations:
          cpus: '2.0'
          memory: 4G
Node.js tuning:
environment:
  - NODE_OPTIONS=--max-old-space-size=8192
  - UV_THREADPOOL_SIZE=16  # Increase libuv thread pool

Horizontal Scaling

Load balancer configuration:
services:
  app:
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      rollback_config:
        parallelism: 1
        delay: 5s
Nginx load balancer:
upstream livecodes {
    least_conn;  # Route to least busy server
    
    server app1.example.com:443 max_fails=3 fail_timeout=30s;
    server app2.example.com:443 max_fails=3 fail_timeout=30s;
    server app3.example.com:443 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl http2;
    server_name livecodes.example.com;
    
    location / {
        proxy_pass https://livecodes;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}

Database Optimization (Valkey)

Persistence strategy:
# Optimize for performance: save if 10 changes occur within 5 minutes,
# and evict least-recently-used keys when the memory limit is reached
valkey-server \
  --save 300 10 \
  --stop-writes-on-bgsave-error no \
  --rdbcompression yes \
  --maxmemory 2gb \
  --maxmemory-policy allkeys-lru
Connection pooling:
const valkey = new Valkey({
  host: 'valkey',
  port: 6379,
  maxRetriesPerRequest: 3,
  enableReadyCheck: true,
  lazyConnect: true,
});

Broadcast Optimization

Channel cleanup:
// Automatic cleanup prevents memory leaks
const cleanupInterval = setInterval(() => {
  const now = Date.now();
  const timeout = 1000 * 60 * 20; // 20 minutes
  
  Object.keys(channels).forEach((key) => {
    if (now - channels[key].lastAccessed > timeout) {
      delete channels[key];
      io.in(key).disconnectSockets(true);
    }
  });
}, 60000); // Run every minute
Data size limits:
// Prevent memory exhaustion
channels[channel] = {
  channelToken,
  result: result.length < 300000 ? result : '',  // 300KB limit
  data: JSON.stringify(data).length < 500000 ? data : {},  // 500KB limit
  lastAccessed: Date.now(),
};
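The two guards above follow the same pattern, which can be factored into a small generic helper. A sketch; `clampBySize` is illustrative, not part of the LiveCodes server:

```typescript
// Keep a value only when its serialized size is under the limit,
// otherwise substitute an empty fallback to protect server memory.
const clampBySize = <T>(value: T, maxBytes: number, fallback: T): T =>
  JSON.stringify(value).length < maxBytes ? value : fallback;
```

Usage would then read, e.g., `result: clampBySize(result, 300_000, '')` and `data: clampBySize(data, 500_000, {})`.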

Client Optimization

Code Splitting

Split by route or feature:
// Dynamic import for heavy features
const loadEditor = async () => {
  const { monaco } = await import('./editor/monaco');
  return monaco;
};

const loadFormatter = async () => {
  const { prettier } = await import('./formatter/prettier');
  return prettier;
};

Prefetching

Preload likely-needed resources:
<!-- Prefetch commonly used compilers -->
<link rel="prefetch" href="https://esm.sh/[email protected]" as="script">
<link rel="prefetch" href="https://cdn.jsdelivr.net/npm/[email protected]" as="script">

<!-- Preconnect to CDNs -->
<link rel="preconnect" href="https://esm.sh">
<link rel="preconnect" href="https://cdn.jsdelivr.net">

Debouncing & Throttling

Compile on change (debounced):
let compileTimeout: number;

const onCodeChange = (code: string) => {
  clearTimeout(compileTimeout);
  
  compileTimeout = setTimeout(() => {
    compile(code, language, config);
  }, 500); // Wait 500ms after last change
};
Auto-save (throttled):
let lastSave = 0;
const saveInterval = 5000; // 5 seconds

const autoSave = (project: Project) => {
  const now = Date.now();
  
  if (now - lastSave > saveInterval) {
    localStorage.setItem('project', JSON.stringify(project));
    lastSave = now;
  }
};

Virtual Scrolling

For large console outputs:
// Only render the items visible in the viewport.
// scrollTop, viewportHeight and itemHeight come from the scroll container.
const renderConsole = (
  logs: ConsoleLog[],
  scrollTop: number,
  viewportHeight: number,
  itemHeight: number,
) => {
  const viewport = {
    start: Math.floor(scrollTop / itemHeight),
    end: Math.ceil((scrollTop + viewportHeight) / itemHeight),
  };

  return logs.slice(viewport.start, viewport.end).map(renderLog);
};

Monitoring & Profiling

Performance Metrics

Measure key operations:
const measurePerformance = async (operation: string, fn: () => Promise<any>) => {
  const start = performance.now();
  
  try {
    const result = await fn();
    const duration = performance.now() - start;
    
    console.log(`${operation}: ${duration.toFixed(2)}ms`);
    
    // Send to analytics
    if (duration > 1000) {
      logSlowOperation(operation, duration);
    }
    
    return result;
  } catch (error) {
    console.error(`${operation} failed:`, error);
    throw error;
  }
};

// Usage
const compiled = await measurePerformance('TypeScript compilation', () =>
  compiler.compile(code, 'typescript', config)
);

Core Web Vitals

Track user experience metrics:
import { getCLS, getFID, getFCP, getLCP, getTTFB } from 'web-vitals';

getCLS(console.log);  // Cumulative Layout Shift
getFID(console.log);  // First Input Delay
getFCP(console.log);  // First Contentful Paint
getLCP(console.log);  // Largest Contentful Paint
getTTFB(console.log); // Time to First Byte
Target values:
Metric | Good     | Needs Improvement | Poor
LCP    | ≤ 2.5 s  | 2.5 s - 4.0 s     | > 4.0 s
FID    | ≤ 100 ms | 100 ms - 300 ms   | > 300 ms
CLS    | ≤ 0.1    | 0.1 - 0.25        | > 0.25

Memory Profiling

Detect memory leaks:
let lastHeapSize = 0;

const checkMemory = () => {
  if ('memory' in performance) {
    const heap = (performance as any).memory.usedJSHeapSize;
    const delta = heap - lastHeapSize;
    
    console.log(`Heap: ${(heap / 1024 / 1024).toFixed(2)} MB (${delta > 0 ? '+' : ''}${(delta / 1024 / 1024).toFixed(2)} MB)`);
    
    lastHeapSize = heap;
  }
};

setInterval(checkMemory, 5000);

Benchmark Results

Compilation Speed

Typical compilation times (on MacBook Pro M1):
Language         | Lines | Cold   | Cached | Speedup
TypeScript       | 1000  | 450 ms | 2 ms   | 225x
Sass             | 500   | 120 ms | 1 ms   | 120x
Python (Pyodide) | 100   | 850 ms | 5 ms   | 170x
Markdown         | 1000  | 80 ms  | 1 ms   | 80x

Bundle Sizes

Build                 | Size   | Gzipped | Load Time (3G)
Full (all languages)  | 2.1 MB | 520 KB  | ~3.5 s
Core (10 languages)   | 580 KB | 145 KB  | ~1.0 s
Minimal (HTML/CSS/JS) | 320 KB | 85 KB   | ~0.6 s

Best Practices

Server deployment:
  1. Enable HTTP/2 and compression
  2. Use a CDN for static assets
  3. Configure proper cache headers
  4. Set resource limits in Docker
  5. Monitor memory usage
  6. Use horizontal scaling for high traffic
  7. Enable Valkey persistence
  8. Implement health checks

Client embedding:
  1. Lazy load the LiveCodes SDK
  2. Use lite mode for simple use cases
  3. Limit active languages
  4. Defer loading until user interaction
  5. Preconnect to CDNs
  6. Use loading="lazy" for playground iframes

Compilation:
  1. Cache compiled results
  2. Use Workers for heavy compilation
  3. Lazy load compiler libraries
  4. Minimize compiler bundle size
  5. Implement progressive compilation for large files
  6. Return early for cache hits

Troubleshooting Performance Issues

Symptoms: Long time to interactive
Solutions:
  1. Reduce bundle size (selective language loading)
  2. Enable code splitting
  3. Use CDN closer to users
  4. Preload critical resources
  5. Check network waterfall in DevTools
Symptoms: Long delay after typing
Solutions:
  1. Increase debounce delay (less frequent compilation)
  2. Check cache hit rate (should be >90%)
  3. Profile compiler execution
  4. Use faster CDN for compiler libraries
  5. Reduce processor chain length
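Checking the cache hit rate takes only a few lines of instrumentation. A sketch, assuming a hypothetical `stats` counter wired into the compiler's cache lookup:

```typescript
// Track compilation cache lookups to verify the >90% hit-rate target.
const stats = { hits: 0, misses: 0 };

// Call this from the cache lookup path with the lookup result
const recordLookup = (hit: boolean) => {
  hit ? stats.hits++ : stats.misses++;
};

// Fraction of lookups served from cache (0 when nothing was recorded)
const hitRate = (): number => {
  const total = stats.hits + stats.misses;
  return total === 0 ? 0 : stats.hits / total;
};
```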
Symptoms: Browser slowdown, crashes
Solutions:
  1. Clear cache periodically: compiler.clearCache()
  2. Limit console log history
  3. Remove old event listeners
  4. Dispose unused editors
  5. Check for memory leaks in custom compilers
Symptoms: Slow response times, timeouts
Solutions:
  1. Increase Node.js memory: NODE_OPTIONS=--max-old-space-size=8192
  2. Add more replicas (horizontal scaling)
  3. Optimize Valkey settings
  4. Implement rate limiting
  5. Use caching reverse proxy

Next Steps

Docker Configuration

Optimize Docker deployment

Self-Hosting

Production deployment guide

Services Architecture

Optimize service communication

Security Model

Security vs performance tradeoffs
