LiveCodes is designed for performance, but proper configuration and deployment practices can significantly improve speed and user experience.
LiveCodes performance can be optimized across multiple dimensions:
┌─────────────────────────────────────────────────────┐
│ Performance Optimization Areas                      │
│                                                     │
│ 1. Build Optimization                               │
│    └─ Bundle size, tree shaking, compression        │
│                                                     │
│ 2. Runtime Performance                              │
│    └─ Compilation caching, lazy loading             │
│                                                     │
│ 3. Network Optimization                             │
│    └─ CDN, caching headers, compression             │
│                                                     │
│ 4. Server Optimization                              │
│    └─ Resource limits, horizontal scaling           │
│                                                     │
│ 5. Client Optimization                              │
│    └─ Code splitting, defer/async loading           │
└─────────────────────────────────────────────────────┘
Build Optimization
Bundle Size Reduction
Production build:
NODE_ENV=production npm run build:app
This enables:
Minification (Terser)
Tree shaking (removes unused code)
Dead code elimination
CSS minification
Memory Allocation
Increase Node.js memory for large builds:
NODE_OPTIONS=--max-old-space-size=8192 npm run build
Recommendations by project size:

Project Size                 | Memory | Build Time
Small (less than 100 languages) | 4GB | ~2 min
Medium (100-200)             | 6GB   | ~4 min
Large (200+)                 | 8GB+  | ~6 min
Selective Language Loading
Build only needed languages:
// custom-build.config.ts
export const languages = [
  'html',
  'css',
  'javascript',
  'typescript',
  'jsx',
  'tsx',
  'markdown',
  // ... only the languages you need
];
Reduces bundle size by ~70% for typical use cases.
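As a sanity check of that figure, the reduction can be computed from the full-build vs. core-build sizes reported in the Benchmark Results section of this page (2.1 MB vs. 580 KB):

```typescript
// Bundle size reduction as a percentage, using the sizes from the
// Benchmark Results section (2.1 MB full build vs. 580 KB core build).
const reductionPercent = (fullKB: number, reducedKB: number): number =>
  Math.round((1 - reducedKB / fullKB) * 100);

console.log(reductionPercent(2100, 580)); // 72
```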
Asset Optimization
Images:
# Optimize images before build
find src/livecodes/assets -name '*.png' -exec optipng -o7 {} \;
find src/livecodes/assets -name '*.svg' -exec svgo {} \;
Fonts:
Use woff2 format (best compression)
Subset fonts to required characters
Lazy load non-critical fonts
Compilation Caching
LiveCodes automatically caches compiled code:
Location: src/livecodes/compiler/create-compiler.ts
const cache: {
  [key in Language]?: {
    content: string;
    compiled: string;
    info: string;
    processors: string;
    languageSettings: string;
  };
} = {};

const compile = async (content, language, config, options) => {
  // Check cache
  if (
    !options?.forceCompile &&
    cache[language]?.content === content &&
    cache[language]?.processors === enabledProcessors &&
    cache[language]?.languageSettings === languageSettings &&
    cache[language]?.compiled
  ) {
    return {
      code: cache[language]?.compiled || '',
      info: JSON.parse(cache[language]?.info || '{}'),
    };
  }

  // Compile and cache
  const compiled = await compiler(content, { config, language });
  cache[language] = { content, compiled, ... };
  return compiled;
};
The cache is invalidated on:
Content change
Processor change
Language settings change
Manual clearCache() call
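A minimal sketch of what manual invalidation could look like, assuming a cache object shaped like the one shown above (the CacheEntry type and implementation here are illustrative, not the actual LiveCodes code):

```typescript
// Illustrative per-language cache with a manual clearCache() escape hatch.
type CacheEntry = { content: string; compiled: string };

const cache: Record<string, CacheEntry> = {};

const clearCache = (): void => {
  // Delete every entry so the next compile() call misses the cache
  Object.keys(cache).forEach((key) => delete cache[key]);
};

cache.typescript = { content: 'const a = 1;', compiled: 'var a = 1;' };
clearCache();
console.log(Object.keys(cache).length); // 0
```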
Lazy Loading Compilers
Compilers load on-demand:
const load = (languages: Language[], config: Config) =>
  Promise.allSettled(
    languages.map((language) => {
      const languageCompiler = compilers[language];
      if (languageCompiler && !languageCompiler.fn) {
        // Load the compiler only when needed
        return loadCompiler(language);
      }
      return Promise.resolve();
    })
  );
Benefits:
Faster initial load
Reduced memory usage
Parallel loading
Module Resolution Optimization
CDN Selection:

// Check CDN health and use the fastest
const checkCDNs = async (testModule, preferredCDN) => {
  const cdns = [preferredCDN, ...modulesService.cdnLists.npm];
  for (const cdn of cdns) {
    try {
      const start = performance.now();
      const res = await fetch(modulesService.getUrl(testModule, cdn), {
        method: 'HEAD',
      });
      const latency = performance.now() - start;
      if (res.ok && latency < 500) {
        return cdn; // Use if fast and available
      }
    } catch {
      // Try the next CDN
    }
  }
  return modulesService.cdnLists.npm[0]; // Fallback
};
Worker Offloading
Offload heavy operations to Web Workers:
// Compile in a worker to keep the UI responsive
const compileInWorker = (code, language) =>
  new Promise((resolve) => {
    const worker = new Worker('compiler.worker.js');
    // Register the handler before posting, then clean up the worker
    worker.onmessage = (e) => {
      resolve(e.data.compiled);
      worker.terminate();
    };
    worker.postMessage({ code, language });
  });
Network Optimization
CDN Configuration
Use a CDN for static assets:

# docker-compose.yml
environment:
  - CDN_URL=https://cdn.example.com
Benefits:
Geographic distribution
Parallel downloads (different domains)
Browser cache across sites
Optimal caching strategy (automatically configured):
Immutable assets (/assets/*):
Cache-Control: public, max-age=31536000, s-maxage=31536000, immutable

Versioned bundles (/livecodes/*):
Cache-Control: public, max-age=31536000, s-maxage=31536000, immutable

Hashed assets (/livecodes/assets/*):
Cache-Control: public, max-age=14400, must-revalidate

HTML pages:
Cache-Control: no-cache, must-revalidate
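The strategy above amounts to a small path-to-header mapping. The helper below is illustrative only (the real configuration lives in the web server); note that the more specific /livecodes/assets/ prefix must be matched before /livecodes/:

```typescript
// Illustrative mapping from request path to the Cache-Control values above.
const cacheControlFor = (path: string): string => {
  // Most specific prefix first: hashed assets get the shorter max-age
  if (path.startsWith('/livecodes/assets/')) {
    return 'public, max-age=14400, must-revalidate';
  }
  if (path.startsWith('/assets/') || path.startsWith('/livecodes/')) {
    return 'public, max-age=31536000, s-maxage=31536000, immutable';
  }
  // HTML pages and everything else: always revalidate
  return 'no-cache, must-revalidate';
};

console.log(cacheControlFor('/livecodes/assets/logo.png')); // public, max-age=14400, must-revalidate
```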
Compression
Caddy enables compression automatically.

Manual gzip pre-compression:
# Pre-compress assets
find build -type f \( -name '*.js' -o -name '*.css' -o -name '*.html' \) -exec gzip -k -9 {} \;
Compression ratios:

File Type  | Original | Gzip  | Brotli | Savings
JavaScript | 100 KB   | 25 KB | 22 KB  | 75-78%
CSS        | 50 KB    | 10 KB | 8 KB   | 80-84%
HTML       | 20 KB    | 5 KB  | 4 KB   | 75-80%
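The Savings column is simply 1 - compressed/original, expressed as a percentage:

```typescript
// Savings percentage for a compressed asset, matching the table above
const savingsPercent = (originalKB: number, compressedKB: number): number =>
  Math.round((1 - compressedKB / originalKB) * 100);

console.log(savingsPercent(100, 25)); // 75 (JavaScript, gzip)
console.log(savingsPercent(50, 8)); // 84 (CSS, Brotli)
```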
HTTP/2 & HTTP/3
Caddy automatically enables HTTP/2.

Benefits:
Multiplexing (parallel requests)
Header compression
Server push (optional)
Reduced latency
Enable HTTP/3 (experimental):
servers {
  protocol {
    experimental_http3
  }
}
Server Optimization
Resource Limits
Docker Compose:

services:
  app:
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 8G
        reservations:
          cpus: '2.0'
          memory: 4G
Node.js tuning:

environment:
  - NODE_OPTIONS=--max-old-space-size=8192
  - UV_THREADPOOL_SIZE=16 # Increase the libuv thread pool
Horizontal Scaling
Load balancer configuration:

services:
  app:
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      rollback_config:
        parallelism: 1
        delay: 5s
Nginx load balancer:

upstream livecodes {
  least_conn;  # Route to the least busy server
  server app1.example.com:443 max_fails=3 fail_timeout=30s;
  server app2.example.com:443 max_fails=3 fail_timeout=30s;
  server app3.example.com:443 max_fails=3 fail_timeout=30s;
}

server {
  listen 443 ssl http2;
  server_name livecodes.example.com;

  location / {
    proxy_pass https://livecodes;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_cache_bypass $http_upgrade;
  }
}
Database Optimization (Valkey)
Persistence strategy:

# Optimized for performance
# (--save 300 10: snapshot if 10 changes within 5 minutes;
#  --maxmemory-policy allkeys-lru: evict least recently used keys)
valkey-server \
  --save 300 10 \
  --stop-writes-on-bgsave-error no \
  --rdbcompression yes \
  --maxmemory 2gb \
  --maxmemory-policy allkeys-lru
Connection pooling:

const valkey = new Valkey({
  host: 'valkey',
  port: 6379,
  maxRetriesPerRequest: 3,
  enableReadyCheck: true,
  lazyConnect: true,
});
Broadcast Optimization
Channel cleanup:

// Automatic cleanup prevents memory leaks
const cleanupInterval = setInterval(() => {
  const now = Date.now();
  const timeout = 1000 * 60 * 20; // 20 minutes
  Object.keys(channels).forEach((key) => {
    if (now - channels[key].lastAccessed > timeout) {
      delete channels[key];
      io.in(key).disconnectSockets(true);
    }
  });
}, 60000); // Run every minute
Data size limits:

// Prevent memory exhaustion
channels[channel] = {
  channelToken,
  result: result.length < 300000 ? result : '', // 300 KB limit
  data: JSON.stringify(data).length < 500000 ? data : {}, // 500 KB limit
  lastAccessed: Date.now(),
};
Client Optimization
Code Splitting
Split by route or feature:
// Dynamic imports for heavy features
const loadEditor = async () => {
  const { monaco } = await import('./editor/monaco');
  return monaco;
};

const loadFormatter = async () => {
  const { prettier } = await import('./formatter/prettier');
  return prettier;
};
Prefetching
Preload likely-needed resources:
<!-- Prefetch commonly used compilers -->
<link rel="prefetch" href="https://esm.sh/[email protected]" as="script">
<link rel="prefetch" href="https://cdn.jsdelivr.net/npm/[email protected]" as="script">

<!-- Preconnect to CDNs -->
<link rel="preconnect" href="https://esm.sh">
<link rel="preconnect" href="https://cdn.jsdelivr.net">
Debouncing & Throttling
Compile on change (debounced):
let compileTimeout: number;

const onCodeChange = (code: string) => {
  clearTimeout(compileTimeout);
  compileTimeout = setTimeout(() => {
    compile(code, language, config);
  }, 500); // Wait 500ms after the last change
};
Auto-save (throttled):
let lastSave = 0;
const saveInterval = 5000; // 5 seconds

const autoSave = (project: Project) => {
  const now = Date.now();
  if (now - lastSave > saveInterval) {
    localStorage.setItem('project', JSON.stringify(project));
    lastSave = now;
  }
};
Virtualized rendering for large console outputs:

// Only render the visible items
const renderConsole = (logs: ConsoleLog[]) => {
  const viewport = {
    start: Math.floor(scrollTop / itemHeight),
    end: Math.ceil((scrollTop + viewportHeight) / itemHeight),
  };
  return logs.slice(viewport.start, viewport.end).map(renderLog);
};
Monitoring & Profiling
Measure key operations:

const measurePerformance = async (operation: string, fn: () => Promise<any>) => {
  const start = performance.now();
  try {
    const result = await fn();
    const duration = performance.now() - start;
    console.log(`${operation}: ${duration.toFixed(2)}ms`);
    // Report slow operations to analytics
    if (duration > 1000) {
      logSlowOperation(operation, duration);
    }
    return result;
  } catch (error) {
    console.error(`${operation} failed:`, error);
    throw error;
  }
};

// Usage
const compiled = await measurePerformance('TypeScript compilation', () =>
  compiler.compile(code, 'typescript', config)
);
Core Web Vitals
Track user experience metrics:
import { getCLS, getFID, getFCP, getLCP, getTTFB } from 'web-vitals';

getCLS(console.log); // Cumulative Layout Shift
getFID(console.log); // First Input Delay
getFCP(console.log); // First Contentful Paint
getLCP(console.log); // Largest Contentful Paint
getTTFB(console.log); // Time to First Byte

Note: this is the web-vitals v2 API; v3 and later renamed these functions to onCLS, onFID, onLCP, etc.
Target values:

Metric | Good   | Needs Improvement | Poor
LCP    | ≤2.5s  | 2.5s-4.0s         | >4.0s
FID    | ≤100ms | 100ms-300ms       | >300ms
CLS    | ≤0.1   | 0.1-0.25          | >0.25
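The thresholds can be applied programmatically. A sketch for LCP (in seconds; the helper name and Rating type are illustrative):

```typescript
// Classify an LCP reading against the target values above
type Rating = 'good' | 'needs-improvement' | 'poor';

const rateLCP = (seconds: number): Rating =>
  seconds <= 2.5 ? 'good' : seconds <= 4.0 ? 'needs-improvement' : 'poor';

console.log(rateLCP(1.8)); // good
console.log(rateLCP(3.2)); // needs-improvement
console.log(rateLCP(5.0)); // poor
```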
Memory Profiling
Detect memory leaks:

let lastHeapSize = 0;

const checkMemory = () => {
  if ('memory' in performance) {
    const heap = (performance as any).memory.usedJSHeapSize;
    const delta = heap - lastHeapSize;
    console.log(
      `Heap: ${(heap / 1024 / 1024).toFixed(2)} MB (${delta > 0 ? '+' : ''}${(delta / 1024 / 1024).toFixed(2)} MB)`
    );
    lastHeapSize = heap;
  }
};

setInterval(checkMemory, 5000);
Benchmark Results
Compilation Speed
Typical compilation times (on MacBook Pro M1):
Language         | Lines | Cold  | Cached | Speedup
TypeScript       | 1000  | 450ms | 2ms    | 225x
Sass             | 500   | 120ms | 1ms    | 120x
Python (Pyodide) | 100   | 850ms | 5ms    | 170x
Markdown         | 1000  | 80ms  | 1ms    | 80x
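The Speedup column is simply the cold compilation time divided by the cached time:

```typescript
// Speedup factor from the benchmark table: cold time / cached time
const speedup = (coldMs: number, cachedMs: number): number =>
  Math.round(coldMs / cachedMs);

console.log(speedup(450, 2)); // 225 (TypeScript)
console.log(speedup(850, 5)); // 170 (Python/Pyodide)
```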
Bundle Sizes
Build                 | Size   | Gzipped | Load Time (3G)
Full (all languages)  | 2.1 MB | 520 KB  | ~3.5s
Core (10 languages)   | 580 KB | 145 KB  | ~1.0s
Minimal (HTML/CSS/JS) | 320 KB | 85 KB   | ~0.6s
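The load-time column is consistent with a simple bandwidth model. The ~1.2 Mbps effective 3G throughput used below is an assumption for illustration, not a measured value:

```typescript
// Rough download time for a gzipped bundle, assuming ~1.2 Mbps effective 3G throughput
const loadTimeSeconds = (gzippedKB: number, mbps = 1.2): number =>
  (gzippedKB * 8) / (mbps * 1000);

console.log(loadTimeSeconds(520).toFixed(1)); // 3.5 (full build)
console.log(loadTimeSeconds(145).toFixed(1)); // 1.0 (core build)
```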
Best Practices
For self-hosted deployments:
Enable HTTP/2 and compression
Use a CDN for static assets
Configure proper cache headers
Set resource limits in Docker
Monitor memory usage
Use horizontal scaling for high traffic
Enable Valkey persistence
Implement health checks

For embedded playgrounds:
Lazy load the LiveCodes SDK
Use lite mode for simple use cases
Limit active languages
Defer loading until user interaction
Preconnect to CDNs
Use loading="lazy" for playground iframes

For custom compilers:
Cache compiled results
Use Workers for heavy compilation
Lazy load compiler libraries
Minimize compiler bundle size
Implement progressive compilation for large files
Return early for cache hits
Troubleshooting
Slow Initial Load
Symptoms: long time to interactive.
Solutions:
Reduce bundle size (selective language loading)
Enable code splitting
Use a CDN closer to users
Preload critical resources
Check the network waterfall in DevTools
Slow Compilation
Symptoms: long delay after typing.
Solutions:
Increase the debounce delay (less frequent compilation)
Check the cache hit rate (should be >90%)
Profile compiler execution
Use a faster CDN for compiler libraries
Reduce the processor chain length
High Memory Usage
Symptoms: browser slowdown or crashes.
Solutions:
Clear the cache periodically: compiler.clearCache()
Limit console log history
Remove old event listeners
Dispose unused editors
Check for memory leaks in custom compilers
Server Overload
Symptoms: slow response times, timeouts.
Solutions:
Increase Node.js memory: NODE_OPTIONS=--max-old-space-size=8192
Add more replicas (horizontal scaling)
Optimize Valkey settings
Implement rate limiting
Use a caching reverse proxy
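Of the mitigations above, rate limiting can be sketched as a minimal fixed-window limiter. This is illustrative only; a production deployment would typically rate-limit at the proxy or back it with Valkey so limits are shared across replicas:

```typescript
// Minimal fixed-window rate limiter: allow `limit` requests per key per window
const createRateLimiter = (limit: number, windowMs: number) => {
  const hits = new Map<string, { count: number; windowStart: number }>();
  return (key: string, now = Date.now()): boolean => {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      // First request in a new window: reset the counter
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit; // reject once over the limit
  };
};

const allow = createRateLimiter(2, 60_000);
console.log(allow('client-a', 0)); // true
console.log(allow('client-a', 1)); // true
console.log(allow('client-a', 2)); // false
```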
Next Steps
Docker Configuration: optimize the Docker deployment
Self-Hosting: production deployment guide
Services Architecture: optimize service communication
Security Model: security vs. performance tradeoffs