## Overview
The workspace is implemented as a Next.js client-side application (app/confidential-ai/page.tsx) that establishes cryptographically verified connections to TEE-hosted language models through Attested TLS (aTLS).
All provider credentials and settings are stored entirely in the browser using `localStorage` and `sessionStorage`; no secrets ever reach the server.
## Core Features

### Streaming Chat Interface
The workspace provides a real-time streaming chat experience with:

- OpenAI-compatible API proxied through `/api/chat/completions`
- Server-Sent Events (SSE) for streaming responses
- Reasoning panel with configurable effort levels (`low`, `medium`, `high`)
- Cache salt for session-based prompt caching
- Markdown rendering with syntax highlighting and copy buttons
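As a minimal sketch, turning an SSE chunk into streamed deltas might look like the following. The function name and delta fields are illustrative assumptions, not the workspace's actual code; it only assumes the OpenAI-style `data: {...}` event lines with a `data: [DONE]` terminator.

```typescript
// Sketch: parse an OpenAI-compatible SSE chunk into streamed deltas.
interface StreamDelta {
  content?: string;           // visible answer tokens
  reasoning_content?: string; // tokens shown in the reasoning panel
}

function parseSseChunk(chunk: string): StreamDelta[] {
  const deltas: StreamDelta[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blanks and comments
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;            // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta;
    if (delta) deltas.push(delta);
  }
  return deltas;
}
```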
### Configure Provider

Set your provider base URL and optional API key in the session dialog. Settings persist in `localStorage` under the key `confidential-provider-settings-v1`.

### Wait for Attestation

The UI blocks messaging until the aTLS connection is established and attestation verification succeeds.
### File Upload Capabilities

The workspace supports secure file attachments with intelligent processing:

- **Supported formats:** plain text files (`.txt`, `.md`, `.json`, `.csv`, etc.) and PDF documents (`.pdf`) with automatic text extraction
- Maximum file size: 100 MB per file
- Files are read entirely in the browser before sending
- PDF text extraction uses `pdf.js` loaded from `/pdfjs/*`
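The size limit above can be enforced with a simple client-side check before the file is read; this is an illustrative sketch, and the function name and message format are assumptions, not the actual upload code:

```typescript
// Sketch: enforce the 100 MB per-file limit before reading an attachment.
const MAX_FILE_BYTES = 100 * 1024 * 1024; // 100 MB

function validateAttachment(name: string, sizeBytes: number): string | null {
  if (sizeBytes > MAX_FILE_BYTES) {
    return `${name} exceeds the 100 MB per-file limit`;
  }
  return null; // null means the file is accepted
}
```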
### Provider Settings Management

Provider configuration is managed entirely client-side.

Storage locations:

- `localStorage["confidential-provider-settings-v1"]`: base URL, model, and label
- `sessionStorage["confidential-provider-token"]`: bearer tokens (cleared on tab close)
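The persistence could look like the sketch below. Only the storage key comes from the app; the `ProviderSettings` shape and helper names are assumptions for illustration.

```typescript
// Sketch: persist provider settings under the documented localStorage key.
interface ProviderSettings {
  baseUrl: string;
  model: string;
  label: string;
}

// Minimal interface matching the parts of Web Storage used here.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const SETTINGS_KEY = "confidential-provider-settings-v1";

function saveSettings(store: KeyValueStore, settings: ProviderSettings): void {
  store.setItem(SETTINGS_KEY, JSON.stringify(settings));
}

function loadSettings(store: KeyValueStore): ProviderSettings | null {
  const raw = store.getItem(SETTINGS_KEY);
  return raw ? (JSON.parse(raw) as ProviderSettings) : null;
}
```

The bearer token is kept separately in `sessionStorage` so it is discarded when the tab closes.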
Defaults for these settings come from environment variables:

| Variable | Purpose |
|---|---|
| `NEXT_PUBLIC_VLLM_BASE_URL` | Default provider base URL |
| `NEXT_PUBLIC_VLLM_MODEL` | Default model identifier |
| `NEXT_PUBLIC_VLLM_PROVIDER_NAME` | Friendly provider name |
The `/api/chat/completions` proxy validates all provider URLs before forwarding requests.
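The exact policy lives in the route handler; as a hedged sketch, validation of a user-supplied base URL might look like the following. The https-only and no-loopback rules here are illustrative assumptions, not the actual policy.

```typescript
// Sketch: reject malformed, non-HTTPS, or loopback provider URLs.
function isAllowedProviderUrl(raw: string): boolean {
  try {
    const url = new URL(raw);
    if (url.protocol !== "https:") return false; // require HTTPS
    const host = url.hostname;
    if (host === "localhost" || host === "127.0.0.1" || host.endsWith(".local")) {
      return false; // reject obvious local targets
    }
    return true;
  } catch {
    return false; // not a parseable URL
  }
}
```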
## aTLS Connection Flow

The workspace uses Attested TLS to establish cryptographically verified connections to TEE-hosted models.

### Architecture
1. **WebSocket Proxy Connection.** The browser connects via WebSocket to the aTLS proxy (`NEXT_PUBLIC_ATLS_PROXY_URL`). The proxy bridges WebSocket to TCP, forwarding raw bytes to the TEE.
2. **TLS Handshake.** The WASM client (`lib/atlas-client.ts`) performs a TLS 1.3 handshake over the proxied connection.
3. **Attestation Verification.** The client fetches a TDX quote from the TEE and verifies the attestation using Intel DCAP. On success, the `onAttestation` callback receives `{ trusted, teeType, tcbStatus }`.

### Connection Implementation

The `connectAtls` function in `page.tsx` manages the connection lifecycle.
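The lifecycle can be pictured as a small state machine. The state names and events below are illustrative assumptions based on the three steps above, not the actual `connectAtls` code; only the `AttestationResult` fields come from the documented callback.

```typescript
// Sketch: connection lifecycle states; messaging unlocks only at "ready".
type ConnectionState = "idle" | "connecting" | "handshaking" | "verifying" | "ready" | "error";

interface AttestationResult {
  trusted: boolean;
  teeType: string;   // e.g. "TDX"
  tcbStatus: string; // e.g. "UpToDate"
}

function nextState(
  state: ConnectionState,
  event: { type: string; attestation?: AttestationResult }
): ConnectionState {
  switch (event.type) {
    case "socket_open":    // WebSocket to the proxy established
      return state === "connecting" ? "handshaking" : state;
    case "handshake_done": // TLS 1.3 handshake finished
      return state === "handshaking" ? "verifying" : state;
    case "attestation":    // DCAP verification result arrived
      return event.attestation?.trusted ? "ready" : "error";
    case "close":
      return "idle";
    default:
      return state;
  }
}
```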
## Security Properties

### Hardware-Verified Identity

TDX attestation proves the exact code running in the secure enclave through cryptographic measurements (MRTD, RTMR0-2).

### Intel DCAP Verification

Attestation quotes are verified using Intel's Data Center Attestation Primitives library, running entirely in-browser via WASM.

### TLS Channel Binding

Attestation is bound to the TLS session using Exported Keying Material (EKM), preventing man-in-the-middle attacks.

### Zero Server Trust

Provider credentials and attestation verification happen entirely in the browser. The Next.js server never sees secrets.
## Reasoning Streams and Cache Salts

### Reasoning Content

The workspace supports OpenAI-compatible reasoning streams. Set `reasoning_effort` to control the depth of reasoning (`low`, `medium`, or `high`).
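Putting the section's two knobs together, a chat request body might look like the sketch below. The helper name, model id, and exact field names are illustrative assumptions about the OpenAI-compatible API rather than the workspace's actual request builder.

```typescript
// Sketch: build a streaming chat request with reasoning effort and cache salt.
type ReasoningEffort = "low" | "medium" | "high";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(messages: ChatMessage[], effort: ReasoningEffort, cacheSalt: string) {
  return {
    model: "example-model",   // placeholder model identifier
    messages,
    stream: true,             // request SSE streaming
    reasoning_effort: effort, // low | medium | high
    cache_salt: cacheSalt,    // scopes prompt caching to this session
  };
}
```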
### Cache Salts

Cache salts enable session-based prompt caching: a per-session salt accompanies each request so that cached prompt prefixes stay scoped to the session.

## Guest Throttling
Optional guest throttling limits anonymous usage:

- Anonymous visitors get one confidential session before sign-in is required
- Session state is tracked via `localStorage["confidential-chat-guest-used"]`
- Active sessions are tracked via `sessionStorage["confidential-chat-guest-active"]`
- Authenticated users bypass all restrictions
Guest throttling requires Supabase integration. If Supabase is unavailable, guests have unlimited access.
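The rules above can be sketched as a single gate function. The function shape and the `"1"` flag values are illustrative assumptions; only the storage keys and the bypass rules come from this section.

```typescript
// Sketch: decide whether a visitor may start a confidential session.
interface WebStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function canStartGuestSession(
  local: WebStorage,
  session: WebStorage,
  isAuthenticated: boolean,
  supabaseAvailable: boolean
): boolean {
  if (isAuthenticated) return true;    // authenticated users bypass all limits
  if (!supabaseAvailable) return true; // no Supabase: guests are unlimited
  if (session.getItem("confidential-chat-guest-active") === "1") return true; // session already running in this tab
  if (local.getItem("confidential-chat-guest-used") === "1") return false;    // one free session already spent
  local.setItem("confidential-chat-guest-used", "1");
  session.setItem("confidential-chat-guest-active", "1");
  return true;
}
```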
## Configuration Reference

### Environment Variables
| Variable | Required | Description |
|---|---|---|
| `NEXT_PUBLIC_VLLM_BASE_URL` | No | Default provider base URL |
| `NEXT_PUBLIC_VLLM_MODEL` | No | Default model identifier |
| `NEXT_PUBLIC_VLLM_PROVIDER_NAME` | No | Friendly provider name |
| `NEXT_PUBLIC_ATLS_PROXY_URL` | Yes | WebSocket proxy URL (e.g., `wss://proxy.example.com`) |
| `NEXT_PUBLIC_ATTESTATION_TEST_MODE` | No | Skip real attestation (dev/test only) |
| `NEXT_PUBLIC_CONFIDENTIAL_ENABLE_GUEST_LIMITS` | No | Enable guest session throttling |
| `NEXT_PUBLIC_DEFAULT_SYSTEM_PROMPT` | No | Override default system prompt |
| `NEXT_PUBLIC_DEFAULT_MAX_TOKENS` | No | Default `max_tokens` (default: 4098) |
| `NEXT_PUBLIC_DEFAULT_TEMPERATURE` | No | Default temperature (default: 0.7) |
### Storage Keys
| Key | Storage | Contents |
|---|---|---|
| `confidential-provider-settings-v1` | localStorage | Provider base URL, model, label |
| `confidential-provider-token` | sessionStorage | Bearer tokens |
| `confidential-ai-cache-salt` | localStorage | UUID for prompt caching |
| `confidential-chat-guest-used` | localStorage | Guest usage flag |
| `confidential-chat-guest-active` | sessionStorage | Active guest session flag |
| `hero-initial-message` | sessionStorage | Landing page prompt handoff |
| `hero-uploaded-files` | sessionStorage | Landing page file handoff |
## Related Documentation

- **Attestation System**: Learn about TDX attestation and DCAP verification
- **Authentication**: Understand Supabase integration and access control
