Overview
Disco implements a dataflow-oriented architecture where computational work is organized into specialized "tiles" (processing units) that communicate via Tango's high-performance IPC primitives. This design enables horizontal scaling and efficient resource utilization. Disco tiles can run as threads in a single process or be distributed across multiple processes, with the same IPC mechanisms working transparently in both configurations.
Architecture
Transaction Processing Pipeline
The Disco architecture follows a filtering and validation pipeline.
Pipeline Stages
Network Ingress (net tiles)
- Receive raw packets from NICs via AF_XDP, DPDK, or other interfaces
- QUIC protocol handling
- Initial packet parsing and routing
- Multiple tiles for horizontal scaling
Signature Verification (verify tiles)
- Ed25519 signature verification
- Transaction signature validation
- Deduplication tagging (64-bit tags)
- Parallel processing across multiple cores
Deduplication (dedup tile)
- Filter duplicate transactions
- Tag-based deduplication scheme
- Maintains recent transaction cache
- Multiplexes from verify tiles
Block Packing (pack tile)
- Transaction scheduling
- Block assembly
- Compute unit optimization
- Priority fee handling
Execution (exec tiles)
- Transaction execution
- Runtime state updates
- Parallel execution across banking stages
- Slot sequencing
Block Store (store tile)
- Persistent block storage
- Shred management
- Historical data access
Shred Generation (shred tiles)
- FEC encoding
- Merkle tree generation
- Network distribution
Core Components
Tiles & Topology
Tile System (tiles.h)
Tiles are independent processing units with defined inputs and outputs.
Topology (topo/)
- Defines tile interconnections
- Configures mcache/dcache links
- NUMA-aware placement
- Load balancing configuration
- Place tiles near their NUMA node
- Minimize cross-NUMA communication
- Deep queues for buffering (mcache/dcache)
Network Processing
Network Tile (net/)
Features:
- Multi-NIC support
- AF_XDP zero-copy networking
- Hardware offload support (checksum, RSS)
- Flow steering to verify tiles
QUIC (quic/)
- RFC 9000/9001 compliant implementation
- TLS 1.3 encryption
- Connection multiplexing
- Flow control and congestion control
- Load balancing via connection ID steering
Netlink (netlink/)
- Linux kernel networking interface
- Route management
- Interface configuration
Transaction Processing
Signature Verification
Distributed across multiple verify tiles:
- Receive transaction from net tile (mcache)
- Read transaction payload (dcache)
- Verify Ed25519 signatures (Ballet)
- Compute dedup tag
- Publish to dedup tile (mcache)
Deduplication (dedup/)
Tag-based scheme:
- 64-bit tags generated during signature verification
- Recent transaction tracking
- Tag collision handling
- Configurable time window
Block Packing (pack/)
Scheduling algorithms:
- Greedy packing by compute units
- Priority fee optimization
- Account lock conflict resolution
- Microblock generation
Data Management
Shred Processing (shred/)
Shred structure:
- Data shreds: Transaction payloads
- Code shreds: FEC redundancy
- Merkle proofs for verification
Block Store (store/)
- Persistent storage backend
- Shred retrieval
- Snapshot management
- Archival operations
Archiver (archiver/)
- Long-term block storage
- Compression and deduplication
- Off-chain data export
Monitoring & Debugging
Metrics (metrics/)
Per-tile metrics:
- Message throughput
- Processing latency
- Queue depths
- Error counts
Diagnostics (diag/)
- Live tile inspection
- Performance profiling
- Bottleneck detection
Events (events/)
- Structured event logging
- Tile lifecycle events
- Error reporting
- Performance events
Trace (trace/)
- Dataflow tracing
- Message flow visualization
- Latency analysis
PCAP (pcap/)
- Packet capture support
- Replay capabilities
- Network debugging
Signal Encoding
Disco uses the Tango signature field (sig in fd_frag_meta_t) to encode routing and filtering information.
Protocol Constants
Additional Features
Specialized Components
Bundle Processing (bundle/)
- Transaction bundle handling
- Atomic bundle execution
- Bundle scheduling
Keyguard (keyguard/)
- Private key management
- Signature generation service
- Hardware security module integration
Genesis (genesis/)
- Genesis block processing
- Initial state configuration
Plugin System (plugin/)
- External plugin integration
- Custom tile implementations
GUI (gui/)
- Web-based monitoring interface
- Real-time metrics visualization
Performance Considerations
- NUMA Awareness: Place tiles and workspaces on same NUMA node
- Deep Queues: Mcache/dcache sizing for burst handling
- CPU Affinity: Pin tiles to specific cores
- Zero-Copy: Direct memory access via Tango dcache
- Horizontal Scaling: Multiple tiles of same type for throughput