The [tiles] section configures individual tile behavior, while [layout] controls how many of each tile to run and their CPU affinity.
Layout Configuration
The [layout] section controls CPU core assignment and tile counts.
Logical CPU cores to run Firedancer tiles on. Can be specified as:
- A single core: "0"
- A range: "0-10"
- A range with stride: "0-10/2" (useful for hyperthreading)
- Floating cores: "f5" (the next 5 tiles float on the original core set)
- "auto": Firedancer will attempt to determine the best layout for the system. When using auto, agave_affinity must also be set to auto.

Logical CPU cores the Agave subprocess and all of its threads are allowed to run on. Specified in the same format as the Firedancer affinity. Important: Do not overlap the Firedancer affinity with the Agave affinity, as Firedancer tiles expect exclusive use of their core.
Logical CPU cores to blocklist from being used by Firedancer when using auto affinity. Cores can be specified with an 'h' suffix to indicate that both the core and its hyperthread sibling should be blocklisted. By default, core 0 and its hyperthread sibling are blocklisted to prevent interference with OS kernel threads.
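The affinity options above live in the [layout] section. A sketch with disjoint core sets for Firedancer and Agave; the key names affinity and agave_affinity are assumptions modeled on the default Firedancer configuration:

```toml
[layout]
    # Firedancer tiles pinned to cores 1-10 (core 0 left for the kernel).
    affinity = "1-10"
    # Agave threads confined to a disjoint core set so they never
    # contend with a Firedancer tile for its exclusive core.
    agave_affinity = "11-31"
```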
Tile Counts
How many net tiles to run. Should be set to 1. Net tiles send and receive packets from the network device.
How many QUIC tiles to run. Should be set to 1 for current mainnet conditions. QUIC tiles parse incoming QUIC protocol messages and manage connections.
How many verify tiles to run. Verify tiles perform signature verification on incoming transactions. On modern hardware, each verify tile can handle around 20-40K transactions per second. Six tiles seem to be enough for current mainnet traffic. Verify tiles scale linearly: increase this count until the validator is no longer dropping incoming transactions.
How many bank tiles to run. Should be set to 4 for the perf and balanced scheduling modes. Bank tiles execute transactions. Because of consensus limits (~32K transactions per block), there is typically no need to use more than 4 on mainnet. Note: Bank tiles do not scale linearly due to locks and concurrent data structures in the Agave runtime.
How many shred tiles to run. Should be set to 1 for mainnet, 2 for testnet. Shred tiles distribute block data to the network when leader, and receive/retransmit when not leader.
How many resolver tiles to run. Should be set to 1. Resolver tiles resolve address lookup tables before transactions are scheduled.
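The tile counts above are also set in the [layout] section. A sketch matching the recommendations in this section; the *_tile_count key names are assumptions modeled on the default Firedancer configuration:

```toml
[layout]
    net_tile_count = 1      # send/receive packets from the network device
    quic_tile_count = 1     # parse QUIC messages, manage connections
    verify_tile_count = 6   # ~20-40K TPS each; scales linearly
    bank_tile_count = 4     # consensus-limited; does not scale linearly
    shred_tile_count = 1    # 1 for mainnet, 2 for testnet
    resolver_tile_count = 1 # resolve address lookup tables
```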
QUIC Tile Configuration
Which port to listen on for incoming regular UDP transactions that are not over QUIC. These could be votes, user transactions, or transactions forwarded from another validator.
Which port to listen on for incoming QUIC transactions. Currently this must be exactly 6 more than the regular_transaction_listen_port.
Maximum number of simultaneous QUIC connections which can be open. New connections which would exceed this limit will not be accepted. Must be >= 2 and a power of 2.
QUIC connection idle timeout. An idle connection will be terminated if it remains idle longer than this threshold.
Determines whether QUIC retry is enabled. QUIC retry is a feature that combats spamming of new connection requests.
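Putting the QUIC options together, a sketch of the [tiles.quic] section; aside from regular_transaction_listen_port, which appears above, every key name here is an assumption to be verified against the default configuration:

```toml
[tiles.quic]
    regular_transaction_listen_port = 9001
    # Must currently be exactly 6 more than the regular port.
    quic_transaction_listen_port = 9007
    # Must be >= 2 and a power of 2; new connections beyond this
    # limit are not accepted.
    max_concurrent_connections = 2048
    # Idle connections are terminated after this threshold
    # (assumed key name and unit).
    idle_timeout_millis = 10000
    # Enable QUIC retry to combat connection-request spam.
    retry = true
```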
Pack Tile Configuration
The pack tile orders incoming transactions to generate the most fees per compute resource.

Maximum number of transactions that will be stored before those with the lowest estimated profitability get dropped. The maximum allowed and default value is 65524. Not recommended to change.
When a transaction consumes fewer CUs than it requests, the bank and pack tiles work together to adjust block limits so a different transaction can consume the unspent CUs. This typically leads to producing blocks with more transactions.
The pack tile scheduling strategy, with different tradeoffs. Options:
- "perf": Fill the block as fast as possible using the highest-paying transactions. Results in consistently 100% full blocks. Use the default 4 bank tiles.
- "balanced" (default): Fill the block at a rate just fast enough to fill it by the end. Optimizes for revenue from priority fees. Use the default 4 bank tiles.
- "revenue": Fill blocks extremely lazily, saving most work for the end. Results in high MEV revenue but regularly unfull blocks. Use a high number of bank tiles (10-20). WARNING: Deprecated and will be removed.
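A sketch of the [tiles.pack] section covering the options above; the key names max_pending_transactions, use_consumed_cus, and schedule_strategy are assumptions modeled on the default Firedancer configuration:

```toml
[tiles.pack]
    # Maximum allowed and default value; not recommended to change.
    max_pending_transactions = 65524
    # Reclaim unspent CUs so additional transactions fit in the block.
    use_consumed_cus = true
    # "perf", "balanced" (default), or "revenue" (deprecated).
    schedule_strategy = "balanced"
```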
Shred Tile Configuration
The port to listen on for shreds to forward. This port is broadcast over gossip so other validators know how to reach this one.
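For example, in the [tiles.shred] section (shred_listen_port is an assumed key name):

```toml
[tiles.shred]
    # Broadcast over gossip so other validators know how to reach
    # this one.
    shred_listen_port = 8003
```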
Additional destinations to forward received shreds. Each new, valid shred that the validator receives will be forwarded to these addresses. Format: ["ip:port", ...]

Additional destinations for shreds produced when leader. Each shred that the validator produces when it is leader will be sent to these addresses. Format: ["ip:port", ...]

GUI Configuration
If the GUI is enabled. Firedancer has a GUI that can provide useful information about the validator.
The address to listen on. By default, the GUI is only accessible from the local machine.
The port to listen on for GUI HTTP connections.
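A sketch of the [tiles.gui] section; the key names enabled, gui_listen_address, and gui_listen_port are assumptions to be checked against the default configuration:

```toml
[tiles.gui]
    enabled = true
    # The defaults keep the GUI accessible only from the local
    # machine; change the address to expose it more widely.
    gui_listen_address = "127.0.0.1"
    gui_listen_port = 80
```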
By default the GUI listens on 127.0.0.1:80. You can configure this using the gui_listen_address and gui_listen_port options.

Metric Tile Configuration
The address to listen on for Prometheus metrics. By default, metrics are only accessible from the local machine.
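A sketch of the [tiles.metric] section; both key names are assumptions modeled on the default Firedancer configuration:

```toml
[tiles.metric]
    # Local-only by default; Prometheus scrapes the /metrics URI
    # at this address and port.
    prometheus_listen_address = "127.0.0.1"
    prometheus_listen_port = 7999
```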
The port to listen on for HTTP requests for Prometheus metrics. Firedancer serves metrics at a URI like 127.0.0.1:7999/metrics.

Configuration Examples
Tile Pipeline
Transactions flow through Firedancer in a linear pipeline: net receives packets, quic parses QUIC messages, verify checks signatures, resolver resolves address lookup tables, pack schedules, bank executes, and shred distributes the results.

It is suggested to run as many tiles as possible and tune the tile counts for maximum system throughput so that the Solana network can run faster. There are example tuned configurations in the
src/app/fdctl/config/ folder.
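Putting it together, a tuned configuration might look like the following sketch; every key name is an assumption modeled on the default Firedancer configuration, so cross-check against the example files in src/app/fdctl/config/:

```toml
[layout]
    # Firedancer and Agave on disjoint core sets.
    affinity = "1-12"
    agave_affinity = "13-31"
    net_tile_count = 1
    quic_tile_count = 1
    verify_tile_count = 6
    bank_tile_count = 4
    shred_tile_count = 1
    resolver_tile_count = 1

[tiles.pack]
    # Default strategy; use the default 4 bank tiles with it.
    schedule_strategy = "balanced"
```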