Quickstart

This guide will walk you through creating a working Byzantine fault-tolerant consensus network using Tashi Vertex.

Prerequisites

Before you begin, make sure you have:
  • Rust 1.70 or higher installed
  • Completed the installation steps
  • Added tokio to your dependencies for async runtime support
Cargo.toml
[dependencies]
tashi-vertex = "0.1.0"
tokio = { version = "1.49", features = ["full"] }

Generate keypairs

1. Create a keypair generator

First, create a simple program to generate Ed25519 keypairs for your nodes:
examples/key-generate.rs
use tashi_vertex::KeySecret;

fn main() {
    // Generate a new secret key to use for this node when signing transactions
    let secret = KeySecret::generate();
    let public = secret.public();

    println!("Secret: {}", secret);
    println!("Public: {}", public);
}
2. Generate keys for your nodes

Run the generator multiple times to create keys for each node in your network. For a 3-node network, run it 3 times:
cargo run --example key-generate
Save each secret and public key pair — you’ll need them to configure your network.
Store secret keys securely. In production, use environment variables or a secrets manager.
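As a minimal sketch of the environment-variable approach (the `TASHI_SECRET_KEY` variable name here is just an example, not something the library requires):

```rust
use std::env;

// Read a node's secret key from an environment variable instead of
// hardcoding it in source. The variable name is up to you;
// TASHI_SECRET_KEY is just an example.
fn load_secret(var: &str) -> Result<String, env::VarError> {
    env::var(var)
}

fn main() {
    match load_secret("TASHI_SECRET_KEY") {
        // The loaded string can then be parsed, e.g. `s.parse::<KeySecret>()`
        Ok(s) => println!("Loaded a {}-character secret", s.len()),
        Err(_) => eprintln!("TASHI_SECRET_KEY is not set"),
    }
}
```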

Build a consensus network

1. Set up the project

Create a new binary that will run a consensus node:
examples/pingback.rs
use std::str::from_utf8;
use tashi_vertex::{
    Context, Engine, KeySecret, Message, Options, Peers, Socket, Transaction,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // We'll build this step by step
    Ok(())
}
2. Configure the network peers

Define the peers in your consensus network. Each peer needs an address and public key:
// Parse the secret key for this node
let key: KeySecret = "YOUR_SECRET_KEY".parse()?;

// Initialize a set of peers for the network
let mut peers = Peers::new()?;

// Add other nodes in the network
peers.insert(
    "127.0.0.1:9001",
    &"PEER_1_PUBLIC_KEY".parse()?,
    Default::default()
)?;

peers.insert(
    "127.0.0.1:9002",
    &"PEER_2_PUBLIC_KEY".parse()?,
    Default::default()
)?;

// Add yourself to the peer set
peers.insert(
    "127.0.0.1:9000",
    &key.public(),
    Default::default()
)?;

println!(" :: Configured network for {} peers", 3);
The network can tolerate up to f = ⌊(n-1)/3⌋ Byzantine participants. A 3-node network can handle 0 Byzantine nodes, while a 4-node network can handle 1.
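The tolerance formula above can be checked directly. This is a standalone arithmetic sketch, not part of the Tashi Vertex API:

```rust
// Maximum number of Byzantine participants an n-node network tolerates:
// f = floor((n - 1) / 3). Integer division performs the floor for us.
fn max_byzantine(n: usize) -> usize {
    n.saturating_sub(1) / 3
}

fn main() {
    for n in [3, 4, 7, 10] {
        println!("{} nodes tolerate {} Byzantine participant(s)", n, max_byzantine(n));
    }
}
```

A practical consequence: going from 3 nodes to 4 is what buys you your first unit of fault tolerance.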
3. Initialize the runtime and bind a socket

Create a context for async operations and bind a network socket:
// Initialize a new Tashi Vertex context
// Manages async operations and resources
let context = Context::new()?;
println!(" :: Initialized runtime");

// Bind a socket to listen for incoming connections
let socket = Socket::bind(&context, "127.0.0.1:9000").await?;
println!(" :: Bound local socket");
4. Configure and start the consensus engine

Set up engine options and start the consensus process:
// Configure execution options for the engine
let mut options = Options::default();
options.set_report_gossip_events(true);
options.set_fallen_behind_kick_s(10);

// Start the engine and begin participating in the network
let engine = Engine::start(&context, socket, options, &key, peers)?;
println!(" :: Started the consensus engine");
Options::default() provides sensible defaults for most use cases. You can tune 15+ parameters for heartbeat intervals, latency thresholds, epoch sizing, and more.
5. Send a transaction

Submit data to the network for consensus ordering:
// Helper function to send a string as a transaction
fn send_transaction_cstr(engine: &Engine, s: &str) -> tashi_vertex::Result<()> {
    let mut transaction = Transaction::allocate(s.len() + 1);
    transaction[..s.len()].copy_from_slice(s.as_bytes());
    transaction[s.len()] = 0; // null-terminate
    engine.send_transaction(transaction)
}

// Send an initial PING transaction
send_transaction_cstr(&engine, "PING")?;
6. Receive consensus-ordered messages

Process messages from the consensus layer:
// Start receiving messages
while let Some(message) = engine.recv_message().await? {
    match message {
        Message::Event(event) => {
            if event.transaction_count() > 0 {
                println!(" > Received EVENT");
                println!("    - From: {}", event.creator());
                println!("    - Created: {}", event.created_at());
                println!("    - Consensus: {}", event.consensus_at());
                println!("    - Transactions: {}", event.transaction_count());
                
                // Process each transaction, trimming the null terminator
                // added by send_transaction_cstr before printing
                for tx in event.transactions() {
                    let tx_str = from_utf8(&tx)?.trim_end_matches('\0');
                    println!("    - >> {}", tx_str);
                }
            }
        }
        
        Message::SyncPoint(_) => {
            println!(" > Received SYNC POINT");
        }
    }
}

Run the network

1. Start multiple nodes

Open three separate terminals and run a node in each, using different keys and ports:
cargo run --example pingback -- \
  -B 127.0.0.1:9000 \
  -K <secret_key_1> \
  -P <public_key_2>@127.0.0.1:9001 \
  -P <public_key_3>@127.0.0.1:9002
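The invocation above passes keys and peers as command-line flags, while the code in this guide hardcodes them. A minimal way to parse the `<public_key>@<host:port>` peer specs with only the standard library might look like the sketch below; the flag names mirror the invocation above, and the real example's argument handling may differ:

```rust
// Split a "-P" peer spec of the form "<public_key>@<host:port>" into its
// two halves. Returns None if the '@' separator is missing or a half is empty.
fn parse_peer_spec(spec: &str) -> Option<(&str, &str)> {
    let (key, addr) = spec.split_once('@')?;
    if key.is_empty() || addr.is_empty() {
        return None;
    }
    Some((key, addr))
}

fn main() {
    // Collect every value that follows a -P flag in the process arguments.
    let args: Vec<String> = std::env::args().collect();
    let peers: Vec<(&str, &str)> = args
        .windows(2)
        .filter(|w| w[0] == "-P")
        .filter_map(|w| parse_peer_spec(&w[1]))
        .collect();

    for (key, addr) in &peers {
        println!("peer {} at {}", key, addr);
    }
}
```

Each parsed pair can then be fed to `peers.insert(addr, &key.parse()?, Default::default())` as shown in the configuration step.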
2. Observe consensus

Once all three nodes are running, they will reach consensus and output ordered events:
 :: Configured network for 3 peers
 :: Initialized runtime
 :: Bound local socket
 :: Started the consensus engine
 > Received SYNC POINT
 > Received EVENT
    - From: aSq9DsNNvGhY...
    - Created: 1770174202473826258
    - Consensus: 1770174208954261963
    - Transactions: 1
    - >> PING
 > Received EVENT
    - From: bTr8EtOOxHjZ...
    - Created: 1770174208954261966
    - Consensus: 1770174208954261964
    - Transactions: 1
    - >> PING
 > Received EVENT
    - From: cUs7FuPPyIkA...
    - Created: 1770174216230094057
    - Consensus: 1770174208954261965
    - Transactions: 1
    - >> PING
Notice that all nodes receive events in the same consensus order, even though they were created at different times. This is Byzantine fault-tolerant consensus in action!

Understanding the flow

  1. Context initialization — Sets up the async runtime for network operations
  2. Socket binding — Opens a network socket to communicate with peers
  3. Peer configuration — Defines all participants in the consensus network
  4. Engine start — Begins the consensus protocol with your configuration
  5. Transaction submission — Data you want to order through consensus
  6. Message processing — Receive consensus-ordered events and sync points

Message types

The engine returns two types of messages:

Event

A finalized event containing consensus-ordered transactions. Events include metadata like creator, creation time, consensus timestamp, and the ordered transactions.

SyncPoint

Session management decisions from the consensus layer, used for network coordination and state synchronization.

Next steps

Now that you have a working consensus network, explore these topics:
  • Learn about configuring the consensus engine with Options
  • Understand how to handle Byzantine failures
  • Explore advanced topics like state sharing and epoch sizing
  • Deploy your network across multiple machines
This quickstart uses localhost addresses for simplicity. In production, configure proper network addresses and secure your secret keys.
