Deploy YugabyteDB manually on virtual machines or bare metal servers for maximum control and customization of your database infrastructure.

Prerequisites

System Requirements

Operating System:
  • Ubuntu 18.04+, CentOS 7+, RHEL 7+, or Amazon Linux 2
  • AlmaLinux 8+ (recommended)
Hardware (per node):
  • CPU: 4+ cores (8-16 cores recommended for production)
  • RAM: 8+ GB (32-64 GB recommended for production)
  • Disk: 100+ GB SSD with 3000+ IOPS
  • Network: 1+ Gbps
Software:
  • Python 3.11+
  • OpenSSL
  • Network Time Protocol (NTP) configured
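Before downloading, you can sanity-check these prerequisites from a shell. The `check_python` helper below is our own illustration (not a YugabyteDB tool); it compares an `X.Y` version string against the 3.11 minimum listed above:

```shell
# Illustrative helper: does an "X.Y" version string meet the 3.11 minimum?
check_python() {
  major=${1%%.*}
  minor=${1#*.}; minor=${minor%%.*}
  [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 11 ]; }
}

# Check the locally installed interpreter (falls back to 0.0 if missing).
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])' 2>/dev/null || echo "0.0")
if check_python "$ver"; then
  echo "Python $ver meets the minimum"
else
  echo "Python $ver is below the 3.11 minimum"
fi

# OpenSSL should also be present:
openssl version 2>/dev/null || echo "openssl not found"
```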

Download YugabyteDB

# Download the latest version
wget https://downloads.yugabyte.com/releases/2.25.0.0/yugabyte-2.25.0.0-b180-linux-x86_64.tar.gz

# Extract the archive
tar xvfz yugabyte-2.25.0.0-b180-linux-x86_64.tar.gz

cd yugabyte-2.25.0.0/

# Run post-install script
./bin/post_install.sh

Deploy a Single-Node Cluster

Ideal for development and testing.
1. Start YugabyteDB

./bin/yugabyted start
Output:
Starting yugabyted...
✅ System checks
✅ Creating cluster
✅ Starting master
✅ Starting tserver
✅ Waiting for cluster to be ready

YugabyteDB started successfully!

UI: http://127.0.0.1:7000
YSQL: postgresql://127.0.0.1:5433
YCQL: 127.0.0.1:9042
2. Check status

./bin/yugabyted status
3. Connect to the database

# YSQL (PostgreSQL-compatible)
./bin/ysqlsh

# YCQL (Cassandra-compatible)
./bin/ycqlsh

Deploy a Multi-Node Cluster

Production-ready deployment across multiple servers.

Architecture Overview

For a 3-node cluster with RF=3:
  • Each node runs both YB-Master and YB-TServer
  • Data is replicated across all 3 nodes
  • Cluster survives 1 node failure
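The survivability claim follows from Raft quorum arithmetic: writes need a majority of the RF replicas, so a cluster tolerates floor((RF - 1) / 2) simultaneous node failures. A quick sketch:

```shell
# Node failures survived for a given replication factor: floor((RF - 1) / 2)
failures_tolerated() {
  echo $(( ($1 - 1) / 2 ))
}

failures_tolerated 3   # RF=3 -> 1
failures_tolerated 5   # RF=5 -> 2
```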

Network Configuration

Ensure the following ports are accessible between nodes:
| Port | Protocol | Service | Purpose |
|------|----------|---------|---------|
| 7100 | TCP | YB-Master RPC | Master-to-master communication |
| 7000 | TCP | YB-Master UI | Admin interface |
| 9100 | TCP | YB-TServer RPC | Server-to-server communication |
| 9000 | TCP | YB-TServer UI | Admin interface |
| 5433 | TCP | YSQL | PostgreSQL client connections |
| 9042 | TCP | YCQL | Cassandra client connections |
| 6379 | TCP | YEDIS | Redis client connections |

Single-Zone Deployment

1. Start the first node (Node 1)

# On node 1 (e.g., 192.168.1.11)
./bin/yugabyted start \
  --advertise_address=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1a \
  --fault_tolerance=zone
Key Flags:
  • --advertise_address: IP address that other nodes and clients use to reach this node
  • --cloud_location: Placement info (cloud.region.zone)
  • --fault_tolerance: Fault tolerance level (zone, region, cloud)
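`--cloud_location` is a dotted triple. The sketch below shows how a `cloud.region.zone` value decomposes (the parsing is ours, for illustration; yugabyted interprets the flag internally):

```shell
# Split a cloud.region.zone string into its three components.
location="aws.us-east-1.us-east-1a"
cloud=${location%%.*}                     # text before the first dot
zone=${location##*.}                      # text after the last dot
region=${location#"$cloud".}; region=${region%."$zone"}

echo "cloud=$cloud region=$region zone=$zone"
```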
2. Start the second node (Node 2)

# On node 2 (e.g., 192.168.1.12)
./bin/yugabyted start \
  --advertise_address=192.168.1.12 \
  --join=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1a \
  --fault_tolerance=zone
Key Flags:
  • --join: IP address of an existing node to join
3. Start the third node (Node 3)

# On node 3 (e.g., 192.168.1.13)  
./bin/yugabyted start \
  --advertise_address=192.168.1.13 \
  --join=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1a \
  --fault_tolerance=zone
4. Verify cluster status

On any node:
./bin/yugabyted status
Check the cluster configuration:
./bin/yb-admin -master_addresses 192.168.1.11:7100,192.168.1.12:7100,192.168.1.13:7100 \
  list_all_masters
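The `-master_addresses` argument is each node's IP paired with the master RPC port (7100), comma-separated. For larger clusters it can be generated from a node list, for example:

```shell
# Build the comma-separated host:port list that yb-admin expects.
nodes="192.168.1.11 192.168.1.12 192.168.1.13"
master_addresses=""
for ip in $nodes; do
  master_addresses="${master_addresses:+$master_addresses,}$ip:7100"
done
echo "$master_addresses"
```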

Multi-Zone Deployment

Deploy across multiple availability zones for zone-level fault tolerance.
1. Start node in Zone A

# Node 1 in us-east-1a
./bin/yugabyted start \
  --advertise_address=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1a \
  --fault_tolerance=zone
2. Start node in Zone B

# Node 2 in us-east-1b
./bin/yugabyted start \
  --advertise_address=192.168.2.12 \
  --join=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1b \
  --fault_tolerance=zone
3. Start node in Zone C

# Node 3 in us-east-1c
./bin/yugabyted start \
  --advertise_address=192.168.3.13 \
  --join=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1c \
  --fault_tolerance=zone

Secure Deployment

Enable TLS encryption and authentication.
1. Start first node with security enabled

./bin/yugabyted start \
  --secure \
  --advertise_address=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1a \
  --fault_tolerance=zone
This generates server certificates in $HOME/var/certs/.
2. Generate certificates for additional nodes

./bin/yugabyted cert generate_server_certs \
  --hostnames=192.168.1.12,192.168.1.13
Certificates are generated in $HOME/var/generated_certs/.
3. Copy certificates to other nodes

# Copy to node 2
scp -r $HOME/var/generated_certs/192.168.1.12/* \
  [email protected]:$HOME/var/certs/

# Copy to node 3
scp -r $HOME/var/generated_certs/192.168.1.13/* \
  [email protected]:$HOME/var/certs/
4. Start additional nodes with security

# Node 2
./bin/yugabyted start \
  --secure \
  --advertise_address=192.168.1.12 \
  --join=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1b \
  --fault_tolerance=zone

# Node 3
./bin/yugabyted start \
  --secure \
  --advertise_address=192.168.1.13 \
  --join=192.168.1.11 \
  --cloud_location=aws.us-east-1.us-east-1c \
  --fault_tolerance=zone

Advanced Configuration

Enable Backup/Restore

Install YB Controller for backup operations:
# Extract YBC from share directory
cd yugabyte-2.25.0.0/
mkdir ybc
tar -xvf share/ybc-*.tar.gz -C ybc --strip-components=1

# Start with backup daemon
./bin/yugabyted start \
  --backup_daemon=true \
  --advertise_address=192.168.1.11

Configure Memory Settings

Customize memory allocation:
./bin/yugabyted start \
  --advertise_address=192.168.1.11 \
  --tserver_flags="memory_limit_hard_bytes=34359738368"
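The flag takes plain bytes; 34359738368 is exactly 32 GiB. A quick conversion helper:

```shell
# Convert GiB to the byte value the flag expects.
gib_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_to_bytes 32   # 34359738368
```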

Enable Connection Manager

Use YSQL Connection Manager for connection pooling:
./bin/yugabyted start \
  --advertise_address=192.168.1.11 \
  --tserver_flags="enable_ysql_conn_mgr=true"

Custom Data Directories

Specify custom data locations:
./bin/yugabyted start \
  --advertise_address=192.168.1.11 \
  --base_dir=/data/yugabyte

Increase Verbosity for Debugging

./bin/yugabyted start \
  --advertise_address=192.168.1.11 \
  --master_flags="v=2" \
  --tserver_flags="v=2"

Cluster Management

Check Cluster Status

./bin/yugabyted status

Stop a Node

./bin/yugabyted stop

Destroy a Node

Warning: this permanently deletes all data and configuration on the node.
./bin/yugabyted destroy

Collect Logs

./bin/yugabyted collect_logs
Logs are archived to: $HOME/var/logs/yugabyted_logs_<timestamp>.tar.gz

System Configuration

Set System Limits

Edit /etc/security/limits.conf:
yugabyte soft nofile 1048576
yugabyte hard nofile 1048576
yugabyte soft nproc 12000
yugabyte hard nproc 12000
yugabyte soft core unlimited
yugabyte hard core unlimited
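After logging back in as the yugabyte user, confirm that the new limits apply to the session:

```shell
# Show the effective open-file limit for the current session.
nofile=$(ulimit -n)
echo "open files: $nofile (expected 1048576)"
```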

Configure NTP

# Install chrony
sudo yum install chrony -y

# Start and enable
sudo systemctl start chronyd
sudo systemctl enable chronyd

# Verify sync
chronyc tracking

Disable Transparent Huge Pages

echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
To persist the setting across reboots, add these commands to /etc/rc.local (or an equivalent systemd unit).
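The `enabled` file lists every mode with the active one in brackets (e.g. `always madvise [never]`). A small check, with the extraction logic shown against a sample string:

```shell
# Extract the bracketed (active) mode from a transparent_hugepage status line.
thp_mode() {
  echo "$1" | sed 's/.*\[\(.*\)\].*/\1/'
}

thp_mode "always madvise [never]"    # prints: never

# Against the live system (when the file exists):
if [ -r /sys/kernel/mm/transparent_hugepage/enabled ]; then
  thp_mode "$(cat /sys/kernel/mm/transparent_hugepage/enabled)"
fi
```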

Monitoring

Access admin UIs:
  • YB-Master UI: http://<node-ip>:7000
  • YB-TServer UI: http://<node-ip>:9000
Key metrics to monitor:
  • CPU and memory usage
  • Disk I/O and space
  • Network throughput
  • RPC latency
  • Tablet distribution

Troubleshooting

View Logs

# Master logs
tail -f $HOME/var/logs/yb-master.INFO

# TServer logs
tail -f $HOME/var/logs/yb-tserver.INFO

Check Process Status

ps aux | grep yb-master
ps aux | grep yb-tserver

Test Connectivity

# Check master RPC port
telnet <node-ip> 7100

# Check tserver RPC port  
telnet <node-ip> 9100

Next Steps

  • Multi-Region Deployment: deploy across geographic regions
  • Configure Security: set up authentication and encryption
