System Requirements

Before installing Flower Engine, ensure your system meets these requirements:

Operating System

Recommended: Ubuntu 20.04+, Debian 11+, Fedora 36+, or any modern Linux distribution. All dependencies install cleanly on most distributions.

Hardware Requirements

| Component  | Minimum      | Recommended  |
| ---------- | ------------ | ------------ |
| RAM        | 4 GB         | 8 GB+        |
| Disk Space | 1 GB         | 2 GB+        |
| CPU        | Dual-core    | Quad-core+   |
| GPU        | Not required | Not required |
Flower Engine uses CPU-based embeddings (all-MiniLM-L6-v2) for maximum compatibility. No GPU or CUDA drivers required!

Software Dependencies

Python

Version: 3.12 or newer. Check your version:
python3 --version

Rust

Version: Latest stable. Check your version:
cargo --version

Git

Required for cloning the repository. Check your version:
git --version
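The three checks above can be run in one pass. This is a convenience sketch, not part of the official setup; it only reports what is already installed:

```shell
# Report the version of each required tool, or flag it as missing.
for cmd in python3 cargo git; do
  if command -v "$cmd" >/dev/null 2>&1; then
    printf '%s: %s\n' "$cmd" "$("$cmd" --version 2>/dev/null | head -n1)"
  else
    printf '%s: not found\n' "$cmd"
  fi
done
```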

API Keys

You need at least one API key from these providers:
Google Gemini

Best for: Free tier and fast responses
  1. Go to Google AI Studio
  2. Create an API key
  3. Note the free tier limits
Key format: AIzaSy...

DeepSeek

Best for: Reasoning and complex tasks
  1. Sign up at platform.deepseek.com
  2. Generate an API key
  3. Add credits if needed
Key format: sk-...

Groq

Best for: Ultra-fast inference
  1. Sign up at console.groq.com
  2. Create an API key
  3. Check rate limits
Key format: gsk_...
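Before pasting a key into config.yaml, a quick sanity check on its prefix can catch copy/paste mistakes. This is a hypothetical helper based only on the key formats listed above; the patterns are illustrative, not official specifications:

```python
import re

# Expected key prefixes, taken from the "Key format" notes above.
# The character classes after the prefix are assumptions.
KEY_PATTERNS = {
    "gemini": r"^AIzaSy[A-Za-z0-9_-]+$",
    "deepseek": r"^sk-[A-Za-z0-9]+$",
    "groq": r"^gsk_[A-Za-z0-9]+$",
}

def looks_valid(provider: str, key: str) -> bool:
    """Return True if the key matches the provider's expected prefix."""
    pattern = KEY_PATTERNS.get(provider)
    return bool(pattern and re.match(pattern, key))

print(looks_valid("gemini", "AIzaSyExampleKey123"))  # True
print(looks_valid("groq", "sk-wrong-prefix"))        # False
```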

Installation Methods

Method 1: Automated Setup (Recommended)

The fastest way to get started. Our setup script handles everything:
1. Install Python 3.12+

sudo apt update
sudo apt install python3.12 python3.12-venv python3-pip
2. Install Rust

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
Verify installation:
rustc --version
cargo --version
3. Clone and set up

git clone https://github.com/ritz541/flower-engine.git
cd flower-engine
chmod +x setup.sh
./setup.sh
The script will:
  • ✓ Check for Python 3.12+ and Rust
  • ✓ Create a virtual environment in venv/
  • ✓ Install CPU-optimized PyTorch
  • ✓ Install all Python dependencies
  • ✓ Copy assets_example/ to assets/
  • ✓ Create config.yaml from template
The setup script installs PyTorch from the CPU-only index to avoid downloading 8GB of CUDA libraries. This keeps installation fast and system requirements minimal.

Method 2: Manual Installation

For advanced users who want full control:
1. Create a virtual environment

python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
2. Install PyTorch (CPU-only)

pip install torch --index-url https://download.pytorch.org/whl/cpu
This prevents the automatic download of CUDA dependencies.
3. Install Python dependencies

pip install -r requirements.txt
This installs:
  • fastapi>=0.100.0 - Backend web framework
  • uvicorn>=0.23.0 - ASGI server
  • websockets>=11.0.3 - WebSocket support
  • chromadb>=0.4.10 - Vector database
  • openai>=1.3.0 - OpenAI-compatible client
  • sentence-transformers>=2.2.2 - Local embeddings
  • pyyaml>=6.0.1 - YAML parsing
  • google-genai>=0.1.0 - Gemini support
4. Build the TUI

cd tui
cargo build --release
cd ..
The compiled binary will be at tui/target/release/tui.
5. Initialize assets and config

cp -r assets_example assets
cp config.yaml.example config.yaml

Configuration

Basic Configuration

Edit config.yaml with your API keys and preferences:
config.yaml
# Database Storage Path (Relative to execution directory)
database_path: "./chroma_db"

# LLM Providers - Add your API keys here
openai_base_url: "https://openrouter.ai/api/v1"
openai_api_key: "sk-or-v1-YOUR_KEY_HERE"

deepseek_api_key: "sk-YOUR_KEY_HERE"

gemini_api_key: "AIzaSy..."

groq_api_key: "gsk_..."

# Default model (used on startup)
default_model: "google/gemini-2.0-pro-exp-02-05:free"

# Available models (shown in model picker)
supported_models:
  - "google/gemini-2.0-pro-exp-02-05:free"
  - "openai/gpt-4o-mini"
  - "anthropic/claude-3-haiku"
  - "deepseek-chat"
  - "deepseek-reasoner"
Security: config.yaml is gitignored by default. Never commit this file to version control as it contains your API keys.
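A quick way to catch leftover placeholders before first launch is to scan config.yaml for them. This is a minimal stdlib-only sketch that handles only the flat `key: "value"` lines shown above; a real check would use PyYAML (already in requirements.txt) for proper parsing:

```python
# Hedged sketch: parse flat `key: "value"` lines and warn about
# placeholder API keys. Not a full YAML parser.
SAMPLE = '''
database_path: "./chroma_db"
openai_api_key: "sk-or-v1-YOUR_KEY_HERE"
gemini_api_key: "AIzaSyRealLookingKey"
'''

def parse_flat_yaml(text):
    """Extract top-level key/value pairs; skip comments and blanks."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            config[key.strip()] = value.strip().strip('"')
    return config

config = parse_flat_yaml(SAMPLE)
for key, value in config.items():
    if key.endswith("_api_key") and "YOUR_KEY" in value:
        print(f"placeholder still present: {key}")
```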

Advanced Configuration

Change the ChromaDB storage path:
database_path: "/path/to/your/chroma_db"
Useful for:
  • Storing data on a different drive
  • Sharing databases across installations
  • Backing up to cloud storage
You can switch between different OpenRouter keys for different rate limits:
openai_api_key: "sk-or-v1-free-tier-key"
# openai_api_key: "sk-or-v1-paid-tier-key"
Comment/uncomment as needed.
Add any OpenAI-compatible model:
supported_models:
  - "openai/gpt-4o"
  - "anthropic/claude-3-5-sonnet"
  - "meta-llama/llama-3.1-405b-instruct"
  - "google/gemini-2.0-flash-exp:free"
The engine will fetch pricing and availability at startup.

Asset Structure

Flower Engine uses YAML files to define worlds, characters, and rules. The default structure:
assets/
├── worlds/          # World definitions with lore and scenes
│   └── cyberpunk.yaml
├── characters/      # Character personas and backgrounds
│   └── mercenary.yaml
└── rules/           # Custom narrative rules
    └── gritty.yaml

Example World File

assets/worlds/cyberpunk.yaml
id: "example_world"
name: "Example Cyberpunk City"
start_message: "The neon lights flicker as you step out into the rain-slicked alleyway. Welcome to the future."
lore: "A high-tech, low-life metropolis ruled by rival corporations."

Example Character File

assets/characters/mercenary.yaml
id: "example_char"
name: "Mercenary"
persona: "A battle-hardened veteran looking for their next paycheck."

Example Rules File

assets/rules/gritty.yaml
id: "gritty"
name: "Gritty Realism"
prompt: "The world is dark, dangerous, and unforgiving. Actions have heavy consequences."
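When authoring your own asset files, a field check can catch typos early. The required-field sets below are inferred from the three examples above, not from the Flower Engine schema itself, so treat this as an illustrative sketch:

```python
# Required fields per asset kind, inferred from the example files above.
REQUIRED = {
    "world": {"id", "name", "start_message", "lore"},
    "character": {"id", "name", "persona"},
    "rules": {"id", "name", "prompt"},
}

def missing_fields(kind: str, data: dict) -> set:
    """Return the required fields absent from a loaded asset dict."""
    return REQUIRED[kind] - set(data)

world = {
    "id": "example_world",
    "name": "Example Cyberpunk City",
    "start_message": "The neon lights flicker...",
    "lore": "A high-tech, low-life metropolis ruled by rival corporations.",
}
print(missing_fields("world", world))             # empty set -> valid
print(missing_fields("rules", {"id": "gritty"}))  # fields still missing
```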

Learn more

See our complete guide to creating custom worlds, characters, and rules.

Running Flower Engine

Using the Launch Script

The simplest way to start:
./start.sh
This script:
  1. Clears the terminal
  2. Displays the Flower Engine ASCII banner
  3. Starts the FastAPI backend on port 8000
  4. Waits for backend initialization (with pretty loading animation)
  5. Launches the Rust TUI
  6. Handles graceful shutdown on Ctrl+C

Manual Launch (for development)

Run the backend and the TUI separately. First, start the backend:
source venv/bin/activate
python -m uvicorn engine.main:app --host 0.0.0.0 --port 8000 --reload
Then, in a second terminal, launch the TUI:
cd tui && cargo run
Running them separately is useful for development: the backend's --reload flag auto-restarts on code changes.

Verifying Installation

Test each component individually:
1. Test the backend

curl http://127.0.0.1:8000/
You should see a JSON response from FastAPI.
2. Check embeddings

source venv/bin/activate
python -c "from sentence_transformers import SentenceTransformer; \
           model = SentenceTransformer('all-MiniLM-L6-v2'); \
           print('Embeddings working!')"
The first run will download the model (~80MB).
3. Test ChromaDB

source venv/bin/activate
python -c "import chromadb; \
           client = chromadb.Client(); \
           print('ChromaDB working!')"
4. Test the TUI build

cd tui
cargo check
Should complete without errors.

Troubleshooting

Python version too old

If python3 --version shows a version below 3.12:

Ubuntu/Debian:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.12 python3.12-venv
macOS:
brew install [email protected]
Then use python3.12 explicitly in commands.
cargo: command not found

After installing Rust, you need to reload your shell:
source $HOME/.cargo/env
Or restart your terminal. Verify with:
cargo --version
Port 8000 already in use

Find and kill the conflicting process:
lsof -i :8000
kill -9 <PID>
Or change the port in start.sh and tui/src/main.rs.
ChromaDB installation fails

ChromaDB sometimes has issues with system build dependencies:

Linux:
sudo apt install build-essential
macOS:
xcode-select --install
Then reinstall:
pip uninstall chromadb
pip install chromadb
Embedding model download blocked

The embedding model downloads from Hugging Face. If the download is blocked:
  1. Download manually from huggingface.co/sentence-transformers/all-MiniLM-L6-v2
  2. Place in ~/.cache/huggingface/hub/
  3. Or set custom cache:
export TRANSFORMERS_CACHE=/path/to/cache
TUI cannot connect to the backend

Ensure the backend is fully started before the TUI connects:
  1. Start the backend manually:
venv/bin/python -m uvicorn engine.main:app --host 0.0.0.0 --port 8000
  2. Wait for "Application startup complete"
  3. In another terminal, start the TUI:
cd tui && cargo run
Sluggish TUI performance

If the TUI feels sluggish:
  1. Use a release build:
cd tui
cargo build --release
./target/release/tui
  2. Check backend logs for slow API responses
  3. Consider switching to a faster model like gemini-3-flash

Uninstallation

To completely remove Flower Engine:
# Remove the installation
rm -rf flower-engine/

# Remove the virtual environment
rm -rf venv/

# (Optional) Remove ChromaDB data
rm -rf chroma_db/

Next Steps

Quick Start Guide

Get your first session running in 5 minutes.

Understanding Architecture

Learn how the split-brain system works.

Configuration Guide

Deep dive into all configuration options.

Creating Content

Build custom worlds, characters, and rules.