
Get Started in Minutes

This guide will have you running your first Flower Engine session in under 5 minutes using our automated setup script.
Prerequisites: Make sure you have Python 3.12+, Rust & Cargo, and at least one API key ready (OpenRouter, Google Gemini, Groq, or DeepSeek).

Step 1: Clone and Setup

Clone the repository and run the automated setup script:
1. Clone the repository

git clone https://github.com/ritz541/flower-engine.git
cd flower-engine
2. Run the setup script

The setup script will:
  • Create a Python virtual environment
  • Install CPU-optimized dependencies
  • Copy example assets to assets/
  • Create a config.yaml from the template
chmod +x setup.sh
./setup.sh
The setup script installs PyTorch with CPU support to avoid 8GB CUDA downloads. This keeps setup fast and compatible with all systems.
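The steps above can be sketched roughly as follows. This is only an illustration of what setup.sh does, not its actual contents: the template filename and the pip commands (shown as comments, not executed) are assumptions, and the sketch runs in a scratch directory.

```shell
# Rough sketch of setup.sh's main steps (illustrative only; the real
# script may differ, and the template filename here is an assumption).
set -e
cd "$(mktemp -d)"   # demo in a scratch directory instead of the real repo

# 1. Create the Python virtual environment
python3 -m venv venv

# 2. Install CPU-only PyTorch so pip never pulls the CUDA wheels
#    (shown, not executed here):
#    venv/bin/pip install torch --index-url https://download.pytorch.org/whl/cpu
#    venv/bin/pip install -r requirements.txt

# 3. Copy example assets and create config.yaml from the template
mkdir -p assets
printf 'openai_api_key: "YOUR_KEY_HERE"\n' > config.example.yaml  # stand-in template
cp config.example.yaml config.yaml
```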

Step 2: Configure API Keys

Open config.yaml and add your API keys. You need at least one provider configured.
# OpenRouter - Recommended for widest model support
openai_base_url: "https://openrouter.ai/api/v1"
openai_api_key: "sk-or-v1-YOUR_OPENROUTER_KEY_HERE"

default_model: "google/gemini-2.0-pro-exp-02-05:free"
supported_models:
  - "google/gemini-2.0-pro-exp-02-05:free"
  - "openai/gpt-4o-mini"
  - "anthropic/claude-3-haiku"
Never commit config.yaml to version control. It’s automatically gitignored to protect your API keys.
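If you want to double-check before you ever stage anything, git check-ignore confirms the rule applies. The snippet below demonstrates it in a throwaway repo; in the real checkout you would just run the single check-ignore command:

```shell
# In the real checkout, just run:  git check-ignore -v config.yaml
# Demonstrated here in a throwaway repo:
cd "$(mktemp -d)"
git init -q demo && cd demo
echo "config.yaml" > .gitignore
touch config.yaml

git check-ignore -v config.yaml   # prints the matching rule; exit code 0 = ignored
```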

Step 3: Launch Flower Engine

Start the engine with the launch script:
./start.sh
The script will:
  1. Start the Python backend (FastAPI server on port 8000)
  2. Wait for the backend to be ready
  3. Launch the Rust TUI frontend
You’ll see the beautiful Flower Engine ASCII banner and enter the immersive terminal interface.
Press Ctrl+C to exit; the startup script catches it and shuts down both the backend and the frontend gracefully. Always exit this way to ensure a clean shutdown.
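Coordinated shutdown like this is typically done with a shell trap. A minimal sketch of the pattern, assuming start.sh works roughly this way (here sleep stands in for the real backend and TUI commands):

```shell
# Sketch of the start.sh shutdown pattern (illustrative; the real script's
# commands differ -- `sleep` stands in for the uvicorn backend here).
cleanup() {
  if kill -0 "$BACKEND_PID" 2>/dev/null; then
    kill "$BACKEND_PID" 2>/dev/null
    wait "$BACKEND_PID" 2>/dev/null
  fi
  echo "backend stopped"
}
trap cleanup EXIT INT TERM

# Start the backend in the background and remember its PID.
# Real script: venv/bin/python -m uvicorn engine.main:app --port 8000 &
sleep 60 &
BACKEND_PID=$!

# The Rust TUI would run here in the foreground; when it exits -- or when
# you press Ctrl+C -- the trap above stops the backend cleanly.
echo "backend pid: $BACKEND_PID"
```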

Step 4: Your First Session

Once the TUI launches, you’ll need to configure your session using slash commands:
1. Select a world

/world select example_world
This loads the cyberpunk world from assets/worlds/cyberpunk.yaml.
2. Select a character

/character select example_char
This loads the mercenary character from assets/characters/mercenary.yaml.
3. Create or load a session

/session new my_first_session
This creates a new session. All your conversation history will be saved here.
4. (Optional) Add custom rules

/rules add gritty
This applies the gritty realism rule from assets/rules/gritty.yaml.
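The worlds, characters, and rules referenced above are plain YAML files under assets/. As a rough illustration only, a world file might look something like the sketch below; every field name here is a guess, so check assets/worlds/cyberpunk.yaml for the real schema before writing your own:

```yaml
# Hypothetical shape of a world file -- consult assets/worlds/cyberpunk.yaml
# for the actual schema; these field names are illustrative assumptions.
id: example_world
name: "Neon Sprawl"
description: >
  A rain-soaked cyberpunk metropolis where corporations rule
  and information is the most valuable currency.
system_prompt: |
  You are the narrator of a gritty cyberpunk story. Keep the tone
  dark and atmospheric, and describe the city in sensory detail.
```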

Step 5: Start Roleplaying!

Now you’re ready to start your adventure. Simply type your actions and the AI narrator will respond:
You: I step out into the rain-soaked streets, pulling my coat tight against the cold.

Narrator: The neon lights of the city reflect in the puddles at your feet. 
A distant siren wails as you notice a shadowy figure watching you from 
across the street...

Switch Models

Use /model <model_id> to hot-swap LLM providers without restarting.

Session Management

Use /session list to see all saved sessions and /session load <name> to resume.

View History

Use /history to see your complete conversation history.

Cancel Generation

Type /cancel while the AI is responding to stop generation.

Available Commands

Here are the essential commands to get started:
| Command | Description |
| --- | --- |
| /model <model_id> | Switch to a different LLM model |
| /world select <id> | Load a world from assets |
| /character select <id> | Load a character from assets |
| /session new <name> | Create a new session |
| /session load <name> | Resume an existing session |
| /session list | List all available sessions |
| /rules add <rule_id> | Activate a custom rule |
| /rules clear | Remove all active rules |
| /history | View conversation history |
| /cancel | Stop the current AI generation |
For a complete command reference, see Commands Overview.

Next Steps

Create Custom Worlds

Learn how to build your own worlds with rich lore and custom system prompts.

Design Characters

Create detailed character personas for different playstyles.

Understand the Architecture

Deep dive into how the split-brain system works.

Configure RAG

Optimize the vector database for better context retrieval.

Troubleshooting

Check that port 8000 is available:
lsof -i :8000
If another process is using it, kill it or change the port in start.sh:
venv/bin/python -m uvicorn engine.main:app --host 0.0.0.0 --port 8001
Ensure the backend is running:
curl http://127.0.0.1:8000/
You should see a response. If not, check the backend logs.
Verify your config.yaml has the correct API key format:
  • OpenRouter: sk-or-v1-...
  • DeepSeek: sk-...
  • Gemini: AIzaSy...
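As a quick offline sanity check, you can match a key against those prefixes before making any network calls. This is a throwaway helper, not part of Flower Engine; note that the OpenRouter pattern must be matched before the generic sk- pattern:

```shell
# Hypothetical helper: classify an API key by its prefix.
# sk-or-v1- must come before the generic sk- case, or it would never match.
check_key_format() {
  case "$1" in
    sk-or-v1-*) echo "openrouter" ;;
    AIzaSy*)    echo "gemini" ;;
    sk-*)       echo "deepseek (or another sk- provider)" ;;
    *)          echo "unknown"; return 1 ;;
  esac
}

check_key_format "sk-or-v1-abc123"   # prints: openrouter
```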
Test your keys with a simple curl:
curl https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer YOUR_KEY"
Ensure you have the latest stable Rust:
rustup update stable
If you’re on an older system, you may need to update your C compiler.

Need more help?

Check the full installation guide for detailed system requirements and advanced setup options.
