Overview
Flower Engine currently uses a simple test client for WebSocket functionality testing. There is no formal pytest setup - testing focuses on end-to-end validation of the WebSocket protocol and LLM streaming.
Test Client
The test client (engine/test_client.py) is a WebSocket client that validates the complete message flow.
Test Client Location
`engine/test_client.py`
What It Tests
The test client verifies:
- Connection handshake - initial connection to the WebSocket endpoint
- State synchronization - `sync_state` message reception
- Prompt submission - sending user messages
- Thinking indicator - backend processing acknowledgment
- Streaming response - `chat_chunk` messages with content
- Stream completion - `chat_end` message with token counts
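The message types above travel as JSON text frames. As a rough illustration of their shape (the field names besides `type` are assumptions for this sketch, not the engine's actual schema):

```python
import json

# Hypothetical message shapes for the protocol steps listed above.
# Only the "type" values come from the docs; other fields are illustrative.
sync_state = {"type": "sync_state", "state": {"world": None, "character": None}}
chat_chunk = {"type": "chat_chunk", "content": "Once upon a time"}
chat_end = {"type": "chat_end", "input_tokens": 42, "output_tokens": 128}

# A client would round-trip frames like this:
frame = json.dumps(chat_chunk)
decoded = json.loads(frame)
assert decoded["type"] == "chat_chunk"
```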
Running the Test Client
Prerequisites
The backend must be running before executing the test.
Execute Test
Run the client directly, e.g. `python engine/test_client.py` from the repository root.
Expected Output
Successful test output looks like:
Test Client Implementation
From `engine/test_client.py`:
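A minimal sketch of the handshake → prompt → stream → completion flow such a client follows, with the real socket replaced by in-memory `asyncio.Queue` objects so the logic runs on its own (the `prompt` envelope and the backend frames below are illustrative assumptions):

```python
import asyncio
import json

async def run_client(recv_queue: asyncio.Queue, send_queue: asyncio.Queue) -> dict:
    """Walk the handshake, prompt, stream, and completion sequence."""
    # 1. Connection handshake: wait for the initial sync_state message.
    msg = json.loads(await recv_queue.get())
    assert msg["type"] == "sync_state", f"expected sync_state, got {msg['type']}"

    # 2. Prompt submission (the "prompt" envelope is a hypothetical shape).
    await send_queue.put(json.dumps({"type": "prompt", "content": "Hello"}))

    # 3. Collect chat_chunk messages until chat_end arrives.
    chunks = []
    while True:
        msg = json.loads(await recv_queue.get())
        if msg["type"] == "chat_chunk":
            chunks.append(msg["content"])
        elif msg["type"] == "chat_end":
            return {"text": "".join(chunks), "tokens": msg.get("output_tokens")}

async def main() -> dict:
    recv, send = asyncio.Queue(), asyncio.Queue()
    # Pre-load the frames a well-behaved backend would send.
    for frame in (
        {"type": "sync_state", "state": {}},
        {"type": "chat_chunk", "content": "Hi "},
        {"type": "chat_chunk", "content": "there"},
        {"type": "chat_end", "output_tokens": 2},
    ):
        recv.put_nowait(json.dumps(frame))
    return await run_client(recv, send)

result = asyncio.run(main())
print(result["text"])  # → Hi there
```

The real client would do the same loop over a live WebSocket connection instead of a queue.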
Message Flow
The test validates this exact sequence:
Creating Custom Tests
Testing Specific Commands
You can modify the test client to test specific commands:
Testing Error Handling
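One approach is to send a deliberately bad request and assert the backend's reply is an error. The `error` message type and its `detail` field are assumptions for this sketch, not the engine's documented schema:

```python
import json

def expect_error(reply_frame: str) -> str:
    """Assert the backend answered a malformed request with an error message."""
    reply = json.loads(reply_frame)
    assert reply["type"] == "error", f"expected error, got {reply['type']}"
    return reply.get("detail", "")

# Example: the kind of reply a backend might send for an unknown command.
frame = json.dumps({"type": "error", "detail": "unknown command: /frobnicate"})
print(expect_error(frame))  # → unknown command: /frobnicate
```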
Testing Stream Cancellation
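A cancellation test can request a stop mid-stream and count how many chunks arrive before the stream ends. The `cancel` message type is an assumption here, mirroring what pressing ESC in the TUI would trigger; queues again stand in for the live socket:

```python
import asyncio
import json

async def stream_with_cancel(recv: asyncio.Queue, send: asyncio.Queue,
                             cancel_after: int) -> int:
    """Consume chunks, request cancellation after N, and count chunks seen."""
    seen = 0
    while True:
        msg = json.loads(await recv.get())
        if msg["type"] == "chat_chunk":
            seen += 1
            if seen == cancel_after:
                # Hypothetical cancel envelope; a real test would then verify
                # that no further chunks arrive before chat_end.
                await send.put(json.dumps({"type": "cancel"}))
        elif msg["type"] == "chat_end":
            return seen

async def main() -> int:
    recv, send = asyncio.Queue(), asyncio.Queue()
    # A backend honoring cancellation truncates the stream:
    for frame in ({"type": "chat_chunk", "content": "a"},
                  {"type": "chat_chunk", "content": "b"},
                  {"type": "chat_end"}):
        recv.put_nowait(json.dumps(frame))
    return await stream_with_cancel(recv, send, cancel_after=2)

print(asyncio.run(main()))  # → 2
```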
Manual Testing
Testing the Full System
The most comprehensive test is running the full system:
- Initial State
  - Verify header shows “Connecting…”
  - Verify connection completes and shows “Synced”
- World Selection
  - Type `/` to open commands popup
  - Select “world select”
  - Choose a world
  - Verify header updates with world name
  - Type
- Character Selection
  - Type `/character select`
  - Choose a character
  - Verify header updates
  - Type
- Session Management
  - Type `/session new`
  - Verify session ID appears
  - Type `/session continue` to test session switching
  - Type
- Chat Functionality
  - Type a message and press Enter
  - Verify spinner appears
  - Verify streaming response
  - Verify response completes
- Model Switching
  - Type `/model` to see available models
  - Select a different model
  - Verify header updates
  - Send a test message
  - Type
- Cancellation
  - Start a long generation
  - Press ESC during generation
  - Verify stream stops
- Rules System
  - Type `/rules add` and select a rule
  - Type `/rules clear` to remove rules
  - Type
Testing with Different LLM Providers
Test each provider configured in `config.yaml`:
- Connection works
- Streaming is smooth
- Token counts are accurate
- Pricing information updates
Debugging Test Failures
Backend Logs
Check backend logs for errors:
WebSocket Connection Issues
If the test client can’t connect:
Message Format Errors
If messages aren’t parsing correctly:
Stream Not Completing
If `chat_end` never arrives:
- Check LLM API key is valid
- Check network connectivity
- Check for API rate limits
- Review backend logs for exceptions
Performance Testing
Measuring Tokens Per Second
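One way to instrument the client is to time the chunk-consuming loop. The loop below is a stand-in for real `chat_chunk` traffic, and the one-token-per-chunk approximation is an assumption for illustration (per-chunk token counts vary by provider):

```python
import time

def tokens_per_second(chunk_count: int, started: float, finished: float) -> float:
    """Crude throughput estimate, treating each chunk as roughly one token."""
    elapsed = finished - started
    return chunk_count / elapsed if elapsed > 0 else 0.0

start = time.perf_counter()
chunks = 0
for _ in range(50):          # stand-in for: async for each chat_chunk
    chunks += 1
end = time.perf_counter()
print(f"{tokens_per_second(chunks, start, end):.1f} tokens/sec")
```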
Modify the test client to measure performance:
Load Testing
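Concurrent connections can be driven with `asyncio.gather`. In this sketch each client body is a placeholder sleep rather than a real WebSocket connect/prompt/stream cycle against the backend:

```python
import asyncio

async def one_client(client_id: int) -> int:
    # Placeholder for: connect, receive sync_state, send prompt, drain stream.
    await asyncio.sleep(0.01)
    return client_id

async def load_test(n_clients: int) -> list[int]:
    # Run all simulated clients concurrently and collect their results in order.
    return await asyncio.gather(*(one_client(i) for i in range(n_clients)))

results = asyncio.run(load_test(10))
print(len(results))  # → 10
```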
Test multiple concurrent connections:
Future Testing Plans
Planned improvements to the testing infrastructure:
- Unit tests with pytest for individual components
- Integration tests for database operations
- Mock LLM responses for faster testing
- Automated CI/CD pipeline with GitHub Actions
- Code coverage reporting
- TUI automated testing with terminal replay
Next Steps
- Return to the contributing overview
- Learn about Python backend development
- Explore Rust TUI development