Prerequisites
Before installing simpE, ensure you have the following:

Required
Python 3.14+
simpE requires Python 3.14 or higher
uv Package Manager
Fast Python package and project manager
LM-Studio
Local LLM inference engine with API support
Git
For cloning the repository
System Requirements
- Operating System: Linux, macOS, or Windows (with WSL)
- RAM: 8GB minimum (16GB+ recommended for larger models)
- Storage: 2GB for simpE + model storage space
- Network: Internet connection for initial setup
Installation Steps
Install uv Package Manager
Install uv using the official installer, then verify that the uv command is available and prints its version.
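A sketch of the install-and-verify step. The one-liner below is uv's official Linux/macOS installer; Windows users should use the PowerShell installer from the uv documentation instead:

```shell
# Install uv if it is not already present (official installer script)
if ! command -v uv >/dev/null 2>&1; then
    curl -LsSf https://astral.sh/uv/install.sh | sh
fi

# Verify the installation; prints a version string such as "uv 0.5.0"
uv --version || echo "uv not on PATH - restart your terminal"
```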
Install LM-Studio
Download and install LM-Studio from lmstudio.ai. After installation:
- Launch LM-Studio
- Download a model (e.g., Llama 3.2 1B, Qwen 2.5 3B)
- Load the model
- Enable the local API server
The default API endpoint is http://127.0.0.1:1234/v1. You can verify it’s running by visiting http://127.0.0.1:1234 in your browser.

Clone simpE Repository
Clone the simpE repository from GitHub, navigate into the project directory, and verify the repository contents.
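The clone-and-inspect step might look like the following. The repository URL is a placeholder, since this guide does not spell it out — use the URL shown on the simpE GitHub page:

```shell
# Placeholder URL - substitute the real simpE repository URL from GitHub
REPO="https://github.com/OWNER/simpE.git"
if [ -d simpE ]; then
    echo "simpE already cloned"
else
    git clone "$REPO" simpE || echo "clone failed - check the repository URL"
fi
# Navigate in and list the contents (main.py, pyproject.toml, ...)
cd simpE 2>/dev/null && ls || echo "enter the simpE directory before continuing"
```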
Install Dependencies
Use uv to install all required dependencies. This will:
- Create a virtual environment (.venv)
- Install Python 3.14+ if not already available
- Install required packages from pyproject.toml: openai (for API communication) and questionary (for interactive CLI prompts)
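The dependency-installation command is uv sync, run from the project root (guarded here so it only executes where the project files actually exist):

```shell
# From the simpE project root (the directory containing pyproject.toml)
if command -v uv >/dev/null 2>&1 && [ -f pyproject.toml ]; then
    uv sync    # creates .venv and installs openai, questionary, etc.
else
    echo "run 'uv sync' from the simpE project root"
fi
```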
The uv sync command reads the dependency list from pyproject.toml.

Configuration
Basic Configuration
Open main.py in your favorite editor and configure the following parameters near the top of the file (lines 14-23):
main.py
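A sketch of what those parameters look like, using the defaults documented below (the actual main.py may differ in naming and layout):

```python
# main.py, lines 14-23 (sketch) - benchmark configuration
llm = ""                              # empty: use whichever model LM-Studio has loaded
baseurl = "http://127.0.0.1:1234/v1"  # OpenAI-compatible API endpoint
reasoning_effort = "low"              # "low", "medium", or "high"
tries = 100                           # tests to run per benchmark type
max_tokens = 512                      # max tokens the model may generate per response
```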
Configuration Options Explained
llm - Model Selection
Type: string
Default: "" (empty string)
Leave empty to automatically use the model currently loaded in LM-Studio; this is the recommended setting. The actual model name will be captured from API responses and used for naming result files.
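The capture described above might be implemented along these lines. This is a sketch, not simpE's actual code: the response shape follows the OpenAI chat-completions format, and the file-naming scheme is hypothetical:

```python
llm = ""  # empty: auto-detect the loaded model

# Sample of the relevant fields in an OpenAI-compatible API response
response = {"model": "llama-3.2-1b-instruct", "choices": []}

# Fall back to the model name reported by the server
model_name = llm or response["model"]
results_file = f"results_{model_name}.json"  # hypothetical naming scheme
print(results_file)
```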
baseurl - API Endpoint
Type: string
Default: "http://127.0.0.1:1234/v1"
The OpenAI-compatible API endpoint for your LLM server.
reasoning_effort - Reasoning Level
Type: string
Default: "low"
Options: "low", "medium", "high"
Controls the reasoning effort for models that support explicit reasoning modes.
tries - Test Count
Type: int
Default: 100
Number of tests to run per benchmark type.
Total runtime = tries × 3 benchmarks × average response time.
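A back-of-the-envelope check of that formula; the 2-second average response time below is an assumption — measure your own model's latency:

```shell
tries=100; benchmarks=3; avg_seconds=2   # avg_seconds is a guess, not a measured value
echo "$((tries * benchmarks * avg_seconds)) seconds total"   # prints: 600 seconds total
```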
max_tokens - Token Limit
Type: int
Default: 512
Maximum number of tokens the model can generate per response.

Advanced Configuration
For advanced users, additional configuration options are available further down in main.py.
Verification
Verify that your installation is working correctly.

Run a Quick Test
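The guide does not spell out the run command; assuming main.py is the entry point, the usual uv invocation would be uv run main.py (guarded here so it only executes inside the project):

```shell
if command -v uv >/dev/null 2>&1 && [ -f main.py ]; then
    uv run main.py    # runs the benchmark inside the project's .venv
else
    echo "run this from the simpE project root with uv installed"
fi
```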
Modify tries to 1 in main.py for a quick verification, then run the benchmark and confirm that it completes. Don’t forget to change tries back to 100 (or your preferred value) after testing.

Troubleshooting
Common Issues
'uv' command not found
Problem: Shell can’t find the uv command after installation.
Solution: Restart your terminal, or manually add uv’s install directory to your PATH and reload your shell profile.
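On Linux/macOS the installer typically places uv in ~/.local/bin (an assumption — check the installer's output for the actual path). Adding it to PATH might look like:

```shell
# Append this line to your shell profile (~/.bashrc, ~/.zshrc, ...):
export PATH="$HOME/.local/bin:$PATH"
# Then reload the profile, e.g.:  source ~/.bashrc
```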
Connection refused to LM-Studio
Problem: API endpoint not accessible.
Solutions:
- Verify LM-Studio is running
- Check that a model is loaded
- Ensure the API server is enabled (look for the server toggle in LM-Studio)
- Try accessing http://127.0.0.1:1234 in your browser
- Check if another application is using port 1234
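From the command line, a quick probe of the server; the /v1/models route is part of the OpenAI-compatible API that LM-Studio exposes (hedged in case your version differs), with a timeout so the check fails fast:

```shell
# Lists available models if the server is up; prints a hint otherwise
curl -s --max-time 5 http://127.0.0.1:1234/v1/models \
    || echo "no response - is the LM-Studio server enabled?"
```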
Python version mismatch
Problem: System Python is older than 3.14.
Solution: uv will automatically download and use Python 3.14+. Verify the interpreter version used by the project environment.
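One way to verify, assuming uv manages the environment (sketch; run from the project root):

```shell
# Inside the project, this should print something like "Python 3.14.0"
if command -v uv >/dev/null 2>&1 && [ -f pyproject.toml ]; then
    uv run python --version
else
    echo "from the project root, run: uv run python --version"
fi
```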
Permission denied creating directories
Problem: Can’t create logs/ or results/ directories.
Solution: Ensure you have write permissions in the simpE directory, or run from a directory where you have write access.
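A quick way to check writability and pre-create the two directories named above (run from the simpE directory; the exact chmod/chown fix depends on your setup):

```shell
if [ -w . ]; then
    mkdir -p logs results
    echo "logs/ and results/ ready"
else
    echo "no write access here - fix permissions or move the project"
fi
```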
Import errors for dependencies
Problem: Missing openai or questionary modules.
Solution: Re-run the dependency sync with uv. This will reinstall all dependencies from scratch.

Next Steps
Quick Start Guide
Run your first benchmark suite
Understanding Benchmarks
Learn about the three benchmark types
Analysis Guide
Deep dive into result analysis
GitHub Repository
View source code and contribute
System Architecture
Understanding how simpE works: simpE is designed to work with any OpenAI-compatible API endpoint, not just LM-Studio. You can point it to other local inference engines or even remote APIs by changing the baseurl parameter.
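For example, pointing simpE at a different server is a one-line change in main.py. The Ollama endpoint below is an illustrative assumption — use whatever base URL your server exposes:

```python
# main.py - swap the endpoint; any OpenAI-compatible /v1 base URL works
baseurl = "http://localhost:11434/v1"   # e.g. Ollama's OpenAI-compatible endpoint
```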