Esprit uses pytest as its testing framework with comprehensive coverage reporting and async support.

Running Tests

Basic Test Execution

Run all tests with verbose output:
make test
Or using Poetry directly:
poetry run pytest -v

Coverage Reports

Run tests with coverage analysis:
make test-cov
This generates:
  • Terminal coverage summary
  • HTML coverage report in htmlcov/
  • XML coverage report for CI/CD
View the HTML report:
open htmlcov/index.html  # macOS
xdg-open htmlcov/index.html  # Linux

Running Specific Tests

# Run a specific test file
poetry run pytest tests/agents/test_base_agent.py -v

# Run a specific test function
poetry run pytest tests/agents/test_base_agent.py::test_agent_initialization -v

# Run tests matching a pattern
poetry run pytest -k "agent" -v

# Run tests with specific markers
poetry run pytest -m "slow" -v
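
Because the configuration enables --strict-markers, pytest errors on any marker that is not registered. If a custom marker such as slow is used, it must be declared in pyproject.toml; a minimal sketch (the marker name and description are illustrative):

```toml
[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
]
```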

Test Configuration

Test configuration is defined in pyproject.toml:
[tool.pytest.ini_options]
minversion = "6.0"
addopts = [
    "--strict-markers",
    "--strict-config",
    "--cov=esprit",
    "--cov-report=term-missing",
    "--cov-report=html",
    "--cov-report=xml",
]
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_functions = ["test_*"]
python_classes = ["Test*"]
asyncio_mode = "auto"

Key Features

  • Strict mode: Catches configuration and marker errors
  • Auto coverage: Coverage runs automatically with all tests
  • Async support: asyncio_mode = "auto" enables automatic async test detection
  • Multiple formats: Terminal, HTML, and XML coverage reports
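
With asyncio_mode = "auto", pytest-asyncio collects plain async def test functions without any decorator. A minimal illustration (fetch_status is a hypothetical coroutine standing in for real async work):

```python
import asyncio

async def fetch_status() -> str:
    # Hypothetical coroutine standing in for real async work.
    await asyncio.sleep(0)
    return "completed"

async def test_fetch_status():
    # No @pytest.mark.asyncio needed in auto mode.
    assert await fetch_status() == "completed"
```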

Coverage Configuration

Coverage settings in pyproject.toml:
[tool.coverage.run]
source = ["esprit"]
omit = [
    "*/tests/*",
    "*/migrations/*",
    "*/__pycache__/*"
]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "if self.debug:",
    "if settings.DEBUG",
    "raise AssertionError",
    "raise NotImplementedError",
    "if 0:",
    "if __name__ == .__main__.:",
    "class .*\\bProtocol\\):",
    "@(abc\\.)?abstractmethod",
]

Excluded from Coverage

  • Test files themselves
  • Debug code blocks
  • Abstract methods and protocols
  • if __name__ == "__main__" blocks
  • NotImplementedError raises
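
To illustrate how the exclude_lines patterns apply in practice, here is a hypothetical class whose __repr__ and explicitly-marked method are both skipped by coverage:

```python
class ScanResult:
    """Hypothetical class showing lines the exclude_lines patterns skip."""

    def __init__(self, status: str) -> None:
        self.status = status

    def __repr__(self) -> str:  # excluded by the "def __repr__" pattern
        return f"ScanResult(status={self.status!r})"

    def export_pdf(self) -> None:  # pragma: no cover
        # Explicitly excluded: exercised only in manual runs.
        raise NotImplementedError
```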

Test Structure

Directory Layout

tests/
├── __init__.py
├── conftest.py              # Shared fixtures and configuration
├── agents/
│   ├── __init__.py
│   └── test_base_agent.py
├── config/
│   ├── __init__.py
│   └── test_config_launchpad_theme.py
├── gui/
│   ├── __init__.py
│   ├── test_server.py
│   └── test_tracer_bridge.py
└── interface/
    └── __init__.py

Test File Naming

Tests must follow these patterns:
  • Files: test_*.py or *_test.py
  • Functions: test_*
  • Classes: Test*

Writing Tests

Basic Test Example

import pytest
from esprit.agents.base_agent import BaseAgent

def test_agent_initialization():
    """Test that agent initializes with correct defaults."""
    agent = BaseAgent(name="test_agent")
    assert agent.name == "test_agent"
    assert agent.is_active is True

Async Test Example

import pytest
from esprit.agents.scan_agent import ScanAgent

@pytest.mark.asyncio  # optional here: asyncio_mode = "auto" detects async tests
async def test_agent_scan_execution():
    """Test async scan execution."""
    agent = ScanAgent(target="https://example.com")
    result = await agent.execute_scan()
    assert result.status == "completed"

Using Fixtures

import pytest
from esprit.config import Config

@pytest.fixture
def mock_config():
    """Provide a mock configuration for testing."""
    return Config(
        llm_model="test/model",
        api_key="test-key",
    )

def test_with_fixture(mock_config):
    """Test using the mock config fixture."""
    assert mock_config.llm_model == "test/model"

Mocking with pytest-mock

import pytest
from esprit.tools.browser import BrowserTool

def test_browser_navigation(mocker):
    """Test browser navigation with mocked HTTP."""
    mock_response = mocker.Mock()
    mock_response.status_code = 200
    mocker.patch('requests.get', return_value=mock_response)
    
    browser = BrowserTool()
    result = browser.navigate("https://example.com")
    assert result.success is True
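
pytest-mock's mocker is a thin wrapper over the standard library's unittest.mock, so the same patch can be written with the stdlib directly. A self-contained sketch (fetch_status is a hypothetical helper, not part of Esprit):

```python
from unittest.mock import MagicMock, patch
import urllib.request

def fetch_status(url: str) -> int:
    # Hypothetical helper: returns the HTTP status of a GET request.
    with urllib.request.urlopen(url) as resp:
        return resp.status

def test_fetch_status_mocked():
    # MagicMock supports the context-manager protocol out of the box.
    mock_resp = MagicMock()
    mock_resp.__enter__.return_value.status = 200
    with patch("urllib.request.urlopen", return_value=mock_resp):
        assert fetch_status("https://example.com") == 200
```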

Parametrized Tests

import pytest

@pytest.mark.parametrize("input_url,expected_valid", [
    ("https://example.com", True),
    ("http://localhost:8080", True),
    ("invalid-url", False),
    ("", False),
])
def test_url_validation(input_url, expected_valid):
    """Test URL validation with multiple inputs."""
    from esprit.utils.validators import is_valid_url
    assert is_valid_url(input_url) == expected_valid
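
For reference, a validator consistent with the table above could look like the following; this is a hypothetical sketch, not the actual esprit.utils.validators implementation:

```python
from urllib.parse import urlparse

def is_valid_url(url: str) -> bool:
    # A URL is considered valid when it has an http/https scheme
    # and a non-empty host component.
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```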

Available Test Dependencies

Development dependencies for testing (from pyproject.toml):
  • pytest ^8.4.0 - Core testing framework
  • pytest-asyncio ^1.0.0 - Async test support
  • pytest-cov ^6.1.1 - Coverage plugin
  • pytest-mock ^3.14.1 - Mocking utilities

Code Quality Checks

Run All Quality Checks

make check-all
This runs:
  1. Formatting (ruff format)
  2. Linting (ruff + pylint)
  3. Type checking (mypy + pyright)
  4. Security (bandit)

Individual Quality Commands

1. Format code

make format
Formats code using Ruff (100-character line length).

2. Lint code

make lint
Runs the Ruff linter with auto-fix, plus Pylint for additional checks.

3. Type check

make type-check
Runs both mypy and pyright type checkers in strict mode.

4. Security check

make security
Runs the Bandit security linter to detect common security issues.

Pre-commit Hooks

Pre-commit hooks run automatically before each commit:
# Run hooks manually on all files
make pre-commit

# Or using poetry directly
poetry run pre-commit run --all-files
Hooks are installed during setup:
poetry run pre-commit install

Continuous Integration

Tests run automatically in CI/CD pipelines:
  • ✅ All tests must pass
  • ✅ Code coverage must meet threshold
  • ✅ Type checking must pass (mypy + pyright)
  • ✅ Linting must pass (ruff + pylint)
  • ✅ Security checks must pass (bandit)

Testing Best Practices

Test names should clearly describe what is being tested:
# Good
def test_scan_agent_handles_invalid_url_gracefully():
    pass

# Bad
def test_agent():
    pass
Define reusable fixtures in conftest.py:
# tests/conftest.py
@pytest.fixture
def sample_scan_config():
    return {
        "target": "https://example.com",
        "mode": "quick",
        "timeout": 30,
    }
Don’t just test the happy path:
def test_agent_handles_network_timeout():
    with pytest.raises(TimeoutError):
        agent.connect(timeout=0.001)
Each test should be independent:
# Use setup/teardown or fixtures
@pytest.fixture
def clean_database():
    db = Database()
    yield db
    db.clear()  # Cleanup after test
Mock HTTP calls, file I/O, and external services:
def test_api_integration(mocker):
    mock_response = mocker.Mock(status_code=200)
    mocker.patch('requests.post', return_value=mock_response)
    # Test code here

Cleanup

Remove test artifacts and cache files:
make clean
This removes:
  • __pycache__ directories
  • .pytest_cache
  • .mypy_cache
  • .ruff_cache
  • htmlcov/ (coverage reports)
  • .coverage files
  • *.pyc files

Next Steps

  • Development Setup: Set up your development environment
  • Contributing: Learn how to contribute to Esprit
