Scribe uses pytest for all testing with comprehensive markers for categorization. Tests are organized alongside source code with clear separation between unit and integration tests.
CRITICAL: Always run tests from within the virtual environment. Running with system Python will fail.
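If you are unsure which interpreter is active, a quick check like the following can confirm the venv is in use (`in_virtualenv` is a hypothetical helper for illustration, not part of Scribe):

```python
import sys

def in_virtualenv() -> bool:
    """Return True when running under a virtual environment.

    Inside a venv, sys.prefix points at the venv directory while
    sys.base_prefix still points at the base interpreter install.
    """
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if __name__ == "__main__":
    print("venv active:", in_virtualenv())
```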
```bash
# Activate virtual environment FIRST
source venv/bin/activate

# Run all tests
pytest

# Run with verbose output
pytest -v

# Run with output capture disabled (see print statements)
pytest -s

# Run with coverage report
pytest --cov=pipeline --cov=api --cov-report=html
```
```bash
# Run only unit tests (fast)
pytest -m unit

# Run only integration tests
pytest -m integration

# Skip slow tests
pytest -m "not slow"

# Run unit tests that are not slow
pytest -m "unit and not slow"
```
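For reference, markers are applied with decorators. This sketch (hypothetical test names, not from the Scribe codebase) shows tests that the selections above would pick up:

```python
import pytest

@pytest.mark.unit
def test_placeholder_detected():
    # Selected by `pytest -m unit` and `pytest -m "unit and not slow"`.
    template = "Dear {{name}},"
    assert "{{name}}" in template

@pytest.mark.integration
@pytest.mark.slow
def test_external_service_roundtrip():
    # Selected by `pytest -m integration`; excluded by `pytest -m "not slow"`.
    assert True
```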
```python
import pytest
from uuid import uuid4

from pipeline.models.core import PipelineData


@pytest.fixture
def research_template():
    """Email template requiring research papers (RESEARCH type)"""
    return """
    Dear {{name}},

    I came across your groundbreaking work on {{research_area}}.
    Would love to discuss {{specific_application}}.

    Best regards,
    {{my_name}}
    """.strip()


@pytest.fixture
def pipeline_data(research_template):
    """Create PipelineData instance for testing"""
    return PipelineData(
        task_id=str(uuid4()),
        user_id=str(uuid4()),
        email_template=research_template,
        recipient_name="Dr. Jane Smith",
        recipient_interest="neural networks",
    )


@pytest.mark.unit
@pytest.mark.asyncio
async def test_with_fixture(pipeline_data):
    """Test using fixtures"""
    assert pipeline_data.recipient_name == "Dr. Jane Smith"
    assert len(pipeline_data.email_template) > 0
```
```bash
source venv/bin/activate
which pytest  # Should be venv/bin/pytest
```
2. Run Tests
```bash
# All tests
pytest

# Specific file
pytest pipeline/steps/template_parser/tests/test_template_parser.py

# Specific function
pytest pipeline/steps/template_parser/tests/test_template_parser.py::test_research_template_parsing

# Directory
pytest pipeline/steps/template_parser/tests/
```
```bash
# Show test names and status
pytest -v

# Show print statements and logs
pytest -s

# Show local variables on failure
pytest -v --showlocals

# Combine flags
pytest -v -s --showlocals
```
The project uses a comprehensive pytest configuration:
```ini
[pytest]
# Test discovery paths
testpaths = pipeline api scripts

# Python files/directories to search for tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*

# Async configuration
asyncio_mode = auto
asyncio_default_fixture_loop_scope = function

# Output and reporting
addopts =
    -ra                 # Show summary of all test outcomes
    -v                  # Verbose output
    --showlocals        # Show local variables in tracebacks
    --strict-markers    # Fail on unknown markers
    --capture=no        # Better async support
    --disable-warnings  # Reduce noise

# Markers
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests requiring external services
    unit: marks tests as unit tests (fast, no external dependencies)
    asyncio: marks async tests
```
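One practical effect of `asyncio_mode = auto` is that plain `async def` tests are collected and awaited without an explicit `@pytest.mark.asyncio` decorator. A minimal sketch (hypothetical test, assuming pytest-asyncio is installed):

```python
import asyncio

# With asyncio_mode = auto, pytest-asyncio runs this test
# directly; no marker or decorator is required.
async def test_async_step_completes():
    await asyncio.sleep(0)
    assert True
```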
```bash
# Show local variables on failure
pytest -v --showlocals

# Drop into debugger on failure
pytest --pdb

# Show full diff for assertion failures
pytest -vv

# Only run failed tests from last run
pytest --lf

# Run failed tests first, then others
pytest --ff
```