Overview
Continuous Integration ensures your tests run automatically on every code change. This project uses GitHub Actions to run Playwright tests on push and pull requests.
GitHub Actions Workflow
The project includes a production-ready GitHub Actions workflow at .github/workflows/playwright.yml.
Workflow Configuration
.github/workflows/playwright.yml

```yaml
name: CI - Run login test

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]
  workflow_dispatch:

permissions:
  contents: read

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then
            python -m pip install -r requirements.txt
          fi

      - name: Install Playwright browsers
        run: |
          python -m playwright install --with-deps

      - name: Run tests
        run: |
          mkdir -p test-results
          python -m pytest tests/test_login.py \
            --junitxml=test-results/junit.xml \
            --html=test-results/report.html --self-contained-html \
            --maxfail=1 -q --disable-warnings

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: test-results/
          retention-days: 7
```
Key Features Explained
```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```
This configuration automatically cancels previous workflow runs when new commits are pushed, saving CI minutes and providing faster feedback.
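If you would rather let runs on main finish while still cancelling superseded pull-request runs, `cancel-in-progress` also accepts an expression. A minimal variation:

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  # Only cancel superseded runs for pull requests; pushes to main run to completion
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```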
Conditional Browser Installation
```yaml
- name: Install Playwright browsers
  run: |
    # find_spec returns None when playwright is missing; the exit code drives the if
    if python -c "import importlib.util, sys; sys.exit(0 if importlib.util.find_spec('playwright') else 1)"; then
      python -m playwright install --with-deps
    fi
```
The workflow checks if Playwright is installed before attempting to install browsers, preventing unnecessary failures.
```yaml
- name: Upload test results
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: test-results
    path: test-results/
    retention-days: 7
```
Test results are uploaded as artifacts regardless of test outcome, allowing you to review HTML reports and JUnit XML files.
Setting Up Your CI Pipeline
Create the workflow file
Copy the workflow file to .github/workflows/playwright.yml in your repository.
Configure Python version
The workflow uses Python 3.11. Update the python-version if you need a different version:

```yaml
- name: Set up Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.11"  # Change to your preferred version
```
Customize test execution
Modify the pytest command to run your specific tests:

```shell
# Run all tests
python -m pytest tests/

# Run a specific test file
python -m pytest tests/test_login.py

# Run with parallel execution (requires pytest-xdist)
python -m pytest tests/ -n auto
```
Push and verify
Commit the workflow file and push to trigger the CI pipeline. Check the Actions tab in GitHub to view results.
The --with-deps flag installs system dependencies required for browsers on Linux. This is essential for CI environments.
Pytest Reporting Options
The workflow generates multiple report formats:
JUnit XML
```shell
python -m pytest tests/ --junitxml=test-results/junit.xml
```
Used for CI integration and test result parsing by GitHub Actions.
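As an illustration of why the JUnit format is convenient for tooling, the report can be summarized with nothing but the standard library. This is a sketch, not part of the project: the `SAMPLE` document below is a hypothetical example of what pytest's `--junitxml` output looks like, and `summarize` is an illustrative helper.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of a pytest --junitxml report (pytest wraps its
# single <testsuite> element in a <testsuites> root).
SAMPLE = b"""<?xml version="1.0" encoding="utf-8"?>
<testsuites>
  <testsuite name="pytest" tests="3" failures="1" errors="0" skipped="1" time="2.41">
    <testcase classname="tests.test_login" name="test_valid_login" time="1.2"/>
    <testcase classname="tests.test_login" name="test_invalid_login" time="0.9">
      <failure message="AssertionError">assert False</failure>
    </testcase>
    <testcase classname="tests.test_login" name="test_locked_account" time="0.3">
      <skipped message="not implemented"/>
    </testcase>
  </testsuite>
</testsuites>"""

def summarize(junit_xml: bytes) -> dict:
    """Return total/failed/skipped counts from a JUnit XML report."""
    root = ET.fromstring(junit_xml)
    # Unwrap the <testsuites> container if present
    suite = root.find("testsuite") if root.tag == "testsuites" else root
    return {
        "tests": int(suite.get("tests", 0)),
        "failures": int(suite.get("failures", 0)),
        "skipped": int(suite.get("skipped", 0)),
    }

print(summarize(SAMPLE))  # {'tests': 3, 'failures': 1, 'skipped': 1}
```

GitHub Actions integrations that publish test summaries parse the same attributes.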
HTML Reports
```shell
python -m pytest tests/ \
  --html=test-results/report.html \
  --self-contained-html
```
Generates a self-contained HTML report with test details, screenshots, and traces. Requires the pytest-html plugin.
Custom Options
```shell
python -m pytest tests/ \
  --maxfail=1 \
  -q \
  --disable-warnings
```

- `--maxfail=1` stops after the first failure
- `-q` gives quiet output (use `-v` instead for verbose output)
- `--disable-warnings` hides the warnings summary
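Rather than repeating these flags on every invocation, pytest can read them from an `addopts` entry in its configuration file. A minimal sketch, assuming a `pytest.ini` at the repository root:

```ini
[pytest]
addopts = --maxfail=1 -q --disable-warnings
```

Flags passed on the command line are appended after `addopts`, so one-off overrides still work.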
Advanced CI Configurations
Matrix Testing
Run tests across multiple Python versions and browsers:
```yaml
jobs:
  tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
        browser: ["chromium", "firefox", "webkit"]
    steps:
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Run tests
        run: |
          python -m pytest tests/ --browser=${{ matrix.browser }}
```
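A few optional refinements to the matrix, sketched here rather than prescribed: `fail-fast: false` lets the remaining combinations finish when one fails, `exclude` drops specific pairs (the pair below is purely illustrative), and `playwright install` accepts a browser name so each matrix leg installs only what it needs:

```yaml
strategy:
  fail-fast: false
  matrix:
    python-version: ["3.10", "3.11", "3.12"]
    browser: ["chromium", "firefox", "webkit"]
    exclude:
      - python-version: "3.10"   # illustrative: skip one combination
        browser: "webkit"
steps:
  - name: Install Playwright browsers
    run: |
      # Install only this leg's browser (plus its system dependencies)
      python -m playwright install --with-deps ${{ matrix.browser }}
```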
Parallel Test Execution
Speed up test execution with pytest-xdist:
```yaml
- name: Install dependencies
  run: |
    pip install pytest-xdist
    pip install -r requirements.txt

- name: Run tests in parallel
  run: |
    python -m pytest tests/ -n auto
```
Environment Secrets
For tests requiring credentials (see conftest.py:22-30):
Add secrets to GitHub
Go to Settings → Secrets and variables → Actions → New repository secret and add the credentials your tests read, e.g. USERNAME and PASSWORD.
Reference in workflow
```yaml
- name: Run tests
  env:
    USERNAME: ${{ secrets.USERNAME }}
    PASSWORD: ${{ secrets.PASSWORD }}
  run: |
    python -m pytest tests/
```
Never commit .env files or credentials to version control. Always use CI secrets for sensitive data.
Viewing Test Results
In GitHub Actions
1. Navigate to the Actions tab in your repository
2. Click on a workflow run
3. View the test summary in the run details
4. Download artifacts to access HTML reports
Downloading Artifacts
```shell
# Using the GitHub CLI
gh run download <run-id> -n test-results

# Or download from the Actions UI
```
Troubleshooting
Browser installation fails
Ensure you’re using the --with-deps flag:

```shell
python -m playwright install --with-deps
```

This installs the required system dependencies on Linux.
Tests time out
Increase timeout values in pytest.ini or use the --timeout flag (provided by the pytest-timeout plugin):

```shell
python -m pytest tests/ --timeout=300
```
Flaky tests
Use pytest-rerunfailures or implement custom retry logic:

```shell
pip install pytest-rerunfailures
python -m pytest tests/ --reruns 3
```
Best Practices
Fast Feedback Keep test suites fast. Run critical tests first with --maxfail=1 to fail fast.
Artifact Retention Set appropriate retention days (default: 7) to balance storage costs and debugging needs.
Workflow Triggers Use workflow_dispatch to allow manual workflow runs for debugging.
Concurrency Cancel in-progress runs to save CI minutes and get faster feedback on new commits.
Next Steps
Best Practices Learn testing best practices and patterns
Locator Strategies Master reliable element location techniques