Overview

This guide compiles best practices derived from analyzing the actual codebase structure, test patterns, and implementation decisions in the project.

Project Architecture

The project follows a well-organized structure:
source/
├── .github/workflows/     # CI/CD automation
├── pages/                 # Page Object Models
│   ├── login.py
│   ├── cart_page.py
│   └── checkout.py
├── tests/                 # Test suites
│   ├── conftest.py       # Fixtures and configuration
│   ├── test_login.py     # Login scenarios
│   ├── test_functionalities.py  # Feature tests
│   └── test_assertions.py       # Assertion examples
├── testData/             # Test data management
│   └── users.json
├── requirements.txt      # Dependencies
└── notes.txt            # Team guidelines
This separation of concerns makes the codebase scalable and maintainable.

Page Object Model (POM)

The project implements POM consistently across all page objects.

Structure Pattern

from playwright.sync_api import Page

class PageName:
    def __init__(self, page: Page):
        self.page = page
        # 1. Define all locators in __init__
        self.element1 = page.locator('[data-test="element1"]')
        self.element2 = page.locator('#element2')
    
    def navigate(self):
        # 2. Navigation method
        self.page.goto("URL")
    
    def action_method(self):
        # 3. Action methods
        self.element1.click()
    
    def get_data(self) -> str:
        # 4. Data retrieval methods
        return self.element2.text_content()

POM Principles

Each page object represents one page or component:
  • LoginPage - handles login page interactions
  • CartPage - manages cart operations
  • CheckoutPage - checkout process
# Good - focused responsibility
class CartPage:
    def add_product(self, product_name: str):
        ...
    
    def remove_product(self, product_name: str):
        ...

# Avoid - mixed responsibilities
class CartPage:
    def login(self):  # Login is not cart's responsibility
        ...
Hide implementation details from tests:
# test_functionalities.py:6-9
def test_successful_login(page):
    login_page = LoginPage(page)
    login_page.navigate()
    login_page.login("standard_user", "secret_sauce")
    # Test doesn't know about locators
The test doesn’t care about data-test attributes or selectors.
# Pattern allows for fluent interfaces
class LoginPage:
    def navigate(self) -> 'LoginPage':
        self.page.goto(URL)
        return self
    
    def login(self, username: str, password: str) -> 'LoginPage':
        self.username_input.fill(username)
        self.password_input.fill(password)
        self.login_button.click()
        return self

# Enable chaining (each method returns self)
login_page.navigate().login("user", "pass")
# From cart_page.py:14, 23
def add_product(self, product_name: str):
    ...

def get_cart_count(self) -> int:
    if self.cart_badge.count() == 0:
        return 0
    return int(self.cart_badge.text_content())
Type hints improve IDE support and catch errors early.

Test Organization

Test File Structure

The project organizes tests by functionality:
  • test_login.py - Login-specific scenarios (valid/invalid/logout/multiple users)
  • test_functionalities.py - End-to-end user flows using page objects
  • test_assertions.py - Assertion patterns and examples
  • test_API.py - API testing (separate from UI)

Naming Conventions

# Clear, descriptive test names
def test_valid_login(page: Page):           # test_login.py:7
def test_invalid_login(page: Page):         # test_login.py:22
def test_logout_session(page: Page):        # test_login.py:37
def test_add_remove_products(page):         # test_functionalities.py:13
def test_checkout_process(page):            # test_functionalities.py:37
Test names should describe what they test, not how they test it.

Fixtures and Configuration

conftest.py Structure

The project’s conftest.py demonstrates proper fixture usage:
import json
from pathlib import Path
import pytest

@pytest.fixture(scope="session")
def users():
    """
    Loads testData/users.json and returns a dict.
    Accessible in any test as the 'users' fixture.
    """
    root = Path(__file__).parent.parent
    data_path = root / "testData" / "users.json"
    with data_path.open(encoding="utf-8") as f:
        return json.load(f)

Fixture Scope

# Session scope - created once per test session
@pytest.fixture(scope="session")
def users():
    return json.load(...)  # Loaded once, reused across all tests

# Function scope (default) - created for each test
@pytest.fixture
def cart_page(page):
    return CartPage(page)  # New instance per test
Use scope="session" for expensive operations like loading test data. Use function scope for test isolation.
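The payoff of session scope can be sketched with a stdlib analogy (an illustration of the caching behavior, not pytest internals — `lru_cache` stands in for pytest's per-session fixture cache):

```python
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)  # stands in for pytest's scope="session" caching
def load_users():
    calls["n"] += 1  # count how often the "expensive" load actually runs
    return {"validUser": {"username": "standard_user"}}

load_users()
load_users()
load_users()
assert calls["n"] == 1  # computed once, reused — like a session fixture
```

A function-scoped fixture is the opposite: rebuilt per test, which is exactly what you want for objects that hold page state.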

Test Data Management

The project uses multiple approaches:

1. JSON Files

testData/users.json
{
  "validUser": { 
    "username": "standard_user", 
    "password": "secret_sauce" 
  },
  "invalidUser": { 
    "username": "locked_out_user", 
    "password": "secret_sauce" 
  }
}
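The `users` fixture returns this file as a plain dict. A quick stdlib check of the shape (JSON inlined here so the snippet is self-contained):

```python
import json

# The users.json content from above, inlined for illustration.
USERS_JSON = """
{
  "validUser":   {"username": "standard_user",   "password": "secret_sauce"},
  "invalidUser": {"username": "locked_out_user", "password": "secret_sauce"}
}
"""

users = json.loads(USERS_JSON)
assert users["validUser"]["username"] == "standard_user"

valid = users["validUser"]
assert list(valid) == ["username", "password"]
# Because the keys match the login() parameter names, tests could also
# unpack directly: login_page.login(**valid)
```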
Usage
# test_functionalities.py:59-64
def test_with_testdata(page, users):
    login_page = LoginPage(page)
    login_page.navigate()
    login_page.login(
        users["validUser"]["username"], 
        users["validUser"]["password"]
    )
    assert page.get_by_test_id("title").is_visible()

2. Environment Variables

# test_functionalities.py:66-72
def test_login_with_env_vars(page, creds):
    login_page = LoginPage(page)
    login_page.navigate()
    login_page.login(creds["valid_user"], creds["pwd"])
    assert page.get_by_test_id("title").is_visible()
Never commit .env files to version control. Add to .gitignore:
.env
*.env
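A minimal sketch of how the `creds` fixture might assemble these values — the key names `valid_user`/`pwd` come from the test above, but the wiring here is an assumption. It is written as a plain function so it runs standalone; in conftest.py it would be decorated with `@pytest.fixture(scope="session")` after python-dotenv's `load_dotenv()` has populated the environment:

```python
import os

def build_creds() -> dict:
    # load_dotenv() from python-dotenv would fill os.environ from a local
    # .env file before this runs; here we read the environment directly.
    user = os.getenv("USERNAME")
    pwd = os.getenv("PASSWORD")
    if not user or not pwd:
        # Fail fast with a descriptive error (mirrors conftest.py:29-30).
        raise RuntimeError("Missing USERNAME/PASSWORD in environment.")
    return {"valid_user": user, "pwd": pwd}
```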

3. Parametrization

# test_login.py:58-76
@pytest.mark.parametrize("username, password", [
    ("error_user", "secret_sauce"),
    ("performance_glitch_user", "secret_sauce"),
    ("visual_user", "secret_sauce")
])
def test_multiple_users(page: Page, username, password):
    page.goto(URL)
    username_input = page.get_by_placeholder("Username")
    username_input.fill(username)
    password_input = page.get_by_placeholder("Password")
    password_input.fill(password)
    login_button = page.locator("input#login-button")
    login_button.click()
    assert page.get_by_test_id("title").is_visible()
Parametrization runs the same test with different inputs, reducing code duplication.
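When many cases share a value (here, the password), the case list can be built programmatically and given readable ids. A sketch — the commented decorator shows pytest's real `ids` keyword, left commented so the snippet runs standalone:

```python
# Usernames from the parametrize example above; password is shared.
usernames = ["error_user", "performance_glitch_user", "visual_user"]
cases = [(u, "secret_sauce") for u in usernames]

# @pytest.mark.parametrize("username, password", cases, ids=usernames)
# def test_multiple_users(page: Page, username, password): ...

assert len(cases) == 3
assert cases[1] == ("performance_glitch_user", "secret_sauce")
```

With `ids`, test reports show `test_multiple_users[visual_user]` instead of the full parameter tuple.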

Assertions

The project uses two assertion styles:

Standard Python Assertions

# test_login.py:20
assert page.url == login_dashboard

# test_functionalities.py:11
assert page.get_by_test_id("title").is_visible()

# test_login.py:55
assert login_button.is_visible()

Playwright expect() Assertions

from playwright.sync_api import expect

# test_functionalities.py:24
expect(cart_page.cart_badge).to_contain_text("2")

# test_functionalities.py:34
expect(cart_page.cart_badge.locator(".shopping_cart_badge")).to_be_hidden()

# test_assertions.py:14-15
expect(page.locator('[data-test="add-to-cart-sauce-labs-backpack"]')).to_have_css("color", "rgb(19, 35, 34)")
expect(page.locator('[data-test="add-to-cart-sauce-labs-backpack"]')).to_have_css("background-color", "rgb(255, 255, 255)")

Why expect() is Better

Auto-retry

expect() retries until condition is met or timeout, reducing flakiness.
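The retry behavior can be sketched in plain Python (an illustration of the idea, not Playwright's actual implementation):

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.1):
    # Poll predicate() until it is truthy or the timeout elapses.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())  # one final check at the deadline

# A condition that only becomes true on the third check, like a slowly
# rendering element — a fixed assert would fail; polling succeeds.
state = {"ready": False, "checks": 0}

def becomes_ready():
    state["checks"] += 1
    if state["checks"] >= 3:
        state["ready"] = True
    return state["ready"]

assert wait_until(becomes_ready, timeout=1.0, interval=0.01)
```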

Better Errors

Clear error messages showing expected vs actual values.

Web-first

Designed for async web interactions with built-in waiting.

Rich Matchers

Many matchers: to_be_visible, to_have_text, to_have_css, etc.

Handling Dynamic Elements

Approach from cart_page.py

# cart_page.py:14-21
def add_product(self, product_name: str):
    add_button = self.page.locator(f"#add-to-cart-{product_name}")
    add_button.click()

def remove_product(self, product_name: str):
    remove_button = self.page.locator(f"#remove-{product_name}")
    remove_button.click()
This pattern:
  • Uses f-strings for dynamic IDs
  • Keeps locator logic in page object
  • Makes tests readable: cart_page.add_product("sauce-labs-backpack")

Guard Against Missing Elements

# cart_page.py:23-27
def get_cart_count(self) -> int:
    if self.cart_badge.count() == 0:
        return 0
    return int(self.cart_badge.text_content())
Always check element existence before accessing properties.
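The same guard, expressed over the badge text alone (a hypothetical helper for illustration — note that `text_content()` can also return None, which this handles too):

```python
from typing import Optional

def badge_to_count(badge_text: Optional[str]) -> int:
    # No badge rendered (element missing, None, or empty) means empty cart.
    if not badge_text or not badge_text.strip():
        return 0
    return int(badge_text)

assert badge_to_count(None) == 0
assert badge_to_count("") == 0
assert badge_to_count("2") == 2
```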

Test Independence

Each test should be independent and self-contained:
# Good - each test sets up its own state
def test_add_remove_products(page):
    # Login
    login_page = LoginPage(page)
    login_page.navigate()
    login_page.login("standard_user", "secret_sauce")
    
    # Test cart operations
    cart_page = CartPage(page)
    cart_page.add_product("sauce-labs-backpack")
    # ...

def test_checkout_process(page):
    # Login again (doesn't depend on previous test)
    login_page = LoginPage(page)
    login_page.navigate()
    login_page.login("standard_user", "secret_sauce")
    # ...
Tests should never depend on execution order. Each test should set up its own preconditions.

Error Handling

Clear Error Messages

# conftest.py:29-30
if not user or not pwd:
    raise RuntimeError("Missing USERNAME/PASSWORD in environment.")
Fail fast with descriptive errors to help debugging.

Verify Negative Scenarios

# test_login.py:22-35
def test_invalid_login(page: Page):
    page.goto(URL)
    username_input = page.get_by_placeholder("Username")
    username_input.fill("standard_user")
    password_input = page.get_by_placeholder("Password")
    password_input.fill("secret")  # Wrong password
    login_button = page.locator("input#login-button")
    login_button.click()
    
    # Verify error message appears
    error_message = page.get_by_text("Epic sadface: Username and password do not match")
    assert error_message.is_visible()
Always test both success and failure paths.

Dependencies Management

The project uses requirements.txt:
requirements.txt
pytest
pytest-html
pytest-playwright
playwright
python-dotenv

Installation Steps

1. Install Python packages:

pip install -r requirements.txt

2. Install Playwright browsers:

playwright install
# Or with system dependencies
playwright install --with-deps

Pin Versions in Production

# Development - flexible versions
pytest
playwright

# Production - pinned versions
pytest==8.0.0
playwright==1.40.0

CI/CD Best Practices

From .github/workflows/playwright.yml:

Fail Fast

# playwright.yml:59
--maxfail=1  # Stop after first failure
Saves CI time by not running remaining tests after first failure.

Artifact Collection

# playwright.yml:61-67
- name: Upload test results
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: test-results
    path: test-results/
    retention-days: 7
Always upload artifacts, even on failure, for debugging.

Concurrency Control

# playwright.yml:13-15
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
Cancel old runs when new commits are pushed.

Code Quality

Import Organization

# Standard library imports first
import os
import json
from pathlib import Path

# Third-party imports
import pytest
from dotenv import load_dotenv

# Playwright imports
from playwright.sync_api import Page, expect

# Local imports
from pages.login import LoginPage
from pages.cart_page import CartPage

Docstrings

# conftest.py:8-13
@pytest.fixture(scope="session")
def users():
    """
    Loads testData/users.json and returns a dict.
    Accessible in any test as the 'users' fixture.
    """
    ...
Document fixtures and complex methods.

Type Hints

# Improves readability and catches errors
def login(self, username: str, password: str) -> None:
    ...

def get_cart_count(self) -> int:
    ...

def test_valid_login(page: Page):
    ...

Common Pitfalls to Avoid

Avoid hard-coded waits:
# Bad
import time
time.sleep(5)

# Good - Playwright auto-waits
page.locator("#button").click()
expect(element).to_be_visible()
Avoid raw locators in tests:
# Bad
page.locator('#username').fill("user")
page.locator('#username').clear()

# Good - use page objects
login_page.username_input.fill("user")
Avoid order-dependent tests:
# Bad - test2 depends on test1
def test1_create_user():
    create_user("testuser")

def test2_use_user():
    login("testuser")  # Fails if test1 doesn't run

# Good - each test is independent
def test_create_user():
    create_user("testuser")
    cleanup("testuser")

def test_login():
    create_user("testuser")  # Set up own data
    login("testuser")
    cleanup("testuser")
Never hard-code credentials:
# Bad - committed to repo
USERNAME = "admin"
PASSWORD = "secret123"

# Good - from environment
USERNAME = os.getenv("USERNAME")
PASSWORD = os.getenv("PASSWORD")

Testing Pyramid

The project demonstrates good test distribution:
        /\        API Tests (fast, focused)
       /  \       test_API.py
      /____\      
     /      \     Integration Tests (moderate)
    /        \    test_functionalities.py
   /          \   
  /____________\  Unit/Component Tests (many, fast)
                  test_login.py, test_assertions.py

Unit Tests

Fast, focused tests for individual components

Integration

Test multiple components working together

E2E

Full user flows from login to checkout

Performance Tips

Reuse Browser Context

# Use session-scoped fixtures for expensive setup
@pytest.fixture(scope="session")
def browser_context():
    # Created once per session
    pass

Parallel Execution

# Install pytest-xdist
pip install pytest-xdist

# Run tests in parallel
pytest tests/ -n auto

Selective Test Execution

# Run specific test file
pytest tests/test_login.py

# Run specific test
pytest tests/test_login.py::test_valid_login

# Run by marker
pytest -m smoke
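Custom markers like `smoke` should be registered so pytest doesn't warn about unknown marks — for example in a pytest.ini (a sketch; marker names beyond `smoke` are assumptions):

```ini
[pytest]
markers =
    smoke: quick sanity checks run on every push
    regression: full suite run nightly
```

Tests then opt in with `@pytest.mark.smoke` above the test function.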

Documentation

The project includes:
  • README.md - Project overview
  • notes.txt - Team guidelines (locator strategies)
  • Inline comments for complex logic
  • Docstrings for fixtures
Keep a notes.txt or TESTING.md with team conventions and decisions.

Summary Checklist

1. Structure

  • Separate page objects, tests, and test data
  • One page object per page/component
  • Logical test file organization
2. Page Objects

  • All locators in __init__
  • Action methods for interactions
  • Type hints for parameters and returns
  • No assertions in page objects
3. Tests

  • Descriptive test names
  • Independent tests (no execution order dependency)
  • Both positive and negative scenarios
  • Use expect() over standard assertions
4. Data Management

  • JSON files for test data
  • Environment variables for secrets
  • Parametrization for similar test cases
  • Never commit sensitive data
5. CI/CD

  • Automated test runs on push/PR
  • Artifact collection for debugging
  • Fail fast with --maxfail
  • Concurrency control to save resources

Next Steps

Locator Strategies

Master reliable element location techniques

CI/CD Integration

Automate your tests with GitHub Actions
