Overview
Test parametrization allows you to run the same test logic with different input values, reducing code duplication and increasing test coverage. This guide shows you how to use pytest’s @pytest.mark.parametrize decorator and fixture-based parametrization.
Why Parametrize Tests?
Reduce Duplication: Write test logic once, run it with multiple datasets
Increase Coverage: Test more scenarios without writing more code
Better Reporting: Each parameter set runs as a separate test case
Easy Maintenance: Update test logic in one place for all data variations
Basic Parametrization
Use @pytest.mark.parametrize to run a test with multiple parameter values:
test_login.py (Lines 58-76)
@pytest.mark.parametrize("username, password", [
    ("error_user", "secret_sauce"),
    ("performance_glitch_user", "secret_sauce"),
    ("visual_user", "secret_sauce")
])
def test_multiple_users(page: Page, username, password):
    page.goto(URL)
    username_input = page.get_by_placeholder("Username")
    username_input.fill(username)
    password_input = page.get_by_placeholder("Password")
    password_input.fill(password)
    login_button = page.locator("input#login-button")
    login_button.click()
    assert page.get_by_test_id("title").is_visible()
    assert page.url == login_dashboard
This single test function runs 3 times, once for each set of username/password parameters. Each run appears as a separate test in your test results.
Parametrization Syntax
1. Add the decorator: Place @pytest.mark.parametrize above your test function
2. Specify parameter names: The first argument is a string with comma-separated parameter names: "username, password"
3. Provide test data: The second argument is a list of tuples, each containing one set of values
4. Use parameters in the test: Add matching parameters to the test function signature and use them in your test logic
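The four steps above can be sketched with a small, self-contained example. The is_palindrome helper is hypothetical, standing in for whatever logic you want to test:

```python
import pytest

def is_palindrome(text: str) -> bool:
    # Hypothetical function under test: True if text reads
    # the same forwards and backwards, ignoring case and spaces.
    cleaned = text.lower().replace(" ", "")
    return cleaned == cleaned[::-1]

# Step 1: the decorator sits directly above the test function.
# Step 2: parameter names as one comma-separated string.
# Step 3: one tuple per test run.
@pytest.mark.parametrize("text, expected", [
    ("racecar", True),
    ("Never odd or even", True),
    ("pytest", False),
])
# Step 4: matching names appear in the test signature.
def test_is_palindrome(text, expected):
    assert is_palindrome(text) == expected
```

Running this file collects three tests, one per tuple, each passing or failing independently.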
Multiple Parameter Combinations
Two Parameters
@pytest.mark.parametrize("username, password", [
    ("error_user", "secret_sauce"),
    ("performance_glitch_user", "secret_sauce"),
    ("visual_user", "secret_sauce")
])
def test_multiple_users(page, username, password):
    # Test runs 3 times
    pass
Three Parameters
@pytest.mark.parametrize("username, password, expected_url", [
    ("standard_user", "secret_sauce", "/inventory.html"),
    ("problem_user", "secret_sauce", "/inventory.html"),
    ("performance_glitch_user", "secret_sauce", "/inventory.html")
])
def test_login_variations(page, username, password, expected_url):
    # Test runs 3 times, asserting the expected URL each time
    pass
Single Parameter
@pytest.mark.parametrize("username", [
    "error_user",
    "performance_glitch_user",
    "visual_user"
])
def test_user_exists(page, username):
    # Test runs 3 times
    # Note: single values don't need tuples
    pass
Parametrizing with Test IDs
Add custom test IDs to make test output more readable:
@pytest.mark.parametrize(
    "username, password",
    [
        ("error_user", "secret_sauce"),
        ("performance_glitch_user", "secret_sauce"),
        ("visual_user", "secret_sauce")
    ],
    ids=["error-user", "performance-user", "visual-user"]
)
def test_multiple_users(page, username, password):
    # Test output will show readable names:
    # test_multiple_users[error-user] PASSED
    # test_multiple_users[performance-user] PASSED
    # test_multiple_users[visual-user] PASSED
    pass
Custom IDs make it much easier to identify which parameter set failed when reading test reports.
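Besides a list, the ids argument also accepts a callable, which pytest applies to each parameter value to build the label. A sketch, where the labeling scheme (stripping the "_user" suffix) is our own assumption:

```python
import pytest

def make_id(value):
    # Called by pytest once per parameter value.
    # Hypothetical scheme: drop "_user" and use dashes.
    return str(value).replace("_user", "").replace("_", "-")

@pytest.mark.parametrize(
    "username, password",
    [
        ("error_user", "secret_sauce"),
        ("performance_glitch_user", "secret_sauce"),
    ],
    ids=make_id,
)
def test_multiple_users(username, password):
    assert password == "secret_sauce"
```

A callable scales better than a hand-maintained list when the parameter data changes often, since the labels are derived rather than duplicated.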
Using Fixtures for Test Data
Load test data from external files using pytest fixtures:
JSON File Fixture
import json
from pathlib import Path
import pytest

@pytest.fixture(scope="session")
def users():
    """
    Loads testData/users.json and returns a dict.
    Accessible in any test as the 'users' fixture.
    """
    root = Path(__file__).parent.parent
    data_path = root / "testData" / "users.json"
    with data_path.open(encoding="utf-8") as f:
        return json.load(f)
testData/users.json
{
  "validUser": {"username": "standard_user", "password": "secret_sauce"},
  "invalidUser": {"username": "locked_out_user", "password": "secret_sauce"}
}
Using the Fixture
test_functionalities.py (Lines 59-64)
def test_with_testdata(page, users):
    login_page = LoginPage(page)
    login_page.navigate()
    login_page.login(users["validUser"]["username"], users["validUser"]["password"])
    assert page.get_by_test_id("title").is_visible()
Why use fixture-based test data?
Centralized Data: All test data in one place
Easy Updates: Change data without modifying test code
Reusable: Same data across multiple tests
Format Flexibility: JSON, YAML, or CSV, choose what fits your needs
Version Control: Track test data changes separately
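Fixtures deliver data at test run time; you can also feed file data straight into @pytest.mark.parametrize at collection time. A sketch, with the users.json payload inlined so the example is self-contained (in a real suite you would json.load() it from testData/users.json):

```python
import json
import pytest

# Same shape as testData/users.json shown above, inlined for the sketch.
USERS_JSON = """
{
  "validUser":   {"username": "standard_user",   "password": "secret_sauce"},
  "invalidUser": {"username": "locked_out_user", "password": "secret_sauce"}
}
"""

def to_param_list(raw: str):
    # Turn each JSON entry into a pytest.param whose id is the entry key,
    # so test output reads test_each_user[validUser], [invalidUser], ...
    users = json.loads(raw)
    return [
        pytest.param(entry["username"], entry["password"], id=key)
        for key, entry in users.items()
    ]

@pytest.mark.parametrize("username, password", to_param_list(USERS_JSON))
def test_each_user(username, password):
    assert password == "secret_sauce"
```

The trade-off: parametrizing from the file gives one test per entry in the report, while a fixture gives one test that reads all entries.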
Environment Variables for Test Data
Use environment variables for sensitive data like credentials:
conftest.py (Lines 19-31)
import os
from pathlib import Path
import pytest
from dotenv import load_dotenv

load_dotenv(dotenv_path=Path(__file__).parent.parent / ".env")

@pytest.fixture(scope="session")
def creds():
    """
    Provides credentials from environment variables.
    Fails early with a clear message if missing.
    """
    user = os.getenv("USERNAME")
    pwd = os.getenv("PASSWORD")
    if not user or not pwd:
        raise RuntimeError("Missing USERNAME/PASSWORD in environment.")
    return {"valid_user": user, "pwd": pwd}
test_functionalities.py (Lines 66-72)
def test_login_with_env_vars(page, creds):
    login_page = LoginPage(page)
    login_page.navigate()
    login_page.login(creds["valid_user"], creds["pwd"])
    assert page.get_by_test_id("title").is_visible()
Never commit sensitive credentials to version control! Use .env files and add them to .gitignore.
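When the credential-reading logic itself needs a test, pytest's built-in monkeypatch fixture can set environment variables for one test only, with automatic cleanup. A sketch, where read_creds is a plain-function stand-in for the creds fixture above:

```python
import os

def read_creds():
    # Mirrors the creds fixture: read credentials from the
    # environment, failing early if either value is missing.
    user = os.getenv("USERNAME")
    pwd = os.getenv("PASSWORD")
    if not user or not pwd:
        raise RuntimeError("Missing USERNAME/PASSWORD in environment.")
    return {"valid_user": user, "pwd": pwd}

def test_read_creds(monkeypatch):
    # monkeypatch.setenv applies only to this test and is undone
    # afterwards, so the test never depends on a real .env file.
    monkeypatch.setenv("USERNAME", "standard_user")
    monkeypatch.setenv("PASSWORD", "secret_sauce")
    creds = read_creds()
    assert creds["valid_user"] == "standard_user"
    assert creds["pwd"] == "secret_sauce"
```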
Complex Parametrization Patterns
Testing Valid and Invalid Cases
@pytest.mark.parametrize(
    "username, password, should_succeed",
    [
        ("standard_user", "secret_sauce", True),
        ("locked_out_user", "secret_sauce", False),
        ("standard_user", "wrong_password", False),
        ("", "", False),
    ],
    ids=["valid-login", "locked-user", "wrong-password", "empty-credentials"]
)
def test_login_scenarios(page, username, password, should_succeed):
    page.goto("https://www.saucedemo.com/")
    page.get_by_placeholder("Username").fill(username)
    page.get_by_placeholder("Password").fill(password)
    page.locator("#login-button").click()
    if should_succeed:
        assert page.get_by_test_id("title").is_visible()
    else:
        assert page.locator(".error-message").is_visible()
Combining Multiple Parametrize Decorators
@pytest.mark.parametrize("username", ["standard_user", "problem_user"])
@pytest.mark.parametrize("product", ["backpack", "bike-light", "bolt-tshirt"])
def test_add_products(page, username, product):
    # This creates 2 × 3 = 6 test combinations
    # Each username tested with each product
    pass
Stacking decorators creates the Cartesian product of all parameter combinations. Use this carefully to avoid a test explosion!
Parametrizing Fixtures
Create fixtures that run with different configurations:
@pytest.fixture(params=[
    "chromium",
    "firefox",
    "webkit"
])
def browser_type(request):
    return request.param

def test_cross_browser(page, browser_type):
    # Test runs 3 times, once per browser
    print(f"Testing on {browser_type}")
    page.goto("https://example.com")
    assert page.title()
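Fixture params also work with ids, and the parametrization fans out to every test that requests the fixture. A sketch, where the viewport presets are hypothetical:

```python
import pytest

# Hypothetical viewport presets; any mapping of label -> value works.
VIEWPORTS = {"laptop": (1280, 720), "desktop": (1920, 1080), "mobile": (375, 667)}

@pytest.fixture(params=list(VIEWPORTS.values()), ids=list(VIEWPORTS.keys()))
def viewport(request):
    # request.param carries the current (width, height) tuple.
    width, height = request.param
    return {"width": width, "height": height}

def test_layout(viewport):
    # Collected as test_layout[laptop], [desktop], [mobile].
    assert viewport["width"] > 0 and viewport["height"] > 0
```

This is the key difference from @pytest.mark.parametrize: the mark multiplies one test, while a parametrized fixture multiplies every test that depends on it.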
Conditional Parametrization
Skip certain parameter combinations based on conditions:
import sys
import pytest

@pytest.mark.parametrize(
    "browser",
    [
        "chromium",
        pytest.param(
            "webkit",
            marks=pytest.mark.skipif(
                sys.platform == "linux",
                reason="WebKit unstable on Linux"
            )
        )
    ]
)
def test_browser_specific(page, browser):
    pass
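The same pytest.param mechanism accepts an xfail mark for combinations that are expected to fail, which keeps them in the run instead of silently skipping them. A sketch with a hypothetical divide helper:

```python
import pytest

def divide(a, b):
    # Hypothetical helper under test.
    return a / b

@pytest.mark.parametrize(
    "a, b, expected",
    [
        (10, 2, 5),
        (9, 3, 3),
        # Known-bad combination: xfail (not skip) so the report still
        # flags it if the expected ZeroDivisionError stops occurring.
        pytest.param(1, 0, None, marks=pytest.mark.xfail(raises=ZeroDivisionError)),
    ],
)
def test_divide(a, b, expected):
    assert divide(a, b) == expected
```

Prefer xfail over skip when the combination documents a known bug: an unexpected pass (XPASS) tells you the bug may have been fixed.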
Best Practices
Start Simple: Begin with basic parametrization before adding complexity
Use Descriptive IDs: Always provide custom test IDs for clarity in reports
External Data Files: Store large datasets in JSON/CSV files, not in decorators
Document Parameters: Add docstrings explaining what each parameter represents
1. Identify repeated test logic: Find tests that differ only in input values
2. Extract common code: Move shared logic into a single parametrized test
3. Define parameter sets: List all value combinations to test
4. Add test IDs: Make test output readable with custom IDs
5. Run and verify: Ensure each parameter set runs independently
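Applied to a set of duplicated tests, the migration steps look like this. The slugify function is a hypothetical example, standing in for your own code under test:

```python
import pytest

def slugify(title: str) -> str:
    # Hypothetical function under test.
    return title.strip().lower().replace(" ", "-")

# Before: three near-identical tests differing only in data.
#   def test_slug_simple():   assert slugify("Hello World") == "hello-world"
#   def test_slug_trimmed():  assert slugify("  Padded  ") == "padded"
#   def test_slug_lower():    assert slugify("MiXeD") == "mixed"

# After: one parametrized test with descriptive ids.
@pytest.mark.parametrize(
    "title, expected",
    [
        ("Hello World", "hello-world"),
        ("  Padded  ", "padded"),
        ("MiXeD", "mixed"),
    ],
    ids=["simple", "trimmed", "lower"],
)
def test_slugify(title, expected):
    assert slugify(title) == expected
```

The refactor keeps the same three test results in reports while leaving exactly one place to fix if the test logic changes.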
Data-Driven Testing Example
Complete example combining multiple techniques:
test_parametrized.py
import pytest
from pages.login import LoginPage

@pytest.mark.parametrize(
    "username, password, expected_result",
    [
        ("standard_user", "secret_sauce", "success"),
        ("locked_out_user", "secret_sauce", "locked"),
        ("standard_user", "wrong", "invalid"),
        ("", "secret_sauce", "invalid"),
    ],
    ids=["valid", "locked-user", "wrong-password", "empty-username"]
)
def test_login_scenarios(page, username, password, expected_result):
    """Test various login scenarios with different credentials."""
    login_page = LoginPage(page)
    login_page.navigate()
    login_page.login(username, password)

    if expected_result == "success":
        assert page.get_by_test_id("title").is_visible()
        assert "/inventory.html" in page.url
    elif expected_result == "locked":
        error = page.get_by_text("Sorry, this user has been locked out")
        assert error.is_visible()
    else:  # invalid
        error = page.get_by_text("do not match")
        assert error.is_visible()
Debugging Parametrized Tests
# Run specific parameter set
pytest test_login.py::test_multiple_users[error-user]
# Run all parameter sets for one test
pytest test_login.py::test_multiple_users
# Verbose output shows each parameter
pytest -v test_login.py::test_multiple_users
# See parameter values in output
pytest -v -s test_login.py::test_multiple_users
When a parametrized test fails, pytest shows which parameter set caused the failure, making debugging easier.
Next Steps
Fixtures: Deep dive into pytest fixtures
Test Data: Manage test data effectively
Best Practices: Learn testing best practices
CI/CD Integration: Run parametrized tests in CI/CD