Test Environment
Tests must be executed inside the Docker development environment. Running tests directly on the host machine will fail due to missing dependencies and OpenSearch Dashboards infrastructure.

Prerequisites
Before running tests:
- Development environment configured (see Setup)
- Docker development environment running (see Building)
- Plugin dependencies installed
Starting the Test Environment
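The exact startup procedure is described in Building; as a rough sketch, assuming the environment is defined with Docker Compose (the file name here is purely illustrative):

```shell
# Illustrative only -- see Building for the actual compose file or script names.
docker compose -f dev.yml up -d

# Confirm the containers are running before attempting any tests.
docker ps
```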
Start the Docker development environment (see Building for details).

Accessing the Test Container
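Attaching a shell might look like the following; the container name is an assumption, so check `docker ps` for the real one:

```shell
# Open an interactive shell inside the Dashboards container.
# "osd" is a placeholder container name -- look yours up with `docker ps`.
docker exec -it osd /bin/bash
```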
Attach a shell to the OpenSearch Dashboards container; all test commands in this guide run from inside it.

Running Unit Tests (Jest)
Jest is the primary testing framework for unit tests. Each plugin contains its own test suite.

Main Plugin Tests
Run the main plugin's suite from inside the container.

Wazuh Core Tests
Wazuh Check Updates Tests
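All three suites are invoked the same way. A sketch, assuming yarn, the `test:jest` script from each plugin's package.json, and the standard plugin directory layout:

```shell
# Paths assume the standard layout under the OpenSearch Dashboards plugins/ directory.
cd plugins/main && yarn test:jest            # main plugin
cd ../wazuh-core && yarn test:jest           # wazuh-core plugin
cd ../wazuh-check-updates && yarn test:jest  # wazuh-check-updates plugin
```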
Test Output
Jest executes all .test.ts and .test.tsx files in the plugin directory and displays:
- Test results (pass/fail)
- Code coverage information
- Execution time
- Failed assertion details
Running Specific Tests
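Jest's own filtering flags pass through the `test:jest` script; for example (the pattern and path below are illustrative):

```shell
# Run only test files whose path matches a pattern:
yarn test:jest public/components/common

# Run only tests whose name matches, using Jest's -t (--testNamePattern) flag:
yarn test:jest -t "should render"
```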
Jest accepts a test-name pattern or a file path to limit which tests run.

Watch Mode
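Watch mode uses Jest's `--watch` flag through the `test:jest` script (a sketch):

```shell
# Rerun tests related to changed files on every save:
yarn test:jest --watch

# Or rerun the entire suite on every change:
yarn test:jest --watchAll
```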
Watch mode reruns affected tests whenever files change, which is convenient during development.

Updating Snapshots
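Jest's `-u` (`--updateSnapshot`) flag rewrites the stored snapshots, for example:

```shell
# Update all outdated snapshots:
yarn test:jest -u

# Update snapshots only for tests matching a name pattern:
yarn test:jest -u -t "MyComponent"
```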
When component output changes intentionally, review the reported diffs and update the stored snapshots.

Test Coverage
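Coverage is collected with Jest's `--coverage` flag, e.g.:

```shell
# Run the suite while collecting code coverage:
yarn test:jest --coverage
```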
Coverage reports break results down into four metrics:
- Statements: Percentage of executed statements
- Branches: Percentage of executed conditional branches
- Functions: Percentage of called functions
- Lines: Percentage of executed lines
Coverage reports are generated in the coverage/ directory. Open coverage/lcov-report/index.html in a browser for a detailed visualization.
Test Types
Unit Tests
Test individual functions, components, and modules in isolation.
Location: *.test.ts and *.test.tsx files throughout the codebase
Framework: Jest
Examples:
- Component rendering tests
- Utility function tests
- Redux reducer tests
- API client tests
Integration Tests
Test interactions between multiple components or modules.
Location: Same as unit tests, but exercising component integration
Framework: Jest with additional mocking

End-to-End Tests
Full application tests using real browser automation.
Location: plugins/main/test/cypress/
Framework: Cypress
Cypress tests require additional setup and infrastructure not included in the standard Docker development environment. These tests are typically run in CI/CD pipelines.
Writing Tests
Test File Structure
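A minimal sketch of the conventional Jest layout; the file and the `formatBytes` module under test are hypothetical:

```typescript
// my-feature.test.ts -- illustrative structure, not a real file in the repo.
import { formatBytes } from './my-feature'; // hypothetical module under test

describe('formatBytes', () => {
  beforeEach(() => {
    // Reset shared state or mocks before every test.
    jest.clearAllMocks();
  });

  it('should format zero bytes when given 0', () => {
    expect(formatBytes(0)).toBe('0 B');
  });

  it('should use binary units when size exceeds 1024', () => {
    expect(formatBytes(2048)).toBe('2 KB');
  });
});
```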
Test files group related cases in describe blocks, with one behavior asserted per it block.

Component Testing
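With React Testing Library, assertions target what the user sees rather than internal state. A hedged sketch; the `HealthBadge` component and its props are hypothetical:

```typescript
// health-badge.test.tsx -- illustrative example, not a real file in the repo.
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { HealthBadge } from './health-badge'; // hypothetical component

describe('HealthBadge', () => {
  it('should show the status label when rendered', () => {
    render(<HealthBadge status="green" />);
    expect(screen.getByText('green')).toBeInTheDocument();
  });

  it('should call onClick when the badge is clicked', () => {
    const onClick = jest.fn();
    render(<HealthBadge status="green" onClick={onClick} />);
    fireEvent.click(screen.getByRole('button'));
    expect(onClick).toHaveBeenCalledTimes(1);
  });
});
```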
Test React components with React Testing Library, asserting on rendered output rather than implementation details.

Mocking
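A typical Jest mocking pattern; the module path and `fetchAgents` function are hypothetical. Note that `jest.mock` calls are hoisted above imports, so the import receives the mocked module:

```typescript
// Illustrative mocking pattern (module path and function are hypothetical).
import { fetchAgents } from './services/api-client';

// Replace the real module with a mock implementation.
jest.mock('./services/api-client', () => ({
  fetchAgents: jest.fn().mockResolvedValue([{ id: '001' }]),
}));

describe('agent loading', () => {
  it('should use the mocked API client', async () => {
    const agents = await fetchAgents();
    expect(agents).toEqual([{ id: '001' }]);
    expect(fetchAgents as jest.Mock).toHaveBeenCalledTimes(1);
  });
});
```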
Mock external dependencies so tests stay fast and isolated.

Snapshot Testing
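A snapshot test in sketch form; the `AgentSummary` component is hypothetical:

```typescript
// Illustrative snapshot test (component is hypothetical).
import React from 'react';
import { render } from '@testing-library/react';
import { AgentSummary } from './agent-summary'; // hypothetical component

it('should match the stored snapshot', () => {
  const { container } = render(<AgentSummary name="agent-001" />);
  // The first run writes a file under __snapshots__/;
  // subsequent runs compare the rendered output against it.
  expect(container.firstChild).toMatchSnapshot();
});
```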
Snapshot tests capture a component's rendered output and fail when it changes unexpectedly.

Test Scripts Reference
Available test scripts in package.json:
| Script | Description |
|---|---|
| test:jest | Run Jest unit tests |
| test:jest:runner | Run tests with custom runner |
| test:server | Run server-side tests |
| test:browser | Run browser-based tests |
| test:ui:runner | Run UI functional tests |
Continuous Integration
The repository includes GitHub Actions workflows for automated testing:
- Pull Request Checks: Run on every pull request
- Branch Protection: Tests must pass before merging
- Coverage Reports: Uploaded to coverage tracking services
Workflow definitions are located in the .github/workflows/ directory.
Common Test Warnings
Some warnings during test execution are expected and do not indicate failures:
- Browserslist warnings: “caniuse-lite is outdated” is informational only
- Prop validation warnings: May occur in test environments
- Console messages: Some tests intentionally trigger console output
Troubleshooting
Tests Fail on Host Machine
Problem: Tests fail when run directly on the host machine.
Solution: Tests must run inside the Docker container; attach a shell first (see Accessing the Test Container).

Missing setup_node_env
Problem: Error about a missing setup_node_env script.
Solution: This script is provided by OpenSearch Dashboards. Run tests inside the Docker container where OpenSearch Dashboards is installed.
Out of Memory Errors
Problem: Tests fail with heap out-of-memory errors.
Solution: Increase the Node.js heap size, for example by running tests with NODE_OPTIONS="--max-old-space-size=4096".

Snapshot Mismatches After Updates
Problem: Snapshot tests fail after a dependency update.
Solution: Review the reported diffs, then update the snapshots if the changes are intentional (see Updating Snapshots).

Port Already in Use
Problem: Cannot start the test environment due to a port conflict.
Solution: Stop other containers or processes bound to port 5601 (for example, find them with lsof -i :5601 or docker ps).

Best Practices
Test Naming
- Use descriptive test names that explain what is being tested
- Follow pattern: “should [expected behavior] when [condition]”
- Group related tests using describe blocks
Test Independence
- Each test should be independent and isolated
- Avoid test order dependencies
- Clean up after tests in afterEach hooks
Mocking Strategy
- Mock external dependencies (API calls, file system, etc.)
- Keep mocks simple and focused
- Place shared mocks in __mocks__/ directories
Coverage Goals
- Aim for high coverage on critical paths
- Focus on business logic and user interactions
- Don’t chase 100% coverage at the expense of test quality
Next Steps
- Review contributing guidelines
- Explore test files in the repository for examples
- Set up CI/CD integration for your fork