MQTT Explorer uses a multi-layered testing strategy to ensure reliability and quality across frontend, backend, and platform layers.

Testing Overview

MQTT Explorer employs four types of tests:

Unit Tests

Frontend component tests and backend logic tests using Mocha + Chai

UI Tests

Automated browser tests with Playwright for end-to-end validation

LLM Tests

AI assistant proposal validation and integration tests

Integration Tests

MCP protocol tests and cross-component validation

Quick Start

Run All Tests

# Unit tests only (fast)
yarn test

# All tests including demo video
yarn test:all

Run Specific Test Suites

yarn test:app        # frontend unit tests
yarn test:mcp        # MCP integration tests (run yarn build first)
yarn test:demo-video # demo video generation (run yarn build first)

Unit Testing

Testing Framework

Stack:
  • Mocha - Test framework
  • Chai - Assertion library
  • React Testing Library - Component testing
  • JSDOM - DOM implementation for Node.js
MQTT Explorer uses Mocha for all unit tests: it is already established in the project, offers first-class async/await support, and allows flexible test organization.

Frontend Tests (app/)

Location: app/src/**/*.spec.tsx

Test Utilities

Use the generic test utilities in app/src/utils/spec/testUtils.tsx:
import { renderWithProviders } from '../../utils/spec/testUtils'
import { expect } from 'chai'
import { describe, it } from 'mocha'

describe('MyComponent', () => {
  it('should render correctly', () => {
    const { container } = renderWithProviders(<MyComponent />)
    expect(container).to.exist
  })
})

renderWithProviders Options

const { container } = renderWithProviders(<MyComponent />, { 
  withTheme: true 
})

Mock Data Helpers

import { createMockChartData } from '../../utils/spec/testUtils'

// Create 10 data points (default)
const data = createMockChartData()

// Create specific number of points
const data = createMockChartData(50)
The test utilities automatically mock ResizeObserver for components using react-resize-detector.
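
Conceptually, a mock-data helper like this produces deterministic {x, y} points. The following is a standalone sketch only; the real createMockChartData lives in testUtils.tsx and its exact point shape may differ:

```typescript
interface ChartPoint {
  x: number
  y: number
}

// Standalone sketch of a mock-chart-data helper; values are deterministic
// so tests stay reproducible across runs.
function mockChartData(count = 10): ChartPoint[] {
  return Array.from({ length: count }, (_, i) => ({
    x: i,
    y: Math.round(Math.sin(i / 2) * 100), // wave-shaped, bounded values
  }))
}

console.log(mockChartData().length) // 10
console.log(mockChartData(50).length) // 50
```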

Testing Patterns

1. Test Rendering

it('should render without crashing', () => {
  const { container } = renderWithProviders(<MyComponent />)
  expect(container).to.exist
})
2. Test Props

it('should accept all valid props', () => {
  const props = {
    title: 'Test',
    value: 123,
    onChange: () => {},
  }
  const { container } = renderWithProviders(<MyComponent {...props} />)
  expect(container).to.exist
})
3. Test Interactions

import { userEvent } from '../../utils/spec/testUtils'

it('should handle click events', async () => {
  const { container } = renderWithProviders(<MyComponent />)
  const button = container.querySelector('button')
  
  if (button) {
    await userEvent.click(button)
    // Assert expected behavior
  }
})
4. Test Edge Cases

it('should handle empty data', () => {
  const { container } = renderWithProviders(
    <MyComponent data={[]} />
  )
  expect(container).to.exist
})

it('should handle negative values', () => {
  const data = [{ x: 1, y: -10 }]
  const { container } = renderWithProviders(
    <MyComponent data={data} />
  )
  expect(container.querySelector('svg')).to.exist
})

Example: Complete Test Suite

See app/src/components/Chart/Chart.spec.tsx for a comprehensive example demonstrating:
  • Multiple test groups (Rendering, Data Visualization, Edge Cases)
  • Testing with different props and configurations
  • Testing SVG elements
  • Testing theme integration
  • Performance testing

Backend Tests (backend/)

Location: backend/src/spec/*.spec.ts
import { expect } from 'chai'
import { describe, it } from 'mocha'
import { Tree, TreeNode } from '../Model/Tree'

describe('Tree Model', () => {
  it('should insert message into tree', () => {
    const tree = new Tree()
    tree.updateWithMessage('home/livingroom/temp', '23.5')
    
    const node = tree.findNode('home/livingroom/temp')
    expect(node).to.exist
    expect(node.message.value).to.equal('23.5')
  })
})
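
Conceptually, the Tree model behaves like a nested map keyed by topic segments. A simplified, self-contained sketch of the insert/lookup behavior the test above exercises (the real Tree/TreeNode classes in backend/src/Model/Tree track messages, history, and more):

```typescript
// Simplified sketch of topic-tree insertion and lookup.
class SketchNode {
  children = new Map<string, SketchNode>()
  value?: string
}

class SketchTree {
  root = new SketchNode()

  // Walk the topic segments, creating intermediate nodes as needed,
  // then store the payload on the leaf.
  updateWithMessage(topic: string, value: string): void {
    let node = this.root
    for (const segment of topic.split('/')) {
      let child = node.children.get(segment)
      if (!child) {
        child = new SketchNode()
        node.children.set(segment, child)
      }
      node = child
    }
    node.value = value
  }

  // Follow the segments; returns undefined if any segment is missing.
  findNode(topic: string): SketchNode | undefined {
    let node: SketchNode | undefined = this.root
    for (const segment of topic.split('/')) {
      node = node?.children.get(segment)
    }
    return node
  }
}

const tree = new SketchTree()
tree.updateWithMessage('home/livingroom/temp', '23.5')
console.log(tree.findNode('home/livingroom/temp')?.value) // '23.5'
```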

Running Unit Tests

yarn test

UI Automation Tests

Location: src/spec/ui-tests.spec.ts

Overview

UI tests validate core functionality through automated browser tests using Playwright. Each test is independent and deterministic.
UI tests require a running MQTT broker. The helper script handles setup automatically.

Running UI Tests

./scripts/runUiTests.sh

Test Coverage

UI tests validate:
  • ✅ Connection wizard flow
  • ✅ Topic tree rendering
  • ✅ Message publishing
  • ✅ Search functionality
  • ✅ Chart visualization
  • ✅ Settings persistence
  • ✅ Keyboard shortcuts

Demo Video Generation

Generate documentation videos showcasing features:
yarn build
yarn test:demo-video
Requirements:
  • mosquitto MQTT broker
  • Xvfb (virtual framebuffer)
  • tmux (terminal multiplexer)
  • ffmpeg (video encoding)
Mobile demo uses Pixel 6 viewport (412x915px) to showcase responsive design. See MOBILE_COMPATIBILITY.md for mobile strategy.

Writing UI Tests

import { test, expect } from '@playwright/test'

test('should connect to broker', async ({ page }) => {
  await page.goto('http://localhost:3000')
  
  // Fill connection form
  await page.fill('[name="host"]', 'localhost')
  await page.fill('[name="port"]', '1883')
  await page.click('button[type="submit"]')
  
  // Verify connection
  await expect(page.locator('.connection-status'))
    .toHaveText('Connected')
})

LLM Testing

Location: app/src/services/spec/

Test Strategy

The AI Assistant feature includes comprehensive tests to validate proposal quality and LLM integration.

Offline Tests

Default mode - no API key needed. Validates structure and parsing logic.

Live Tests

Opt-in with API key. Tests real LLM responses and proposal quality.

Test Files

  • llmService.spec.ts: unit tests for service methods
  • llmProposals.spec.ts: proposal validation (structure, format)
  • llmIntegration.spec.ts: live LLM integration tests (opt-in)

Running LLM Tests

1. Offline Tests (Default)

yarn test:app
No API key needed. Fast execution.
2. Set API Key

export OPENAI_API_KEY=sk-your-key-here
# Or use Gemini
export GEMINI_API_KEY=your-key-here
3. Opt-in to Live Tests

export RUN_LLM_TESTS=true
4. Run Tests

yarn test:app
Runs both offline and live tests.
Use the helper script for convenience:
OPENAI_API_KEY=sk-your-key ./scripts/run-llm-tests.sh
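
The opt-in gate can be pictured as a small predicate over the environment. This is a hypothetical sketch; the actual check lives in the spec files and may differ in detail:

```typescript
// Hypothetical sketch: live LLM tests run only when explicitly opted in
// AND at least one API key is present; otherwise they skip.
function shouldRunLiveTests(env: Record<string, string | undefined>): boolean {
  const optedIn = env.RUN_LLM_TESTS === 'true'
  const hasKey = Boolean(env.OPENAI_API_KEY || env.GEMINI_API_KEY || env.LLM_API_KEY)
  return optedIn && hasKey
}

console.log(shouldRunLiveTests({ RUN_LLM_TESTS: 'true', OPENAI_API_KEY: 'sk-test' })) // true
console.log(shouldRunLiveTests({ RUN_LLM_TESTS: 'true' })) // false (no key)
console.log(shouldRunLiveTests({ OPENAI_API_KEY: 'sk-test' })) // false (not opted in)
```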

Validation Criteria

Topic Validation

✅ Non-empty string
✅ No wildcards (+ or #)
✅ Valid segments (no empty segments)
✅ Matches system patterns (zigbee2mqtt, homeassistant, etc.)

Payload Validation

✅ Valid JSON (if JSON format)
✅ Appropriate for target system
✅ Reasonable size (< 10KB)
✅ No security risks

QoS Validation

✅ Must be 0, 1, or 2
✅ Typically 0 for home automation

Description Validation

✅ Non-empty
✅ Actionable (uses imperative verbs)
✅ Clear and concise
✅ Under 100 characters
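
The criteria above translate naturally into plain validation functions. The sketch below is illustrative (function names and exact thresholds are assumptions; the real checks live in the LLM spec files):

```typescript
// Illustrative validators for the criteria listed above.

// Non-empty, no wildcards (+ or #), no empty segments.
function isValidTopic(topic: string): boolean {
  if (topic.length === 0) return false
  if (topic.includes('+') || topic.includes('#')) return false
  return topic.split('/').every(segment => segment.length > 0)
}

// Reasonable size (< 10KB); must parse if declared as JSON.
function isValidPayload(payload: string, isJson: boolean): boolean {
  if (payload.length >= 10 * 1024) return false
  if (isJson) {
    try {
      JSON.parse(payload)
    } catch {
      return false
    }
  }
  return true
}

// QoS must be exactly 0, 1, or 2.
function isValidQos(qos: number): qos is 0 | 1 | 2 {
  return qos === 0 || qos === 1 || qos === 2
}

// Non-empty and under 100 characters.
function isValidDescription(description: string): boolean {
  return description.length > 0 && description.length < 100
}

console.log(isValidTopic('zigbee2mqtt/lamp/set')) // true
console.log(isValidTopic('home/+/temp')) // false
console.log(isValidQos(3)) // false
```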

Home Automation System Patterns

Topic: zigbee2mqtt/device_name/set
Payload: {"state": "ON"}
Actions: state, brightness, color

Environment Variables

# Required for live tests (at least one)
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=AIza...
LLM_API_KEY=...  # Generic fallback

# Opt-in flag
RUN_LLM_TESTS=true

# Optional configuration
LLM_PROVIDER=openai  # or 'gemini'
LLM_NEIGHBORING_TOPICS_TOKEN_LIMIT=500
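
One plausible way these variables could resolve to a provider, shown purely as an illustrative guess (the real llmService may use different precedence):

```typescript
// Illustrative only: LLM_PROVIDER wins when set explicitly; otherwise fall
// back to whichever API key is present, preferring OpenAI.
type Provider = 'openai' | 'gemini'

function resolveProvider(env: Record<string, string | undefined>): Provider {
  if (env.LLM_PROVIDER === 'gemini' || env.LLM_PROVIDER === 'openai') {
    return env.LLM_PROVIDER
  }
  if (env.OPENAI_API_KEY) return 'openai'
  if (env.GEMINI_API_KEY) return 'gemini'
  return 'openai' // default
}

console.log(resolveProvider({ LLM_PROVIDER: 'gemini' })) // 'gemini'
console.log(resolveProvider({ OPENAI_API_KEY: 'sk-x' })) // 'openai'
```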

Example Test Output

For detailed test results and examples, see docs/LLM_TEST_RESULTS.md.

Integration Tests

MCP Introspection Tests

Validate Model Context Protocol integration:
yarn build
yarn test:mcp
Location: src/spec/testMcpIntrospection.js

Tests:
  • ✅ MCP server discovery
  • ✅ Tool introspection
  • ✅ Resource listing
  • ✅ Prompt validation

Test Best Practices

1. Test Behavior, Not Implementation

it('should render error message when validation fails', () => {
  const { container } = renderWithProviders(
    <MyComponent value="invalid" />
  )
  expect(container.textContent).to.include('Invalid input')
})

2. Use Descriptive Test Names

// Good ✅
it('should display connection error when broker is unreachable')

// Bad ❌
it('test1')

3. Group Related Tests
describe('MyComponent', () => {
  describe('Rendering', () => {
    it('should render correctly')
    it('should render with props')
  })
  
  describe('Interactions', () => {
    it('should handle clicks')
    it('should handle keyboard input')
  })
  
  describe('Edge Cases', () => {
    it('should handle empty data')
    it('should handle null values')
  })
})

4. Keep Tests Independent

  • Each test should run in isolation
  • Don’t rely on test execution order
  • Clean up after each test if needed

5. Test Edge Cases

  • Empty data
  • Null/undefined values
  • Very large/small numbers
  • Negative values
  • Single item arrays
  • Special characters

6. Use Chai Assertions

expect(value).to.exist
expect(value).to.be.true
expect(value).to.equal(expected)
expect(array).to.have.length(5)
expect(number).to.be.greaterThan(0)
expect(string).to.include('substring')

CI/CD Integration

In continuous integration pipelines:
1. Run Offline Tests (Default)

- name: Run Tests
  run: yarn test
No API key needed. Fast execution.
2. Optional Live Tests (Nightly)

- name: Run LLM Integration Tests
  env:
    RUN_LLM_TESTS: true
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
  run: yarn test
Best practices for CI/CD:
  • Run offline tests by default (no API key needed)
  • Optionally run live tests on schedule (e.g., nightly)
  • Use secrets management for API keys
  • Monitor API costs
See CI_CD.md for complete CI/CD documentation.

Coverage Reporting

# Generate coverage report
cd app && npx nyc yarn test

# View HTML report
open coverage/index.html

Troubleshooting

“ResizeObserver is not defined”

This is automatically mocked by test utilities. Ensure you’re importing from testUtils.tsx:
import { renderWithProviders } from '../../utils/spec/testUtils'

“Window is not defined”

Make sure jsdom-global/register is imported in testUtils.tsx.

Tests Timing Out

Increase timeout for async operations:
it('should complete async operation', function() {
  this.timeout(5000) // 5 seconds
  // test code
})

LLM Tests Skip Automatically

  • Check that RUN_LLM_TESTS=true is set
  • Verify API key is in environment
  • Check console output for skip messages

API Rate Limits

  • Add delays between tests
  • Use smaller test dataset
  • Run tests less frequently

Test Performance

Typical execution times:
  • Unit tests (app): ~5-10 seconds
  • Unit tests (backend): ~2-3 seconds
  • UI tests: ~30-60 seconds
  • LLM tests (offline): ~2 seconds
  • LLM tests (live): ~30-60 seconds
Run unit tests frequently during development. Run UI tests before commits. Run LLM live tests before releases.

Next Steps

Styling

Learn Material-UI styling conventions

Architecture

Understand the codebase structure
