QLC+ uses the Qt Test framework for unit testing. All code changes must pass the test suite before being merged.

Running Tests

Quick Start

Before submitting any pull request, you must run make check and ensure all tests pass.
# From the build directory
make check
This runs the complete test suite including:
  • Engine unit tests
  • UI unit tests (if X server available)
  • Plugin tests
  • Fixture definition validation

Manual Test Execution

The unittest.sh script orchestrates all tests (unittest.sh:1):
# Run tests for QtWidgets UI
./unittest.sh ui build/

# Run tests for QML UI
./unittest.sh qmlui build/

CMake Test Targets

make check        # Run complete test suite
make unittests    # Run unit tests only (no fixture validation)
The CMake configuration (CMakeLists.txt:92-116) defines these targets:
if(qmlui)
    add_custom_target(unittests
        COMMAND ./unittest.sh "qmlui" ${CMAKE_CURRENT_BINARY_DIR}
        WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
    )
else()
    add_custom_target(unittests
        COMMAND ./unittest.sh "ui" ${CMAKE_CURRENT_BINARY_DIR}
        WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
    )
endif()

add_custom_target(check
    DEPENDS unittests
)

Test Suite Structure

Test Organization

Tests are organized by component:
engine/test/
├── bus/                    # Bus tests
├── channelmodifier/        # Channel modifier tests
├── channelsgroup/          # Channel groups tests
├── chaser/                 # Chaser function tests
├── chaserrunner/           # Chaser runner tests
├── collection/             # Collection function tests
├── doc/                    # Document tests
├── efx/                    # EFX function tests
├── fixture/                # Fixture tests
├── function/               # Base function tests
├── rgbmatrix/              # RGB matrix tests
└── scene/                  # Scene function tests

ui/test/
├── virtualconsole/        # Virtual console tests
├── simpledesk/            # Simple desk tests
└── ...

plugins/                   # Plugin-specific tests
├── artnet/test/
├── midi/test/
├── enttecwing/test/
└── velleman/test/

Test Execution Flow

The platforms/linux/unittest.sh script (platforms/linux/unittest.sh:1) coordinates test execution:
1. Fixture Validation

# Validate all .qxf files with xmllint
pushd resources/fixtures/scripts
./check
Ensures all fixture definitions are valid XML.
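At its core this is an xmllint pass over each .qxf file. The helper below is a hypothetical one-liner illustrating the idea; the actual check script lives in resources/fixtures/scripts and may additionally validate against the schemas in resources/schemas.

```shell
# Hypothetical helper: xmllint exits non-zero for malformed XML,
# so the function's exit status reports whether the file is well-formed.
validate_fixture() {
    xmllint --noout "$1"
}
```

Usage: validate_fixture path/to/fixture.qxf, checking the exit status.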
2. Engine Tests

# Find and run all engine tests
TESTDIR=engine/test
TESTS=$(find ${TESTDIR} -maxdepth 1 -mindepth 1 -type d)
for test in ${TESTS}; do
    pushd ${test}   # find already prefixed each entry with ${TESTDIR}
    ./test.sh
    popd
done
Each test directory contains a test.sh script.
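The loop above can be sketched as a reusable function. This is an illustration, not the script's actual code: run_engine_tests is hypothetical, uses subshells instead of pushd/popd so it also works in plain sh, and counts failures so one failing suite does not mask the rest.

```shell
# Illustrative sketch: run every test.sh one level below a test directory.
run_engine_tests() {
    testdir="$1"
    failures=0
    for dir in $(find "$testdir" -maxdepth 1 -mindepth 1 -type d | sort); do
        if [ -x "$dir/test.sh" ]; then
            # The subshell keeps the cd local, like pushd/popd does.
            ( cd "$dir" && ./test.sh ) || failures=$((failures + 1))
        fi
    done
    return $failures    # non-zero if any suite failed
}
```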
3. UI Tests

# Run UI tests if X server available
if [ "$RUN_UI_TESTS" -eq "1" ]; then
    TESTDIR=ui/test
    # Run each UI test
fi
UI tests require a running X server or xvfb-run.
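When a machine has no display, the launch can be wrapped to fall back to xvfb-run. launch_test below is a hypothetical helper (not part of the repository) sketching the fallback order:

```shell
# Hypothetical wrapper: run a UI test directly when a display exists,
# fall back to xvfb-run, otherwise skip.
launch_test() {
    if [ -n "$DISPLAY" ]; then
        "$@"
    elif command -v xvfb-run >/dev/null 2>&1; then
        xvfb-run --auto-servernum "$@"
    else
        echo "SKIP: no X server and no xvfb-run" >&2
        return 77   # conventional "skipped" exit code
    fi
}
```

For example, launch_test ./virtualconsole_test runs the binary under whichever display is available.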
4. Plugin Tests

# Test specific plugins
pushd plugins/enttecwing/test
./test.sh
popd

pushd plugins/midi/test
./test.sh
popd

pushd plugins/artnet/test
./test.sh
popd

Continuous Integration

Headless Testing

On CI servers, tests run with xvfb-run (platforms/linux/unittest.sh:16-29):
if [ "$CURRUSER" == "runner" ] \
    || [ "$CURRUSER" == "buildbot" ] \
    || [ "$CURRUSER" == "abuild" ]; then
    
    TESTPREFIX="QT_QPA_PLATFORM=minimal xvfb-run --auto-servernum"
    RUN_UI_TESTS="1"
    SLEEPCMD="sleep 1"
fi
This enables running UI tests without a physical display.

GitHub Actions

The project uses GitHub Actions for CI (README.md:23-24):
# .github/workflows/build.yml
- Build and test on Linux, Windows, macOS
- Run complete test suite
- Generate coverage reports

Writing Tests

Test Structure

QLC+ uses the Qt Test framework. Example test structure:
#include <QtTest>
#include "doc.h"
#include "fixture.h"

class FixtureTest : public QObject
{
    Q_OBJECT

private slots:
    void initTestCase();
    void cleanupTestCase();
    void init();
    void cleanup();
    
    // Test cases
    void testInitial();
    void testID();
    void testName();
    void testUniverse();
    void testAddress();
    void testChannels();
    
private:
    Doc* m_doc;
};

void FixtureTest::initTestCase()
{
    m_doc = new Doc(this);
}

void FixtureTest::cleanupTestCase()
{
    delete m_doc;
}

void FixtureTest::testID()
{
    Fixture* fxi = new Fixture(m_doc);
    fxi->setID(42);
    QCOMPARE(fxi->id(), quint32(42));
    delete fxi;
}

QTEST_MAIN(FixtureTest)
#include "fixture_test.moc"

Test Script

Each test directory needs a test.sh script:
#!/bin/bash

# Set library path
export LD_LIBRARY_PATH=../../src:../../../engine/src

# Run the test
./mytest_test

# Return test result
exit $?

CMakeLists.txt for Tests

Example test CMake configuration:
set(TEST_NAME mytest_test)

add_executable(${TEST_NAME}
    mytest_test.cpp
    ${TEST_NAME}.h
)

target_link_libraries(${TEST_NAME}
    Qt${QT_VERSION_MAJOR}::Test
    qlcplusengine
)

add_test(NAME ${TEST_NAME} COMMAND ${TEST_NAME})

Test Requirements

Resource Setup

Tests need access to resources (unittest.sh:12-23):
# Copy resources for testing
cp -r $SOURCE_DIR/resources/colorfilters $DEST_DIR/resources
cp -r $SOURCE_DIR/resources/fixtures $DEST_DIR/resources
cp -r $SOURCE_DIR/resources/gobos $DEST_DIR/resources
cp -r $SOURCE_DIR/resources/icons $DEST_DIR/resources
cp -r $SOURCE_DIR/resources/inputprofiles $DEST_DIR/resources
cp -r $SOURCE_DIR/resources/rgbscripts $DEST_DIR/resources
cp -r $SOURCE_DIR/resources/schemas $DEST_DIR/resources
The test setup script handles this automatically.
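The copy step can be sketched as a small helper. copy_resources is hypothetical and simply mirrors the directory list above:

```shell
# Hypothetical helper mirroring the resource-copy step of unittest.sh:
# replicate each resource directory from the source tree into the build tree.
copy_resources() {
    src="$1"
    dest="$2"
    mkdir -p "$dest/resources"
    for dir in colorfilters fixtures gobos icons inputprofiles rgbscripts schemas; do
        if [ -d "$src/resources/$dir" ]; then
            cp -r "$src/resources/$dir" "$dest/resources/"
        fi
    done
}
```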

Test Data Files

Many tests use XML files for test data (unittest.sh:26-36):
# Find and copy test XML files
for file in $(find $SOURCE_DIR -name "*.xml*"); do
    dir=$(dirname ${file#./})
    mkdir -p $DEST_DIR/$dir
    cp $file $DEST_DIR/$dir/
done

Coverage Reports

Generating Coverage

Coverage reporting is available on Linux and macOS only.
make coverage
The coverage.sh script:
  1. Rebuilds with coverage flags
  2. Runs all tests
  3. Generates lcov HTML report
  4. Opens report in browser
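The capture and HTML steps correspond to a standard lcov flow. The sketch below uses stock lcov/genhtml flags and may differ from what coverage.sh actually runs; it requires a build compiled with coverage instrumentation (e.g. --coverage).

```shell
# Sketch of a typical lcov flow (standard lcov/genhtml usage,
# not copied from coverage.sh).
generate_coverage() {
    lcov --capture --directory . --output-file coverage.info
    lcov --remove coverage.info '/usr/*' --output-file coverage.info   # drop system headers
    genhtml coverage.info --output-directory coverage-html
}
```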

Viewing Coverage

Coverage badge on GitHub (README.md:25-26):
[![Coverage](https://coveralls.io/repos/github/mcallegari/qlcplus/badge.svg?branch=master)](https://coveralls.io/github/mcallegari/qlcplus?branch=master)

Best Practices

Test Coverage

Do test:
  • Public API methods
  • Edge cases and boundary conditions
  • Error handling
  • State transitions
  • XML loading/saving

Don't test:
  • Private implementation details
  • Qt framework functionality
  • Third-party libraries
  • Platform-specific code (use mocks instead)

Writing Good Tests

1. One concept per test

Each test method should verify one specific behavior:
void testFixtureID();      // Only tests ID getter/setter
void testFixtureName();    // Only tests name getter/setter
2. Use descriptive names

// Good
void testAddFixtureWithInvalidId();
void testRemoveFixtureEmitsSignal();

// Bad
void test1();
void testFixture();
3. Arrange-Act-Assert pattern

void testAddFixture()
{
    // Arrange
    Fixture* fxi = new Fixture(m_doc);
    
    // Act
    bool result = m_doc->addFixture(fxi);
    
    // Assert
    QCOMPARE(result, true);
    QCOMPARE(m_doc->fixturesCount(), 1);
}
4. Clean up resources

Use cleanup() or smart pointers:
void cleanup()
{
    m_doc->clearContents();
}

Qt Test Macros

QCOMPARE(actual, expected);    // Test equality
QVERIFY(condition);            // Test boolean condition
QVERIFY2(condition, message);  // With custom failure message
QSKIP(message);                // Skip the remainder of the test
QFAIL(message);                // Fail unconditionally

Debugging Failed Tests

Running Individual Tests

Run a specific test directly:
cd build/engine/test/fixture
export LD_LIBRARY_PATH=../../src
./fixture_test

Verbose Output

# Qt Test verbosity levels
./fixture_test -silent   # Silent (only fatal errors and the final summary)
./fixture_test -v1       # Verbose (log entering each test function)
./fixture_test -v2       # Extended verbose (also log each QCOMPARE/QVERIFY)
./fixture_test -vs       # Log every emitted signal

Running Specific Test Functions

# Run only one test function
./fixture_test testID

# Run multiple functions
./fixture_test testID testName

# List all available test functions
./fixture_test -functions

Debug with GDB

export LD_LIBRARY_PATH=../../src
gdb ./fixture_test
(gdb) run
(gdb) bt    # print a backtrace after a crash

Platform-Specific Considerations

Linux

  • Tests require X server or xvfb-run
  • Use QT_QPA_PLATFORM=minimal for headless
  • Install xvfb: sudo apt-get install xvfb

macOS

macOS needs no special prefix; tests run directly in a logged-in GUI session (platforms/linux/unittest.sh:31-33):
elif [[ "$OSTYPE" == "darwin"* ]]; then
    echo "We're on OSX. Any prefix needed?"
fi

Windows

Use unittest.bat instead of unittest.sh (CMakeLists.txt:105-109):
if(WIN32)
   add_custom_target(unittests
        COMMAND unittest.bat "ui" ${CMAKE_CURRENT_BINARY_DIR}
        WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
    )
endif()

Troubleshooting

Missing Resources

If tests fail with “file not found”:
# From source root
./unittest.sh ui build/
This ensures resources are copied correctly.
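To see at a glance which directories are absent, a quick check helps. check_resources is a hypothetical helper, not part of the repository:

```shell
# Hypothetical helper: report resource directories missing from a build tree.
check_resources() {
    base="$1"
    missing=0
    for dir in fixtures inputprofiles rgbscripts; do
        if [ ! -d "$base/resources/$dir" ]; then
            echo "missing: resources/$dir"
            missing=1
        fi
    done
    return $missing
}
```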

Library Loading Errors

Set LD_LIBRARY_PATH correctly:
export LD_LIBRARY_PATH=path/to/build/engine/src:path/to/build/ui/src

UI Tests Skipped

UI tests are skipped if no X server detected (platforms/linux/unittest.sh:38-45):
# Check for X server
XPID=$(pidof X)
if [ ${#XPID} -gt 0 ]; then
    RUN_UI_TESTS="1"
fi
Use xvfb-run for headless testing.

Test Timeout

Increase test timeout in CMakeLists.txt:
set_tests_properties(mytest_test PROPERTIES TIMEOUT 300)

Integration with Development

Pre-Commit Testing

Always run make check before committing changes.
Consider using a pre-commit hook:
#!/bin/bash
# .git/hooks/pre-commit

cd build
make check
if [ $? -ne 0 ]; then
    echo "Tests failed. Commit aborted."
    exit 1
fi

CI Integration

GitHub Actions automatically runs tests on:
  • Every push
  • Every pull request
  • Multiple platforms simultaneously

Contributing Test Code

When to Add Tests

Add tests for all new functionality:
  • Public API methods
  • State changes
  • Error conditions

When fixing a bug, add a test that:
  1. Reproduces the bug
  2. Fails before the fix
  3. Passes after the fix

When refactoring, ensure existing tests still pass and add tests for new code paths.

Test Review Criteria

Your tests should:
  • ✓ Be independent (no dependencies between tests)
  • ✓ Be repeatable (same result every time)
  • ✓ Be fast (avoid unnecessary delays)
  • ✓ Test behavior, not implementation
  • ✓ Have clear, descriptive names
  • ✓ Clean up resources properly
