Overview

Testing is a critical part of contributing to Intent Architect Modules. All contributions must include appropriate test coverage to ensure quality, prevent regressions, and validate functionality across different scenarios.

Testing Philosophy

Test Coverage

All new features and bug fixes must include corresponding tests that validate the expected behavior.

Integration Tests

Test modules in realistic scenarios using the Intent Architect test solution to ensure proper integration.

Automated Validation

Tests run automatically in the CI/CD pipeline to catch issues before they reach production.

Test Maintenance

Keep tests up-to-date as code evolves. Failing tests must be fixed or updated, not disabled.

Test Structure

The repository contains two primary testing locations:

Tests Directory Structure

Tests/
├── Intent.Modules.Tests.isln         # Intent Architect test solution
├── Intent.Modules.Tests.sln          # .NET test solution
├── Accelerators/                     # Full application test projects
│   ├── Accelerators.Api/
│   ├── Accelerators.Application/
│   ├── Accelerators.Domain/
│   ├── Accelerators.Infrastructure/
│   └── Accelerators.sln
└── ModuleBuilders/                   # Module builder test projects
    └── ModuleBuilders.csproj

Test Project Types

Unit tests within individual module projects test specific functionality in isolation.
// Example: Module-level unit test
[TestFixture]
public class TemplateBuilderTests
{
    [Test]
    public void GenerateTemplate_WithValidModel_ReturnsExpectedOutput()
    {
        // Arrange
        var builder = new TemplateBuilder();
        var model = CreateTestModel();
        
        // Act
        var result = builder.GenerateTemplate(model);
        
        // Assert
        Assert.That(result, Is.Not.Null);
        Assert.That(result.Content, Contains.Substring("expected"));
    }
}

Writing Tests

Test Cases in Intent Architect Solution

The primary testing approach uses the Intent Architect test solution:
1. Open the test solution

Open the Intent Architect test solution:
Tests/Intent.Modules.Tests.isln
This solution contains test applications that reference your modules.
2. Add test scenarios

Create or update test applications to exercise your module’s functionality:
  • Add models that trigger your templates
  • Configure module settings to test different scenarios
  • Add stereotypes and metadata to validate behaviors
3. Run Software Factory

Execute the Software Factory to generate code:
intent-cli ensure-no-outstanding-changes -- "$INTENT_USER" "$INTENT_PASS" "Tests/Intent.Modules.Tests.isln"
This validates that:
  • Your module generates expected code
  • No outstanding changes exist (generated code matches expectations)
  • Module integrates properly with dependencies
4. Verify generated output

Check that generated code:
  • Compiles successfully
  • Follows expected patterns
  • Integrates with other generated code
  • Meets functional requirements

C# Unit Tests

For module-specific logic, write C# unit tests:
using System;
using Intent.Modules.Common.CSharp;
using NUnit.Framework;

namespace Intent.Modules.Common.Tests
{
    [TestFixture]
    public class CSharpTypeResolverTests
    {
        private CSharpTypeResolver _resolver;
        
        [SetUp]
        public void Setup()
        {
            _resolver = new CSharpTypeResolver();
        }
        
        [Test]
        [TestCase("string", "System.String")]
        [TestCase("int", "System.Int32")]
        [TestCase("bool", "System.Boolean")]
        public void ResolveType_BuiltInTypes_ReturnsCorrectSystemType(
            string input, 
            string expected)
        {
            // Act
            var result = _resolver.ResolveType(input);
            
            // Assert
            Assert.That(result, Is.EqualTo(expected));
        }
        
        [Test]
        public void ResolveType_NullInput_ThrowsArgumentNullException()
        {
            // Act & Assert
            Assert.Throws<ArgumentNullException>(
                () => _resolver.ResolveType(null));
        }
    }
}

Test Naming Conventions

Follow consistent naming patterns:
// Pattern: MethodName_Scenario_ExpectedBehavior

[Test]
public void GetTemplate_WhenModelIsNull_ThrowsArgumentNullException() { }

[Test]
public void GenerateCode_WithValidModel_ReturnsFormattedCSharpCode() { }

[Test]
public void ApplyStereotype_WhenStereotypeExists_UpdatesModelProperties() { }

Running Tests

Running All Tests Locally

1. Run pre-commit checks

Execute the comprehensive pre-commit validation:
./run-pre-commit-checks.ps1 `
  -ModulesIsln "Modules/Intent.Modules.isln" `
  -TestsIsln "Tests/Intent.Modules.Tests.isln"
This runs:
  1. Pre-build validations
  2. Software Factory change detection
  3. Module builds
  4. Test solution validation
  5. All unit and integration tests
2. Run .NET tests

Execute all C# unit tests:
dotnet test
Or with detailed output:
dotnet test --logger "console;verbosity=detailed"
3. Run specific test projects

Test individual projects:
dotnet test Tests/Accelerators/Accelerators.sln
dotnet test Tests/ModuleBuilders/ModuleBuilders.csproj

Pipeline Test Execution

The Azure DevOps pipeline (azure-pipelines.yml) automatically runs tests:
# Tests run with these settings:
- task: DotNetCoreCLI@2
  displayName: 'dotnet test'
  inputs:
    command: 'test'
    publishTestResults: false
    projects: $(targetsToBuild)
    arguments: >
      --results-directory $(Agent.TempDirectory)/TestResults
      --logger trx
      --blame-crash
      --blame-hang
      --blame-hang-timeout:10m
      --no-build
      --filter Requirement!=CosmosDB
Tests tagged with Requirement=CosmosDB are excluded from CI runs as they require external infrastructure.
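With NUnit, a trait such as `Requirement=CosmosDB` is typically applied using the `Property` attribute, which the test adapter exposes to the `--filter` expression above. The test below is an illustrative sketch; the fixture and test names are hypothetical:

```csharp
using NUnit.Framework;

[TestFixture]
public class CosmosDbRepositoryTests
{
    [Test]
    [Property("Requirement", "CosmosDB")] // excluded in CI by --filter Requirement!=CosmosDB
    public void SaveEntity_WithValidEntity_PersistsToContainer()
    {
        // This test needs a running Cosmos DB emulator or instance,
        // so it only runs locally or in environments that provide one.
    }
}
```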

Test Result Publishing

Test results are automatically published:
  • Console output during CI/CD execution
  • Test result files (.trx) for detailed analysis
  • Pipeline artifacts on failure for debugging

Test Coverage Requirements

What to Test

Verify templates generate correct code:
  • Output matches expected format
  • Code compiles successfully
  • Proper handling of edge cases
  • Template dependencies resolve correctly
Test module interactions:
  • Dependencies load correctly
  • Modules don’t conflict
  • Shared resources accessed properly
  • Stereotypes and metadata propagate
Validate core functionality:
  • Algorithms produce correct results
  • Validation rules work as expected
  • Error conditions handled properly
  • Edge cases covered
Test module settings:
  • Default values applied
  • Custom configurations respected
  • Invalid settings rejected
  • Settings persist correctly
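A settings test might look like the following sketch. `ModuleSettings`, `GenerateInterfaces`, and `SetNamespaceDepth` are hypothetical names for illustration; substitute the actual settings types of the module under test:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class ModuleSettingsTests
{
    [Test]
    public void Settings_WithoutConfiguration_AppliesDefaults()
    {
        // ModuleSettings is a hypothetical settings class
        var settings = new ModuleSettings();

        Assert.That(settings.GenerateInterfaces, Is.True,
            "Interfaces should be generated by default");
    }

    [Test]
    public void Settings_WithInvalidValue_ThrowsArgumentException()
    {
        var settings = new ModuleSettings();

        // Invalid configuration values should be rejected explicitly
        Assert.Throws<ArgumentException>(
            () => settings.SetNamespaceDepth(-1));
    }
}
```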

Coverage Guidelines

  • New Features: 80%+ code coverage required
  • Bug Fixes: Add test that reproduces the bug, then fix
  • Critical Paths: 100% coverage for critical functionality
  • Public APIs: All public methods should have tests
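For a bug fix, write the failing test first so it documents the defect and guards against regression. The scenario below is hypothetical; adapt the names to the actual bug being fixed:

```csharp
using NUnit.Framework;

[TestFixture]
public class TypeResolverRegressionTests
{
    [Test]
    // Regression test for a (hypothetical) bug where nullable
    // primitives were emitted without the '?' suffix.
    public void ResolveType_NullableInt_IncludesNullabilitySuffix()
    {
        var resolver = new CSharpTypeResolver();

        var result = resolver.ResolveType("int?");

        Assert.That(result, Is.EqualTo("System.Int32?"));
    }
}
```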

Testing Best Practices

AAA Pattern

Structure tests using Arrange-Act-Assert:
[Test]
public void ProcessTemplate_ValidInput_GeneratesExpectedOutput()
{
    // Arrange - Set up test data and dependencies
    var template = CreateTestTemplate();
    var processor = new TemplateProcessor();
    
    // Act - Execute the operation being tested
    var result = processor.ProcessTemplate(template);
    
    // Assert - Verify the results
    Assert.That(result, Is.Not.Null);
    Assert.That(result.Success, Is.True);
    Assert.That(result.Output, Contains.Substring("expected"));
}

Test Isolation

// Each test should be independent
[TestFixture]
public class TemplateTests
{
    private TemplateBuilder _builder;
    
    [SetUp]
    public void Setup()
    {
        // Create fresh instances for each test
        _builder = new TemplateBuilder();
    }
    
    [TearDown]
    public void TearDown()
    {
        // Clean up resources
        _builder?.Dispose();
    }
}

Descriptive Assertions

// Good: Clear assertion with message
Assert.That(result.Status, Is.EqualTo(Status.Success),
    $"Expected status Success but got {result.Status}");

// Better: Multiple specific assertions
Assert.Multiple(() =>
{
    Assert.That(result.Status, Is.EqualTo(Status.Success));
    Assert.That(result.Errors, Is.Empty);
    Assert.That(result.Output, Is.Not.Null);
});

Test Data Builders

// Create reusable test data builders
public class TestModelBuilder
{
    private string _name = "DefaultName";
    private string _type = "DefaultType";
    
    public TestModelBuilder WithName(string name)
    {
        _name = name;
        return this;
    }
    
    public TestModelBuilder WithType(string type)
    {
        _type = type;
        return this;
    }
    
    public IClassModel Build()
    {
        return new ClassModel(_name, _type);
    }
}

// Usage
var model = new TestModelBuilder()
    .WithName("TestClass")
    .WithType("Entity")
    .Build();

Debugging Test Failures

Local Debugging

1. Run tests in debug mode

In Visual Studio or Rider:
  • Set breakpoints in test or production code
  • Right-click test → Debug
  • Step through execution
2. Examine test output

Review detailed test output:
dotnet test --logger "console;verbosity=detailed"
3. Check generated files

For Intent Architect tests, examine generated files in test applications to see actual vs. expected output.

CI/CD Debugging

When tests fail in the pipeline:
  1. Review pipeline logs: Check the Azure DevOps pipeline output
  2. Download artifacts: Failed test runs publish artifacts with detailed logs
  3. Reproduce locally: Run the same test command locally
  4. Check for environment differences: Verify .NET versions, dependencies, etc.
# Reproduce pipeline test execution locally
dotnet test --results-directory ./TestResults \
  --logger trx \
  --blame-crash \
  --no-build \
  --filter Requirement!=CosmosDB

Continuous Integration

All tests run automatically on:
  • Pull Requests: Tests must pass before merging
  • Master Branch: Validates production-ready code
  • Development Branches: Early feedback on changes

Build Status

Monitor build status:
  • Build badge in README shows current status
  • Azure DevOps pipeline provides detailed results
  • Failed builds prevent module publishing

Next Steps

With testing knowledge:
  1. Write tests for your changes
  2. Ensure all tests pass locally
  3. Review the Pull Request Process to submit your contribution
Add tests first (TDD approach) to clarify requirements and ensure your implementation meets expectations.
Never skip tests or disable failing tests. If a test fails, either fix the code or update the test to reflect correct behavior.