
Overview

The Intent.AI.UnitTests module uses AI to automatically implement unit tests for the Handler method of a Command or Query, based on full context from generated code and model metadata provided by Intent Architect. Unlike manual test writing, this module analyzes your handler implementation, dependencies, and domain logic to generate comprehensive test coverage including happy paths, edge cases, and error scenarios.
To reliably generate unit tests, this module should be used in conjunction with the Intent.UnitTesting module. The Intent.UnitTesting module generates the test infrastructure (e.g., test project and dependencies), while Intent.AI.UnitTests handles the test implementation.
To use this feature, ensure that the required User Settings have been completed — including a valid API key for your selected AI provider.

What Gets Generated

Comprehensive Test Implementations

The AI analyzes your handler and generates:
  • Multiple test methods covering different scenarios
  • Happy path tests for successful execution
  • Edge case tests for boundary conditions
  • Error scenario tests for exception handling
  • Proper mocking setup for all dependencies
  • Complete arrange-act-assert structure
  • Descriptive test names following conventions
Example generated tests:
Application.Tests/CreateCustomerCommandHandlerTests.cs
using System.Linq.Expressions;
using Moq;
using Xunit;

public class CreateCustomerCommandHandlerTests
{
    private readonly Mock<ICustomerRepository> _customerRepositoryMock;
    private readonly CreateCustomerCommandHandler _handler;

    public CreateCustomerCommandHandlerTests()
    {
        _customerRepositoryMock = new Mock<ICustomerRepository>();
        _handler = new CreateCustomerCommandHandler(_customerRepositoryMock.Object);
    }

    [Fact]
    public async Task Handle_ValidCommand_CreatesCustomerSuccessfully()
    {
        // Arrange
        var command = new CreateCustomerCommand
        {
            Name = "John Doe",
            Email = "[email protected]",
            PhoneNumber = "+1234567890"
        };

        // Act
        var result = await _handler.Handle(command, CancellationToken.None);

        // Assert
        Assert.NotEqual(Guid.Empty, result);
        _customerRepositoryMock.Verify(
            x => x.Add(It.Is<Customer>(c => 
                c.Name == command.Name && 
                c.Email == command.Email
            )), 
            Times.Once
        );
        _customerRepositoryMock.Verify(
            x => x.UnitOfWork.SaveChangesAsync(It.IsAny<CancellationToken>()), 
            Times.Once
        );
    }

    [Fact]
    public async Task Handle_EmptyName_ThrowsValidationException()
    {
        // Arrange
        var command = new CreateCustomerCommand
        {
            Name = string.Empty,
            Email = "[email protected]"
        };

        // Act & Assert
        await Assert.ThrowsAsync<ValidationException>(
            () => _handler.Handle(command, CancellationToken.None)
        );
    }

    [Fact]
    public async Task Handle_InvalidEmail_ThrowsValidationException()
    {
        // Arrange
        var command = new CreateCustomerCommand
        {
            Name = "John Doe",
            Email = "invalid-email"
        };

        // Act & Assert
        await Assert.ThrowsAsync<ValidationException>(
            () => _handler.Handle(command, CancellationToken.None)
        );
    }

    [Fact]
    public async Task Handle_DuplicateEmail_ThrowsConflictException()
    {
        // Arrange
        var command = new CreateCustomerCommand
        {
            Name = "John Doe",
            Email = "[email protected]"
        };

        _customerRepositoryMock
            .Setup(x => x.FindAsync(
                It.IsAny<Expression<Func<Customer, bool>>>(), 
                It.IsAny<CancellationToken>()
            ))
            .ReturnsAsync(new Customer { Email = command.Email });

        // Act & Assert
        await Assert.ThrowsAsync<ConflictException>(
            () => _handler.Handle(command, CancellationToken.None)
        );
    }
}

Installation

Prerequisites

  • The Intent.UnitTesting module installed and configured
  • An Intent Architect application with implemented handlers
  • An AI provider account (OpenAI, Azure OpenAI, or Anthropic)
  • Valid API key configured in User Settings

Installation Steps

1. Install the Intent.UnitTesting module: first, install it if it is not already installed. This provides the test infrastructure.
2. Install the Intent.AI.UnitTests module: in Intent Architect, right-click your application and select Manage Modules. Search for Intent.AI.UnitTests and install it.
3. Configure the AI provider: go to Tools > User Settings > AI Configuration and enter your API key.
4. Apply the Unit Test stereotype: in the Services Designer, apply the Unit Test stereotype to the Commands/Queries you want to test.
5. Run the Software Factory: generate the test stub files and infrastructure.

Usage

Basic Workflow

1. Ensure the handler is implemented: the handler method should have a complete implementation (manually written or AI-generated via Intent.AI.AutoImplementation).
2. Apply the Unit Test stereotype: in the Services Designer, right-click a Command or Query and apply the Unit Test stereotype.
3. Run the Software Factory: execute the Software Factory to generate the test stub file.
4. Generate unit tests with AI: right-click the Command or Query and select Generate Unit Tests with AI.
5. Review and apply the changes: the AI generates comprehensive test implementations. Review the code diff and apply the changes.

Influencing Factors

The quality, relevance, and output location of the generated tests depend on several key factors.

Intent Modeling

Before using Generate Unit Tests with AI, make sure:
  • Generated Code is up-to-date: Run the Software Factory to apply all outstanding code changes
  • Command/Query is mapped: Ensure the Command or Query is associated with the appropriate Entity using a Create Entity, Update Entity, or Query Entity action
  • Unit Test infrastructure is set up: Ensure the Intent.UnitTesting module has been installed and that the Command or Query has the Unit Test stereotype applied
  • Handler is implemented: The handler should have actual logic to test (not just an empty method)

Adjusting the Prompt

While Intent Architect supplies a default prompt and relevant file context to the AI provider, you can optionally add custom instructions to refine the generated unit tests. Example prompts:
  • Include tests for concurrent access scenarios with the same customer ID.
  • Add performance tests to ensure the query completes within 100ms.
  • Test all possible order status transitions.
It’s recommended to try the default implementation first. If needed, rerun with added context to improve results.
AI responses are not deterministic — each execution may produce different results. Use the additional context prompt to guide the AI toward your desired test coverage.

Examples

Testing a Create Command

Handler to Test:
CreateOrderCommandHandler.cs
public async Task<Guid> Handle(CreateOrderCommand request, CancellationToken cancellationToken)
{
    var customer = await _customerRepository.FindByIdAsync(request.CustomerId, cancellationToken);
    if (customer == null)
        throw new NotFoundException($"Customer {request.CustomerId} not found");

    var order = new Order
    {
        CustomerId = request.CustomerId,
        OrderDate = DateTime.UtcNow,
        Status = OrderStatus.Pending
    };

    _orderRepository.Add(order);
    await _orderRepository.UnitOfWork.SaveChangesAsync(cancellationToken);

    return order.Id;
}
AI-Generated Tests:
CreateOrderCommandHandlerTests.cs
public class CreateOrderCommandHandlerTests
{
    private readonly Mock<IOrderRepository> _orderRepositoryMock;
    private readonly Mock<ICustomerRepository> _customerRepositoryMock;
    private readonly CreateOrderCommandHandler _handler;

    public CreateOrderCommandHandlerTests()
    {
        _orderRepositoryMock = new Mock<IOrderRepository>();
        _customerRepositoryMock = new Mock<ICustomerRepository>();
        _handler = new CreateOrderCommandHandler(
            _orderRepositoryMock.Object,
            _customerRepositoryMock.Object
        );
    }

    [Fact]
    public async Task Handle_ValidRequest_CreatesOrder()
    {
        // Arrange
        var customerId = Guid.NewGuid();
        var customer = new Customer { Id = customerId, Name = "Test Customer" };
        _customerRepositoryMock
            .Setup(x => x.FindByIdAsync(customerId, default))
            .ReturnsAsync(customer);

        var command = new CreateOrderCommand { CustomerId = customerId };

        // Act
        var result = await _handler.Handle(command, CancellationToken.None);

        // Assert
        Assert.NotEqual(Guid.Empty, result);
        _orderRepositoryMock.Verify(
            x => x.Add(It.Is<Order>(o => 
                o.CustomerId == customerId && 
                o.Status == OrderStatus.Pending
            )), 
            Times.Once
        );
    }

    [Fact]
    public async Task Handle_NonExistentCustomer_ThrowsNotFoundException()
    {
        // Arrange
        var customerId = Guid.NewGuid();
        _customerRepositoryMock
            .Setup(x => x.FindByIdAsync(customerId, default))
            .ReturnsAsync((Customer?)null);

        var command = new CreateOrderCommand { CustomerId = customerId };

        // Act & Assert
        var exception = await Assert.ThrowsAsync<NotFoundException>(
            () => _handler.Handle(command, CancellationToken.None)
        );
        Assert.Contains(customerId.ToString(), exception.Message);
    }

    [Fact]
    public async Task Handle_ValidRequest_SavesChanges()
    {
        // Arrange
        var customerId = Guid.NewGuid();
        _customerRepositoryMock
            .Setup(x => x.FindByIdAsync(customerId, default))
            .ReturnsAsync(new Customer { Id = customerId });

        var command = new CreateOrderCommand { CustomerId = customerId };

        // Act
        await _handler.Handle(command, CancellationToken.None);

        // Assert
        _orderRepositoryMock.Verify(
            x => x.UnitOfWork.SaveChangesAsync(It.IsAny<CancellationToken>()), 
            Times.Once
        );
    }
}

Testing a Query Handler

Handler to Test:
GetCustomerByIdQueryHandler.cs
public async Task<CustomerDto> Handle(GetCustomerByIdQuery request, CancellationToken cancellationToken)
{
    var customer = await _customerRepository.FindByIdAsync(request.Id, cancellationToken);
    if (customer == null)
        throw new NotFoundException($"Customer {request.Id} not found");

    return new CustomerDto
    {
        Id = customer.Id,
        Name = customer.Name,
        Email = customer.Email,
        TotalOrders = customer.Orders.Count
    };
}
AI-Generated Tests:
GetCustomerByIdQueryHandlerTests.cs
public class GetCustomerByIdQueryHandlerTests
{
    private readonly Mock<ICustomerRepository> _customerRepositoryMock;
    private readonly GetCustomerByIdQueryHandler _handler;

    public GetCustomerByIdQueryHandlerTests()
    {
        _customerRepositoryMock = new Mock<ICustomerRepository>();
        _handler = new GetCustomerByIdQueryHandler(_customerRepositoryMock.Object);
    }

    [Fact]
    public async Task Handle_ExistingCustomer_ReturnsCustomerDto()
    {
        // Arrange
        var customerId = Guid.NewGuid();
        var customer = new Customer
        {
            Id = customerId,
            Name = "John Doe",
            Email = "[email protected]",
            Orders = new List<Order> 
            { 
                new Order(), 
                new Order() 
            }
        };

        _customerRepositoryMock
            .Setup(x => x.FindByIdAsync(customerId, default))
            .ReturnsAsync(customer);

        var query = new GetCustomerByIdQuery { Id = customerId };

        // Act
        var result = await _handler.Handle(query, CancellationToken.None);

        // Assert
        Assert.NotNull(result);
        Assert.Equal(customerId, result.Id);
        Assert.Equal("John Doe", result.Name);
        Assert.Equal("[email protected]", result.Email);
        Assert.Equal(2, result.TotalOrders);
    }

    [Fact]
    public async Task Handle_NonExistentCustomer_ThrowsNotFoundException()
    {
        // Arrange
        var customerId = Guid.NewGuid();
        _customerRepositoryMock
            .Setup(x => x.FindByIdAsync(customerId, default))
            .ReturnsAsync((Customer?)null);

        var query = new GetCustomerByIdQuery { Id = customerId };

        // Act & Assert
        var exception = await Assert.ThrowsAsync<NotFoundException>(
            () => _handler.Handle(query, CancellationToken.None)
        );
        Assert.Contains(customerId.ToString(), exception.Message);
    }

    [Fact]
    public async Task Handle_CustomerWithNoOrders_ReturnsZeroOrders()
    {
        // Arrange
        var customerId = Guid.NewGuid();
        var customer = new Customer
        {
            Id = customerId,
            Name = "Jane Doe",
            Email = "[email protected]",
            Orders = new List<Order>()
        };

        _customerRepositoryMock
            .Setup(x => x.FindByIdAsync(customerId, default))
            .ReturnsAsync(customer);

        var query = new GetCustomerByIdQuery { Id = customerId };

        // Act
        var result = await _handler.Handle(query, CancellationToken.None);

        // Assert
        Assert.Equal(0, result.TotalOrders);
    }
}

Code Changes Review

Once the AI agent completes the task, the suggested code changes are displayed for review. You can:
  • Review all generated test methods
  • Verify test coverage includes edge cases
  • Accept or reject the implementation
  • Rerun with additional context for more specific scenarios

Execution Output

Full logs of the execution, including the AI prompt and any errors, are available in the Execution tab.

Best Practices

Implement Handlers First

Ensure your handler has a complete implementation before generating tests. Empty methods result in only basic tests.

Review Generated Tests

AI-generated tests should be reviewed for correctness and alignment with your testing standards.

Use Custom Prompts

Guide the AI to cover specific scenarios relevant to your business logic.

Run Tests Frequently

Integrate generated tests into your CI/CD pipeline to catch regressions early.

Integration with Other Modules

Unit Testing

Provides the test infrastructure and project setup.

Auto Implementation

Generate handler implementations first, then generate tests.

Azure Pipelines

Run generated tests automatically in CI/CD.
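
A minimal sketch of such a pipeline step is shown below. The project glob and configuration flag are illustrative assumptions about a typical solution layout, not output of the module; adjust them to match your own pipeline.

```yaml
# Illustrative Azure Pipelines step that runs the generated unit tests.
# The 'projects' pattern assumes test projects follow the *.Tests.csproj convention.
steps:
  - task: DotNetCoreCLI@2
    displayName: 'Run generated unit tests'
    inputs:
      command: 'test'
      projects: '**/*.Tests.csproj'
      arguments: '--configuration Release'
```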

FluentValidation

Tests include validation scenarios when validators are present.

Next Steps

  • Auto Implementation: generate handler implementations with AI
  • Integration Tests: add end-to-end API testing
  • CI/CD Setup: automate test execution
