TechCal uses Vitest for fast unit and integration testing. This guide covers running tests, writing new tests, and understanding the test infrastructure.

Running Tests

Run All Tests

Execute the entire test suite:
npm run test
This runs all unit and integration tests in src/**/__tests__/*.test.ts files.

Watch Mode

Run tests in interactive watch mode (automatically re-runs tests on file changes):
npm run test:watch
Watch mode is ideal for active development:
  • Tests re-run automatically when files change
  • Filter tests by filename or pattern
  • Press a to run all tests, f to run only failed tests

Coverage Report

Generate a code coverage report:
npm run test:coverage
Coverage reports are generated in the coverage/ directory:
  • coverage/index.html - Interactive HTML report
  • coverage/lcov.info - LCOV format for CI tools
TechCal uses @vitest/coverage-v8 for coverage collection. The configuration is in vitest.config.mts.
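
If you want the suite to fail when coverage drops below a minimum, Vitest supports thresholds inside the same `coverage` block. A sketch with illustrative numbers (these are not TechCal's actual budgets):

```typescript
// vitest.config.mts (excerpt) — threshold values below are hypothetical
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 70,
        statements: 80,
      },
    },
  },
});
```

With thresholds set, `npm run test:coverage` exits non-zero when any metric falls below its floor, which makes coverage regressions visible in CI.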

Specialized Test Suites

Scoring Algorithm Tests

Validate core recommendation algorithms:
npm run test:scoring
This runs:
  • Alignment Core Parity Tests (src/lib/__tests__/alignmentCore.parity.test.ts) - Validates base scoring logic
  • Filtered API Contract Tests (src/app/api/__tests__/filtered.contract.test.ts) - Ensures API consistency
  • Advanced Reranking Tests (src/services/recommendations/__tests__/rerankAdvanced.test.ts) - Tests behavioral reranking
Scoring tests must pass before deployment. These tests ensure recommendation quality and algorithm consistency.

Scoring Benchmarks

Measure scoring performance:
npm run bench:scoring
Runs performance benchmarks in scripts/benchmark-scoring.ts to ensure scoring algorithms meet performance budgets.
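
The benchmark script itself is not reproduced in this guide, but the core pattern is a timed loop over the scorer. A minimal, self-contained sketch (the toy scorer and iteration count are placeholders, not the real `computeEventScore`):

```typescript
// Toy stand-in for the real scorer; the actual benchmark targets computeEventScore.
function toyScore(a: string[], b: string[]): number {
  const set = new Set(a.map((s) => s.toLowerCase()));
  return b.filter((s) => set.has(s.toLowerCase())).length / Math.max(b.length, 1);
}

const iterations = 10_000;
const start = performance.now();
for (let i = 0; i < iterations; i++) {
  toyScore(['react', 'typescript'], ['React', 'Python']);
}
const elapsedMs = performance.now() - start;
const perCallMs = elapsedMs / iterations;
console.log(`avg ${perCallMs.toFixed(4)} ms/call`);
// A real benchmark would compare perCallMs against a budget and exit non-zero on breach.
```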

Test Infrastructure

Configuration

Vitest configuration is in vitest.config.mts:
import { defineConfig } from 'vitest/config';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  test: {
    globals: true,
    environment: 'jsdom',
    setupFiles: ['./vitest.setup.ts'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
    },
  },
});

Setup File

Global test setup in vitest.setup.ts:
  • Configures Testing Library matchers
  • Sets up jsdom environment
  • Mocks browser APIs
  • Loads environment variables

Test Utilities

Tests commonly rely on these library helpers:
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { expect, describe, it, vi } from 'vitest';

Writing Tests

Unit Test Example

Test a utility function:
src/utils/__tests__/dateUtils.test.ts
import { describe, it, expect } from 'vitest';
import { formatEventDate } from '../dateUtils';

describe('dateUtils', () => {
  describe('formatEventDate', () => {
    it('formats date correctly', () => {
      const date = new Date('2026-03-15T10:00:00Z');
      const formatted = formatEventDate(date);
      expect(formatted).toBe('Mar 15, 2026');
    });

    it('handles invalid dates', () => {
      expect(() => formatEventDate(null)).toThrow();
    });
  });
});
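
`formatEventDate` itself is not shown in this guide; an implementation consistent with the test above might look like this (hypothetical sketch using `Intl` with a fixed UTC timezone):

```typescript
// Hypothetical implementation matching the expectations in the test above.
function formatEventDate(date: Date | null): string {
  if (date === null || Number.isNaN(date.getTime())) {
    throw new Error('formatEventDate: invalid date');
  }
  return new Intl.DateTimeFormat('en-US', {
    month: 'short',
    day: 'numeric',
    year: 'numeric',
    timeZone: 'UTC',
  }).format(date);
}

console.log(formatEventDate(new Date('2026-03-15T10:00:00Z'))); // "Mar 15, 2026"
```

Pinning the timezone keeps the test deterministic regardless of the machine's locale settings.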

Component Test Example

Test a React component:
src/components/__tests__/EventCard.test.tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { EventCard } from '../EventCard';

describe('EventCard', () => {
  const mockEvent = {
    id: '1',
    title: 'React Conf 2026',
    start_date: '2026-05-15',
    event_type: 'conference',
  };

  it('renders event title', () => {
    render(<EventCard event={mockEvent} />);
    expect(screen.getByText('React Conf 2026')).toBeInTheDocument();
  });

  it('calls onClick when clicked', async () => {
    const user = userEvent.setup();
    const handleClick = vi.fn();
    
    render(<EventCard event={mockEvent} onClick={handleClick} />);
    await user.click(screen.getByRole('article'));
    
    expect(handleClick).toHaveBeenCalledWith(mockEvent);
  });
});

Service Test Example

Test a service with mocked Supabase:
src/services/__tests__/eventServices.test.ts
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { getEventById } from '../eventServices';
import { createClient } from '@supabase/supabase-js';

// Mock Supabase client
vi.mock('@supabase/supabase-js', () => ({
  createClient: vi.fn(() => ({
    from: vi.fn(() => ({
      select: vi.fn(() => ({
        eq: vi.fn(() => ({
          single: vi.fn(() => Promise.resolve({
            data: { id: '1', title: 'Test Event' },
            error: null,
          })),
        })),
      })),
    })),
  })),
}));

describe('eventService', () => {
  it('fetches event by id', async () => {
    const supabase = createClient('url', 'key');
    const event = await getEventById(supabase, '1');
    
    expect(event).toEqual({ id: '1', title: 'Test Event' });
  });
});
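
For reference, a `getEventById` compatible with both the mock and the test might look like this. This is a hypothetical sketch (the real service lives in src/services/eventServices.ts), shown with a hand-rolled stub instead of Vitest mocks so it runs in plain Node:

```typescript
// Minimal structural type for the slice of the Supabase client this sketch uses.
type MinimalClient = {
  from: (table: string) => {
    select: (cols: string) => {
      eq: (col: string, val: string) => {
        single: () => Promise<{ data: unknown; error: { message: string } | null }>;
      };
    };
  };
};

// Hypothetical implementation: fetch one row from the events table by id.
async function getEventById(supabase: MinimalClient, id: string) {
  const { data, error } = await supabase.from('events').select('*').eq('id', id).single();
  if (error) throw new Error(error.message);
  return data;
}

// Hand-rolled stub exercising the same chain the vi.mock factory above fakes.
const stub: MinimalClient = {
  from: () => ({
    select: () => ({
      eq: () => ({
        single: async () => ({ data: { id: '1', title: 'Test Event' }, error: null }),
      }),
    }),
  }),
};
```

Because the service only depends on the `from().select().eq().single()` chain, any object with that shape works as a test double.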

Scoring Algorithm Test

Test recommendation scoring:
src/lib/__tests__/alignmentCore.test.ts
import { describe, it, expect } from 'vitest';
import { computeEventScore } from '../recommendation/baseScorer';

describe('baseScorer', () => {
  const userProfile = {
    current_role: 'Software Engineer',
    career_goals: ['Learn AI'],
    skills: ['JavaScript', 'React'],
  };

  const event = {
    title: 'AI/ML Conference',
    skills_featured: ['Machine Learning', 'Python'],
    topics: ['Artificial Intelligence'],
  };

  it('scores event based on goal alignment', () => {
    const result = computeEventScore(event, userProfile);
    
    expect(result.goalScore).toBeGreaterThan(0.5);
    expect(result.overallScore).toBeGreaterThan(0);
    expect(result.triggers).toContain('goal_alignment');
  });

  it('includes skill matching in score', () => {
    const eventWithSkills = {
      ...event,
      skills_featured: ['React', 'JavaScript'],
    };
    
    const result = computeEventScore(eventWithSkills, userProfile);
    expect(result.skillScore).toBeGreaterThan(0.5);
  });
});
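
The internals of `computeEventScore` are not documented here, but the overlap-style scoring the tests assume can be sketched as plain set intersection. This is illustrative only; the real scorer in src/lib/recommendation/baseScorer is more involved:

```typescript
// Illustrative token-overlap (Jaccard) scorer — NOT the real baseScorer.
function jaccard(a: string[], b: string[]): number {
  const A = new Set(a.map((s) => s.toLowerCase()));
  const B = new Set(b.map((s) => s.toLowerCase()));
  let inter = 0;
  for (const t of B) if (A.has(t)) inter += 1;
  const union = new Set([...A, ...B]).size;
  return union === 0 ? 0 : inter / union;
}

// Skill alignment for the eventWithSkills case in the test above:
const skillScore = jaccard(['JavaScript', 'React'], ['React', 'JavaScript']);
console.log(skillScore); // 1 — full overlap, comfortably above the 0.5 threshold
```

A scorer like this makes the test's `toBeGreaterThan(0.5)` assertions easy to reason about: identical skill lists score 1, disjoint lists score 0.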

Test Organization

Directory Structure

src/
├── lib/
│   └── __tests__/
│       ├── alignmentCore.test.ts
│       └── alignmentCore.parity.test.ts
├── services/
│   ├── eventServices.ts
│   └── __tests__/
│       └── eventServices.test.ts
├── hooks/
│   ├── useCareerProfile.ts
│   └── __tests__/
│       └── useCareerProfile.test.ts
└── components/
    ├── EventCard.tsx
    └── __tests__/
        └── EventCard.test.tsx

Naming Conventions

  • Test files: *.test.ts or *.test.tsx
  • Test directory: __tests__/ alongside source files
  • Parity tests: *.parity.test.ts (validates algorithm consistency)
  • Contract tests: *.contract.test.ts (validates API interfaces)

Mocking Strategies

Mock Supabase Client

import { vi } from 'vitest';

const mockSupabase = {
  from: vi.fn(() => ({
    select: vi.fn(() => ({
      eq: vi.fn(() => Promise.resolve({ data: [], error: null })),
    })),
  })),
  auth: {
    getUser: vi.fn(() => Promise.resolve({ data: { user: null }, error: null })),
  },
};

Mock Environment Variables

import { beforeEach, afterEach } from 'vitest';

let originalEnv: string | undefined;

beforeEach(() => {
  originalEnv = process.env.DISCOVERY_SCORING;
  process.env.DISCOVERY_SCORING = 'server';
});

afterEach(() => {
  // Restore exactly: assigning undefined would store the string "undefined".
  if (originalEnv === undefined) delete process.env.DISCOVERY_SCORING;
  else process.env.DISCOVERY_SCORING = originalEnv;
});
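
The save/restore pattern can also be packaged as a helper. Note the subtlety it guards against: a variable that was originally unset must be deleted, because assigning `undefined` to a `process.env` key stores the string `"undefined"`. A hypothetical helper (not part of TechCal's utilities):

```typescript
// Hypothetical helper: run fn with NAME temporarily set, then restore exactly.
function withEnv<T>(name: string, value: string, fn: () => T): T {
  const original = process.env[name];
  process.env[name] = value;
  try {
    return fn();
  } finally {
    // delete, rather than assign undefined, when the variable was unset before
    if (original === undefined) delete process.env[name];
    else process.env[name] = original;
  }
}
```

Usage: `withEnv('DISCOVERY_SCORING', 'server', () => runScoringUnderTest())`, where `runScoringUnderTest` stands in for whatever code you are exercising.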

Mock React Query

import { vi } from 'vitest';

// vi.mock is hoisted above imports, so the factory must not reference
// variables declared later in the file (use vi.hoisted for shared fixtures).
vi.mock('@tanstack/react-query', () => ({
  useQuery: vi.fn(() => ({
    data: { id: '1', title: 'Test Event' },
    isLoading: false,
    error: null,
  })),
}));

Pre-Release Verification

Before deploying, run the complete verification suite:
npm run verify:all
This runs:
  1. Production verification - Validates production config
  2. Budget filtering tests - Ensures filtering works correctly
  3. Analytics validation - Checks telemetry integration
Always run npm run verify:all before deploying to production. This catches configuration issues and algorithm regressions.

Continuous Integration

TechCal runs tests automatically on CI:
.github/workflows/test.yml
name: Test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run test
      - run: npm run test:scoring

Debugging Tests

Run Single Test File

npm run test -- src/lib/__tests__/alignmentCore.test.ts

Run Tests Matching Pattern

npm run test -- -t "scoring"
The -t (--testNamePattern) flag filters by test name; a bare positional argument (as in the single-file example above) filters by file path instead.

Enable Debug Output

import { describe, it } from 'vitest';

describe('MyTest', () => {
  it('debugs values', () => {
    const myValue = { step: 'intermediate result' }; // placeholder for the value under inspection
    console.log('Debug output:', myValue);
    // Test continues...
  });
});

Use Vitest UI

For visual test debugging, install and run Vitest UI:
npm install -D @vitest/ui
npx vitest --ui

Common Issues

Tests Timeout

Increase the timeout for slow tests by passing a duration in milliseconds as the third argument:
it('slow operation', async () => {
  // Test code...
}, 10000); // 10-second timeout

Module Resolution Errors

Ensure vitest.config.mts has correct path aliases. Resolve the alias relative to the config file rather than hard-coding an absolute path like '/src', which points at the filesystem root:
import { fileURLToPath } from 'node:url';

export default defineConfig({
  resolve: {
    alias: {
      '@': fileURLToPath(new URL('./src', import.meta.url)),
    },
  },
});

jsdom Limitations

Some browser APIs aren’t available in jsdom. Mock them:
Object.defineProperty(window, 'matchMedia', {
  writable: true,
  value: vi.fn((query: string) => ({
    matches: false,
    media: query,
    onchange: null,
    addListener: vi.fn(), // deprecated, but some libraries still call it
    removeListener: vi.fn(), // deprecated
    addEventListener: vi.fn(),
    removeEventListener: vi.fn(),
    dispatchEvent: vi.fn(),
  })),
});

Next Steps

E2E Tests

Run end-to-end tests with Playwright

Deployment

Deploy with confidence after testing
