
Testing Philosophy

Aurora prioritizes practical testing approaches that ensure reliability without hindering development velocity. Before adding new test infrastructure, consult with the team to align on testing strategies.

Manual Testing

Manual testing is currently the primary testing method for Aurora. This guide covers how to effectively test your changes.

Starting the Test Environment

Start Aurora in development mode to test your changes:
make dev
This starts all services with hot reloading enabled:
  • Backend changes auto-reload in Flask
  • Frontend changes rebuild instantly with Turbopack

Viewing Logs

Monitor logs to debug issues:
# All services
make logs

# Specific service
make logs frontend
make logs aurora-server
make logs celery_worker

# Follow logs in real-time
docker logs -f aurora-celery_worker-1

Testing Checklist

When testing your changes, verify:

Backend Changes

  • Service starts successfully: No errors in startup logs
  • API endpoints respond: Test with curl or Postman
  • Database operations work: Check PostgreSQL logs
  • Error handling: Test with invalid inputs
  • Logging is appropriate: No excessive debug logs
  • Security: Verify authentication/authorization

Frontend Changes

  • Page loads without errors: Check browser console
  • Responsive design: Test on different screen sizes
  • User interactions work: Click buttons, submit forms
  • Error states: Test error handling and user feedback
  • Accessibility: Keyboard navigation, screen reader support
  • Cross-browser compatibility: Test on Chrome, Firefox, Safari

Integration Testing

  • End-to-end flows: Test complete user journeys
  • Cloud provider integrations: Test with actual credentials
  • WebSocket communication: Test chatbot interactions
  • Background tasks: Verify Celery tasks complete
  • State persistence: Check data saves correctly

Testing Cloud Integrations

AWS Integration Testing

  1. Configure credentials in Vault or .env:
AWS_ACCESS_KEY_ID=your_key
AWS_SECRET_ACCESS_KEY=your_secret
AWS_REGION=us-east-1
  2. Test resource listing:
curl -X GET http://localhost:5080/api/aws/ec2/instances \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
  3. Verify in UI: Navigate to the AWS dashboard and check that resources are displayed

GCP Integration Testing

  1. Place service account JSON in server/connectors/gcp_connector/:
cp your-service-account.json server/connectors/gcp_connector/service-account.json
  2. Set project ID in .env:
GCP_PROJECT_ID=your-project-id
  3. Test via chatbot: Ask “List my GCP compute instances”

Azure Integration Testing

  1. Configure Azure credentials:
AZURE_TENANT_ID=your_tenant
AZURE_CLIENT_ID=your_client
AZURE_CLIENT_SECRET=your_secret
  2. Test subscription listing:
curl -X GET http://localhost:5080/api/azure/subscriptions \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"

Testing the Chatbot

WebSocket Connection Testing

  1. Open browser console on http://localhost:3000/chat
  2. Check WebSocket connection: Look for connection established message
  3. Send a test message: sending “Hello” should receive a response
  4. Monitor backend logs:
make logs chatbot

Agent Tool Testing

Test agent tools by asking specific questions:
# Cloud resource queries
"List my AWS EC2 instances"
"Show GCP billing for this month"
"What Azure resources are running?"

# Infrastructure operations
"Create a new GCP VM named test-vm"
"Delete the AWS instance i-1234567890abcdef0"
"Scale my Kubernetes deployment to 3 replicas"

# Knowledge base queries
"How do I set up AWS billing alerts?"
"What is the best practice for GCP networking?"

LangGraph Workflow Testing

Monitor the agent workflow in logs:
make logs chatbot
Look for:
  • Agent state transitions
  • Tool invocations
  • LLM API calls
  • Error handling
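When eyeballing long log streams gets tedious, a small parser can pull these events out. A sketch using a hypothetical log format (the `state:` and `invoking tool:` patterns below are illustrative, not Aurora's actual log lines; adjust the regexes to match what you see in `make logs chatbot`):

```python
import re

# Hypothetical log line patterns; adapt to the actual log format.
PATTERNS = {
    "transition": re.compile(r"state:\s*(\w+)\s*->\s*(\w+)"),
    "tool": re.compile(r"invoking tool:\s*(\w+)"),
}

def scan(lines):
    """Extract agent state transitions and tool invocations from log output."""
    events = []
    for line in lines:
        for kind, pattern in PATTERNS.items():
            match = pattern.search(line)
            if match:
                events.append((kind, match.groups()))
    return events
```

Pipe captured log output through `scan` to get a compact timeline of what the agent did during a conversation.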

Testing Database Operations

PostgreSQL Testing

  1. Connect to database:
docker exec -it postgres psql -U aurora_user -d aurora_db
  2. Verify schema:
\dt  -- List tables
\d users  -- Describe users table
  3. Check data:
SELECT * FROM users;
SELECT * FROM projects;
  4. Exit:
\q

Weaviate Vector Database Testing

  1. Access the Weaviate REST API: http://localhost:8080/v1
  2. List stored objects:
curl http://localhost:8080/v1/objects
  3. Verify embeddings: Check that knowledge base documents are indexed
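To check indexing programmatically, count the objects the REST endpoint returns. A stdlib-only sketch (port 8080 as above):

```python
import json
import urllib.request

WEAVIATE = "http://localhost:8080"  # default port used in this guide

def fetch_objects() -> dict:
    """Fetch the raw /v1/objects payload from a running Weaviate instance."""
    with urllib.request.urlopen(f"{WEAVIATE}/v1/objects", timeout=10) as resp:
        return json.load(resp)

def count_objects(payload: dict) -> int:
    """Count objects in a /v1/objects response; 0 means nothing is indexed."""
    return len(payload.get("objects", []))
```

A count of zero after ingesting knowledge base documents points to an indexing problem.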

Testing Vault Integration

Store Test Secret

export VAULT_ADDR="http://localhost:8200"
export VAULT_TOKEN="your_root_token"

vault kv put aurora/users/test-secret \
  aws_access_key="AKIAIOSFODNN7EXAMPLE" \
  aws_secret_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

Retrieve Test Secret

vault kv get aurora/users/test-secret

Test in Application

Verify secrets are resolved correctly:
# Check logs for secret resolution
make logs aurora-server | grep -i vault

Testing Storage (SeaweedFS)

Access Web Interface

Test S3 API

# Using AWS CLI with SeaweedFS
aws configure set aws_access_key_id admin
aws configure set aws_secret_access_key admin
aws configure set region us-east-1

# Create bucket
aws --endpoint-url http://localhost:8333 s3 mb s3://test-bucket

# Upload file
echo "test content" > test.txt
aws --endpoint-url http://localhost:8333 s3 cp test.txt s3://test-bucket/

# List files
aws --endpoint-url http://localhost:8333 s3 ls s3://test-bucket/

# Download file
aws --endpoint-url http://localhost:8333 s3 cp s3://test-bucket/test.txt downloaded.txt

Test in Application

from utils.storage.storage import get_storage_manager

storage = get_storage_manager()

# Upload test file
storage.upload_file(
    bucket="test-bucket",
    key="test/file.txt",
    file_path="/tmp/test.txt"
)

# Download test file
storage.download_file(
    bucket="test-bucket",
    key="test/file.txt",
    destination="/tmp/downloaded.txt"
)

Testing Celery Background Tasks

Monitor Task Queue

# View Celery worker logs
make logs celery_worker

# Connect to Redis CLI
docker exec -it redis redis-cli
AUTH your_redis_password

# Check queue length
LLEN celery

# Exit
EXIT
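The same queue-length check can be scripted without redis-cli. A minimal sketch that speaks the RESP wire protocol over a plain socket (stdlib only; the host, port, and `celery` queue name are assumptions for a default local setup):

```python
import socket

def encode_resp(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings."""
    out = f"*{len(parts)}\r\n".encode()
    for part in parts:
        out += f"${len(part.encode())}\r\n{part}\r\n".encode()
    return out

def queue_length(password: str, host: str = "localhost", port: int = 6379) -> int:
    """AUTH, then return the LLEN of the default 'celery' queue."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(encode_resp("AUTH", password))
        sock.recv(1024)  # expect +OK
        sock.sendall(encode_resp("LLEN", "celery"))
        reply = sock.recv(1024)  # integer reply, e.g. b":0\r\n"
        return int(reply.strip().lstrip(b":"))
```

For anything beyond a quick check, a proper client library (redis-py) is the better choice; this just avoids shelling into the container.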

Trigger Test Task

Trigger a background task and monitor execution:
# Trigger billing update (example)
curl -X POST http://localhost:5080/api/billing/refresh \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"

# Monitor task completion
make logs celery_worker

Testing Infrastructure Provisioning

Terraform Workflow Testing

  1. Request infrastructure via chatbot:
"Create a GCP VM with 2 CPUs and 4GB RAM"
  2. Review generated Terraform:
# Check Terraform files
ls /tmp/terraform_*
cat /tmp/terraform_*/main.tf
  3. Approve changes: Confirm in UI
  4. Monitor execution:
make logs celery_worker | grep -i terraform
  5. Verify in cloud console: Check resource was created
  6. Test cleanup:
# Cleanup script
make logs celery_worker | grep -i cleanup

Performance Testing

Monitor Resource Usage

# View container stats
docker stats

# Check memory usage
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"

# Check CPU usage
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}"
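To get these numbers into a script, drop the `table` directive so the columns are tab-separated, then parse the output. A sketch (assumes `docker` is on PATH):

```python
import subprocess

def parse_stats(output: str) -> dict[str, str]:
    """Parse tab-separated `docker stats` lines into {name: mem_usage}."""
    return dict(line.split("\t", 1)
                for line in output.strip().splitlines() if "\t" in line)

def container_memory() -> dict[str, str]:
    """Snapshot memory usage per running container."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream",
         "--format", "{{.Name}}\t{{.MemUsage}}"],
        capture_output=True, text=True, check=True)
    return parse_stats(out.stdout)
```

Useful for spotting a container whose memory keeps growing across repeated snapshots.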

Database Query Performance

-- Enable query timing
\timing on

-- Run test query
SELECT * FROM projects WHERE user_id = 'test-user';

-- Check slow queries (requires the pg_stat_statements extension)
SELECT * FROM pg_stat_statements 
ORDER BY mean_exec_time DESC 
LIMIT 10;

Frontend Performance

  1. Open Chrome DevTools (F12)
  2. Network tab: Check request timing
  3. Performance tab: Record page load
  4. Lighthouse: Run audit (Performance, Accessibility, Best Practices)

Security Testing

Authentication Testing

# Test without token (should fail)
curl -X GET http://localhost:5080/api/projects

# Test with invalid token (should fail)
curl -X GET http://localhost:5080/api/projects \
  -H "Authorization: Bearer invalid_token"

# Test with valid token (should succeed)
curl -X GET http://localhost:5080/api/projects \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"

Authorization Testing

Test that users can only access their own resources:
  1. Create two test users
  2. Create projects for each user
  3. Verify User A cannot access User B’s projects
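Step 3 can be automated. A sketch that checks a cross-tenant request is rejected (stdlib only; a per-project `/api/projects/{id}` path is an assumption based on the other endpoints in this guide):

```python
import urllib.error
import urllib.request

BASE = "http://localhost:5080"  # assumed from the curl examples in this guide

def get_status(path: str, token: str) -> int:
    """Return the HTTP status for a GET with the given bearer token."""
    req = urllib.request.Request(
        f"{BASE}{path}", headers={"Authorization": f"Bearer {token}"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def is_denied(status: int) -> bool:
    """Cross-tenant access should be rejected, or the resource hidden entirely."""
    return status in (401, 403, 404)
```

With both test users created, fetching one of User B's project IDs with User A's token should satisfy `is_denied`; a 200 here is an authorization bug.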

Input Validation Testing

Test with malicious inputs:
# SQL injection attempt
curl -X POST http://localhost:5080/api/projects \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"test'; DROP TABLE projects;--\"}"

# XSS attempt
curl -X POST http://localhost:5080/api/projects \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"<script>alert('XSS')</script>\"}"
Verify these attacks are properly sanitized.

Automated Testing (Future)

While Aurora currently focuses on manual testing, future test infrastructure may include:

Backend Testing (pytest)

# server/tests/test_projects.py
import pytest
from utils.db.db_utils import connect_to_db

def test_create_project():
    # Test implementation
    pass

def test_list_projects():
    # Test implementation
    pass

Frontend Testing (Playwright)

// client/tests/projects.spec.ts
import { test, expect } from '@playwright/test';

test('create new project', async ({ page }) => {
  await page.goto('http://localhost:3000/projects');
  await page.click('button:has-text("New Project")');
  await page.fill('input[name="name"]', 'Test Project');
  await page.click('button:has-text("Create")');
  await expect(page.locator('text=Test Project')).toBeVisible();
});

CI/CD Testing

Aurora includes CI/CD pipelines that run on every PR:
  • Environment validation: Checks Docker Compose files are in sync
  • Linting: Runs ESLint on frontend code
  • Build verification: Ensures containers build successfully
Monitor CI/CD status in your pull request.

Troubleshooting Test Issues

Services Won’t Start

# Check Docker status
docker ps -a

# View failed container logs
docker logs container_name

# Rebuild from scratch
make nuke
make dev

Database Connection Errors

# Check PostgreSQL is running
docker ps | grep postgres

# Check database logs
make logs postgres

# Verify credentials in .env
grep POSTGRES .env

WebSocket Connection Failures

# Check chatbot is running
docker ps | grep chatbot

# View chatbot logs
make logs chatbot

# Check the WebSocket URL (NEXT_PUBLIC_WEBSOCKET_URL is inlined at build
# time, so log it from application code; process.env is not defined in
# the browser console)
console.log(process.env.NEXT_PUBLIC_WEBSOCKET_URL)

Frontend Build Errors

# Clear Next.js cache
cd client
rm -rf .next
npm run build

# Reinstall dependencies
rm -rf node_modules package-lock.json
npm install

Backend Import Errors

# Verify Python dependencies
docker exec -it aurora-server pip list

# Rebuild server
make rebuild-server

Best Practices

  • Test early and often: Test changes as you make them
  • Test edge cases: Don’t just test the happy path
  • Test with real data: Use actual cloud credentials when possible
  • Document test scenarios: Record steps for reproducing issues
  • Clean up test resources: Delete test cloud resources after testing
  • Check logs: Always review logs for warnings/errors
  • Test cross-browser: Don’t assume all browsers behave the same
  • Test mobile: Verify responsive design on mobile devices

Next Steps

Development Setup

Set up your local development environment

Architecture

Learn about Aurora’s architecture

Contributing

Guidelines for contributing to Aurora
