## Schema overview

The database consists of two primary tables:

| Table | Purpose | Key Indexes |
|---|---|---|
| supported_currencies | Currencies available for conversion | Primary key on code |
| rate_history | Historical exchange rate records | from_currency, to_currency, timestamp |
The schema prioritizes write efficiency for rate history (frequent inserts) and read efficiency for currency lookups (cached in Redis).
## Table: supported_currencies
Stores the list of currencies supported by the API, determined by the intersection of all provider-supported currencies.
### Schema definition

From infrastructure/persistence/models/currency.py:12:
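A hedged sketch of what this model could look like — the class name, base class, and exact options are assumptions inferred from the column details below:

```python
from sqlalchemy import Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class SupportedCurrency(Base):
    """Hypothetical model mirroring the documented columns."""
    __tablename__ = "supported_currencies"

    # ISO 4217 currency code, e.g. "USD"; primary key
    code = Column(String(5), primary_key=True)
    # Optional human-readable name; currently left NULL
    name = Column(String(100), nullable=True)
```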
### SQL DDL equivalent
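A hedged DDL sketch consistent with the column details below (the exact generated DDL varies by dialect):

```sql
CREATE TABLE supported_currencies (
    code VARCHAR(5) PRIMARY KEY,  -- ISO 4217 currency code
    name VARCHAR(100)             -- optional human-readable name
);
```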
### Column details

| Column | Type | Constraints | Description |
|---|---|---|---|
| code | VARCHAR(5) | PRIMARY KEY | ISO 4217 currency code (e.g., “USD”, “EUR”) |
| name | VARCHAR(100) | NULLABLE | Human-readable currency name (optional) |
Currency names are currently NULL as the system only uses codes for validation. Future versions may populate names for UI display.

### Sample data
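Illustrative rows only — the real contents depend on the intersection of provider-supported currencies:

```sql
INSERT INTO supported_currencies (code, name) VALUES
    ('USD', NULL),
    ('EUR', NULL),
    ('GBP', NULL);
```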
## Table: rate_history
Stores every exchange rate fetched from providers, creating an audit trail and enabling historical analysis.
### Schema definition

From infrastructure/persistence/models/currency.py:19:
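A hedged sketch of what this model could look like. Index names come from the Indexes table below; the column set of the unique constraint is an assumption:

```python
from sqlalchemy import (Column, DateTime, Index, Integer, Numeric, String,
                        UniqueConstraint)
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class RateHistory(Base):
    """Hypothetical model mirroring the column and index tables below."""
    __tablename__ = "rate_history"
    __table_args__ = (
        Index("idx_from_currency", "from_currency"),
        Index("idx_to_currency", "to_currency"),
        Index("ix_rate_history_timestamp", "timestamp"),
        # Composite unique constraint; the exact column set is an assumption
        UniqueConstraint("from_currency", "to_currency", "timestamp", "source"),
    )

    id = Column(Integer, primary_key=True, autoincrement=True)
    from_currency = Column(String(5), nullable=False)   # source currency code
    to_currency = Column(String(5), nullable=False)     # target currency code
    rate = Column(Numeric(18, 6), nullable=False)       # exact decimal arithmetic
    timestamp = Column(DateTime, nullable=False)        # when the rate was fetched
    source = Column(String(50), nullable=False)         # provider name
```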
### SQL DDL equivalent
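A hedged, SQLite-flavored DDL sketch consistent with the column details below (on PostgreSQL the id column would use SERIAL or an identity column instead of AUTOINCREMENT):

```sql
CREATE TABLE rate_history (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    from_currency VARCHAR(5) NOT NULL,
    to_currency VARCHAR(5) NOT NULL,
    rate DECIMAL(18, 6) NOT NULL,
    timestamp DATETIME NOT NULL,
    source VARCHAR(50) NOT NULL
);

CREATE INDEX idx_from_currency ON rate_history (from_currency);
CREATE INDEX idx_to_currency ON rate_history (to_currency);
CREATE INDEX ix_rate_history_timestamp ON rate_history (timestamp);
```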
### Column details

| Column | Type | Constraints | Description |
|---|---|---|---|
| id | INTEGER | PRIMARY KEY, AUTOINCREMENT | Unique record identifier |
| from_currency | VARCHAR(5) | NOT NULL, INDEXED | Source currency code |
| to_currency | VARCHAR(5) | NOT NULL, INDEXED | Target currency code |
| rate | DECIMAL(18,6) | NOT NULL | Exchange rate with 6 decimal precision |
| timestamp | DATETIME | NOT NULL, INDEXED | When the rate was fetched |
| source | VARCHAR(50) | NOT NULL | Provider name (e.g., “fixerio”, “averaged”) |
### Why DECIMAL(18,6) for rates?
- 18 total digits: Supports extremely large or small rates (e.g., 1 USD = 150,000,000 VND)
- 6 decimal places: Sufficient precision for all major currencies
- Exact arithmetic: Unlike FLOAT, DECIMAL avoids rounding errors, which is critical for financial calculations
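The float-vs-decimal difference is easy to demonstrate with Python's standard `decimal` module (the values here are illustrative):

```python
from decimal import Decimal

# FLOAT arithmetic accumulates binary rounding error
float_sum = 0.1 + 0.2
print(float_sum == 0.3)               # False

# DECIMAL arithmetic is exact, which matters for money
decimal_sum = Decimal("0.1") + Decimal("0.2")
print(decimal_sum == Decimal("0.3"))  # True

# A rate like 0.925500 round-trips exactly as a Decimal,
# trailing zeros and all
rate = Decimal("0.925500")
print(rate)                           # 0.925500
```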
0.925500 is stored exactly, not as 0.9255000000000001.

### Indexes
The table includes three indexes for query optimization:

| Index | Columns | Purpose |
|---|---|---|
| idx_from_currency | from_currency | Filter rates by source currency |
| idx_to_currency | to_currency | Filter rates by target currency |
| ix_rate_history_timestamp | timestamp | Time-range queries for historical analysis |
These indexes enable efficient queries like “Show all USD rates” or “Get rates fetched in the last hour”.
### Unique constraint

The composite unique constraint prevents duplicate rate entries.

### Sample data
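Illustrative rows only — rates, timestamps, and the second source value are made up for the example; “fixerio” and “averaged” are the provider names mentioned above:

```sql
INSERT INTO rate_history (from_currency, to_currency, rate, timestamp, source) VALUES
    ('USD', 'EUR', 0.925500, '2024-01-15 10:00:00', 'fixerio'),
    ('USD', 'EUR', 0.925700, '2024-01-15 11:00:00', 'averaged');
```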
## Database initialization

Tables are created automatically at application startup using SQLAlchemy. From api/main.py:28:
The create_tables method uses SQLAlchemy’s metadata:
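A hedged sketch of such a method — the function signature, engine handling, and the stand-in model are assumptions; `run_sync` bridges the synchronous `metadata.create_all` call into the async engine:

```python
from sqlalchemy import Column, String
from sqlalchemy.ext.asyncio import AsyncEngine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class SupportedCurrency(Base):  # stand-in model so the sketch is self-contained
    __tablename__ = "supported_currencies"
    code = Column(String(5), primary_key=True)

async def create_tables(engine: AsyncEngine) -> None:
    # metadata.create_all is synchronous, so bridge it with run_sync;
    # create_all is idempotent and only creates missing tables
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
```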
create_all is idempotent: it only creates tables that don’t already exist, so it is safe to run on every startup.

## Session management

The application uses SQLAlchemy’s async session factory with proper lifecycle management. From api/dependencies.py:82:
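A hedged sketch of the dependency pattern described below — the function name and factory shape are assumptions, but the commit/rollback/close lifecycle matches the steps listed:

```python
from typing import AsyncIterator

async def get_session(session_factory) -> AsyncIterator:
    """Yield a session to the route handler; commit on success,
    roll back on exception, always close."""
    session = session_factory()
    try:
        yield session           # handler runs while suspended here
        await session.commit()  # reached only if the handler succeeded
    except Exception:
        await session.rollback()
        raise
    finally:
        await session.close()   # always release the connection
```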
### Understanding session lifecycle
- Create: New session for each request via dependency injection
- Yield: Provide session to route handler
- Commit: Automatically commit on success
- Rollback: Automatically rollback on exception
- Close: Always close session in finally block
## Data access patterns

### Inserting a new rate
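A hedged sketch of the write path at the SQLAlchemy Core level — the lightweight table handle and the literal values are illustrative, not the application’s actual repository code:

```python
from datetime import datetime, timezone
from decimal import Decimal
from sqlalchemy import column, insert, table

# Lightweight handle mirroring the documented columns
rate_history = table(
    "rate_history",
    column("from_currency"), column("to_currency"),
    column("rate"), column("timestamp"), column("source"),
)

stmt = insert(rate_history).values(
    from_currency="USD",
    to_currency="EUR",
    rate=Decimal("0.925500"),           # DECIMAL keeps exact precision
    timestamp=datetime.now(timezone.utc),
    source="fixerio",
)
# In the app: await session.execute(stmt); await session.commit()
```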
Rates are inserted after successful provider aggregation.

### Querying supported currencies
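A hedged sketch of the cache-first fallback — the Redis key and both client interfaces are assumptions:

```python
import json
from sqlalchemy import column, select, table

supported_currencies = table("supported_currencies", column("code"))

async def get_supported_currencies(redis, session) -> list:
    """Cache-first read: try Redis, fall back to the database."""
    cached = await redis.get("supported_currencies")
    if cached is not None:
        return json.loads(cached)          # cache hit: no DB round trip
    result = await session.execute(select(supported_currencies.c.code))
    codes = [row[0] for row in result]
    # Repopulate the cache so subsequent reads skip the database
    await redis.set("supported_currencies", json.dumps(codes))
    return codes
```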
Currencies are typically read from cache, falling back to the database.

## Performance considerations

### Write performance

The system performs one insert per rate fetch. At high scale:

- Batch inserts: Consider batching rate history writes for high-volume scenarios
- Async commits: Already uses async I/O for non-blocking writes
- Index overhead: Three indexes add minimal write overhead (~5-10%)
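One hedged way to batch the writes — passing a list of row dicts to a single `insert()` makes SQLAlchemy issue an executemany-style statement instead of one INSERT per rate (names are assumptions):

```python
from sqlalchemy import column, insert, table

rate_history = table(
    "rate_history",
    column("from_currency"), column("to_currency"),
    column("rate"), column("timestamp"), column("source"),
)

async def flush_rates(session, pending: list) -> None:
    """Write buffered rate rows in one statement.

    `pending` is a list of dicts keyed by column name.
    """
    if not pending:
        return                      # nothing buffered, skip the round trip
    await session.execute(insert(rate_history), pending)
    await session.commit()
```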
### Read performance
- Cache-first strategy: Database reads are rare (only on cache miss)
- Indexed queries: All common filters use indexed columns
- Connection pooling: Async engine uses connection pool (default 5 connections)
- Typical read latency: < 10ms for cached reads, < 50ms for database reads
- Typical write latency: < 20ms for a single insert, < 100ms for a batch of 10
## Maintenance operations

### Archiving old data
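A hedged PostgreSQL sketch of periodic archiving — the archive table name and the 90-day window are assumptions:

```sql
-- Move rows older than 90 days into an archive table
INSERT INTO rate_history_archive
SELECT * FROM rate_history
WHERE timestamp < NOW() - INTERVAL '90 days';

DELETE FROM rate_history
WHERE timestamp < NOW() - INTERVAL '90 days';
```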
For long-running deployments, rate_history can grow large. Consider periodic archiving.

### Analyzing query performance
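A hedged example of checking that a common query uses the documented indexes:

```sql
EXPLAIN ANALYZE
SELECT * FROM rate_history
WHERE from_currency = 'USD'
  AND timestamp > NOW() - INTERVAL '1 hour';
-- Expect index scans on idx_from_currency and/or ix_rate_history_timestamp
-- rather than a sequential scan
```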
### Vacuuming
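For manual maintenance, a single statement reclaims dead tuples and refreshes planner statistics:

```sql
VACUUM ANALYZE rate_history;  -- reclaim space and update statistics
```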
PostgreSQL requires periodic vacuuming for optimal performance. Enable autovacuum in postgresql.conf for automatic maintenance.

## Connection configuration
Database connection is configured via an environment variable.

### Connection pool settings

| Setting | Value | Purpose |
|---|---|---|
| pool_size | 5 | Persistent connections |
| max_overflow | 10 | Additional connections during spikes |
| pool_pre_ping | True | Test connections before use (detect stale connections) |
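A hedged sketch of applying these settings — the `DATABASE_URL` variable name and the factory function are assumptions; the pool values mirror the table above:

```python
import os
from sqlalchemy.ext.asyncio import create_async_engine

def build_engine(url=None):
    """Build the async engine with the documented pool settings."""
    return create_async_engine(
        url or os.environ["DATABASE_URL"],  # env var name is an assumption
        pool_size=5,          # persistent connections
        max_overflow=10,      # extra connections during spikes
        pool_pre_ping=True,   # detect stale connections before use
    )
```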
## Migrations (future)

Currently, the schema is managed by SQLAlchemy’s create_all. For production, consider Alembic for schema migrations:
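The usual Alembic workflow looks like this (the directory name and message are placeholders):

```shell
# One-time setup: scaffold an Alembic environment
alembic init migrations

# Generate a migration by diffing the models against the database
alembic revision --autogenerate -m "create initial tables"

# Apply pending migrations
alembic upgrade head
```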
## Backup and recovery
### Backup strategy
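A hedged sketch of a periodic logical backup with PostgreSQL's standard tooling (the database and file names are placeholders):

```shell
# Logical backup in pg_dump's custom format, suitable for pg_restore
pg_dump --format=custom --file=currency_api.dump currency_api
```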
### Restore
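Restoring from the custom-format dump is the mirror operation (names are placeholders):

```shell
# Drop existing objects, then recreate them from the dump
pg_restore --clean --if-exists --dbname=currency_api currency_api.dump
```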
Since rates are also cached in Redis and fetched on-demand, you can safely truncate rate_history without affecting API functionality (only historical analytics are impacted).

## Monitoring

Key metrics to track:

- Table sizes: `SELECT pg_size_pretty(pg_total_relation_size('rate_history'));`
- Index sizes: `SELECT pg_size_pretty(pg_relation_size('idx_from_currency'));`
- Query performance: Enable the pg_stat_statements extension
- Connection pool: Monitor active/idle connections
## Schema visualization

While foreign key constraints are not explicitly defined in the SQLAlchemy models, the relationship is implicit: rates reference currencies that must exist in supported_currencies.