The Currency Converter API uses PostgreSQL to persist supported currencies and maintain a historical record of exchange rates. This page documents the complete database schema, including table structures, indexes, and design decisions.

Schema overview

The database consists of two primary tables:
| Table | Purpose | Key Indexes |
| --- | --- | --- |
| supported_currencies | Currencies available for conversion | Primary key on code |
| rate_history | Historical exchange rate records | from_currency, to_currency, timestamp |
The schema prioritizes write efficiency for rate history (frequent inserts) and read efficiency for currency lookups (cached in Redis).

Table: supported_currencies

Stores the list of currencies supported by the API, determined by the intersection of all provider-supported currencies.

Schema definition

From infrastructure/persistence/models/currency.py:12:
class SupportedCurrencyDB(Base):
    __tablename__ = 'supported_currencies'

    code: Mapped[str] = mapped_column(String(5), primary_key=True)
    name: Mapped[str | None] = mapped_column(String(100), nullable=True)

SQL DDL equivalent

CREATE TABLE supported_currencies (
    code VARCHAR(5) PRIMARY KEY,
    name VARCHAR(100)
);

Column details

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| code | VARCHAR(5) | PRIMARY KEY | ISO 4217 currency code (e.g., “USD”, “EUR”) |
| name | VARCHAR(100) | NULLABLE | Human-readable currency name (optional) |
Currency names are currently NULL as the system only uses codes for validation. Future versions may populate names for UI display.
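Because only the code column matters today, validation reduces to a set-membership check. A minimal sketch of that idea (the hard-coded SUPPORTED_CODES set and the validate_code helper are illustrative only; the real service loads the set from supported_currencies via the Redis cache):

```python
# Illustrative only: in the running service the supported set comes from
# the supported_currencies table (cached in Redis), not a literal.
SUPPORTED_CODES = {"USD", "EUR", "GBP", "JPY"}


def validate_code(code: str) -> str:
    """Normalize a currency code and reject unsupported ones."""
    normalized = code.strip().upper()
    if normalized not in SUPPORTED_CODES:
        raise ValueError(f"Unsupported currency: {normalized}")
    return normalized


print(validate_code("usd"))  # → USD
```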

Sample data

INSERT INTO supported_currencies (code, name) VALUES
    ('USD', NULL),
    ('EUR', NULL),
    ('GBP', NULL),
    ('JPY', NULL);

Table: rate_history

Stores every exchange rate fetched from providers, creating an audit trail and enabling historical analysis.

Schema definition

From infrastructure/persistence/models/currency.py:19:
class RateHistoryDB(Base):
    __tablename__ = 'rate_history'

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    from_currency: Mapped[str] = mapped_column(String(5), nullable=False)
    to_currency: Mapped[str] = mapped_column(String(5), nullable=False)
    rate: Mapped[Decimal] = mapped_column(DECIMAL(precision=18, scale=6), nullable=False)
    timestamp: Mapped[datetime] = mapped_column(DateTime, nullable=False, index=True)
    source: Mapped[str] = mapped_column(String(50), nullable=False)

    __table_args__ = (
        Index('idx_from_currency', 'from_currency'),
        Index('idx_to_currency', 'to_currency'),
        UniqueConstraint(
            'from_currency', 'to_currency', 'timestamp', name='uq_base_target_currency'
        ),
    )

SQL DDL equivalent

CREATE TABLE rate_history (
    id SERIAL PRIMARY KEY,
    from_currency VARCHAR(5) NOT NULL,
    to_currency VARCHAR(5) NOT NULL,
    rate DECIMAL(18, 6) NOT NULL,
    timestamp TIMESTAMP NOT NULL,
    source VARCHAR(50) NOT NULL,
    CONSTRAINT uq_base_target_currency UNIQUE (from_currency, to_currency, timestamp)
);

CREATE INDEX idx_from_currency ON rate_history(from_currency);
CREATE INDEX idx_to_currency ON rate_history(to_currency);
CREATE INDEX ix_rate_history_timestamp ON rate_history(timestamp);

Column details

| Column | Type | Constraints | Description |
| --- | --- | --- | --- |
| id | INTEGER | PRIMARY KEY, AUTOINCREMENT | Unique record identifier |
| from_currency | VARCHAR(5) | NOT NULL, INDEXED | Source currency code |
| to_currency | VARCHAR(5) | NOT NULL, INDEXED | Target currency code |
| rate | DECIMAL(18, 6) | NOT NULL | Exchange rate with 6 decimal places of precision |
| timestamp | TIMESTAMP | NOT NULL, INDEXED | When the rate was fetched |
| source | VARCHAR(50) | NOT NULL | Provider name (e.g., “fixerio”, “averaged”) |
The DECIMAL(18, 6) type for rate is a deliberate choice:
  • 18 total digits: supports extremely large or small rates (e.g., 1 USD = 150,000,000 VND)
  • 6 decimal places: sufficient precision for all major currencies
  • Exact arithmetic: unlike FLOAT, DECIMAL avoids rounding errors, which is critical for financial calculations
Example: 0.925500 is stored exactly, not as 0.9255000000000001.
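The difference is easy to demonstrate with Python's stdlib decimal module, which is the type SQLAlchemy maps DECIMAL columns to by default:

```python
from decimal import Decimal

# Binary floats carry representation error...
assert 0.1 + 0.2 != 0.3

# ...while Decimal arithmetic is exact, digit for digit.
rate = Decimal("0.925500")
assert rate * 2 == Decimal("1.851000")
assert str(rate) == "0.925500"  # the scale (trailing zeros) is preserved
```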

Indexes

The table includes three indexes for query optimization:
| Index | Columns | Purpose |
| --- | --- | --- |
| idx_from_currency | from_currency | Filter rates by source currency |
| idx_to_currency | to_currency | Filter rates by target currency |
| ix_rate_history_timestamp | timestamp | Time-range queries for historical analysis |
These indexes enable efficient queries like “Show all USD rates” or “Get rates fetched in the last hour”.

Unique constraint

The composite unique constraint prevents duplicate rate entries:
UniqueConstraint(
    'from_currency', 'to_currency', 'timestamp', name='uq_base_target_currency'
)
This ensures you cannot insert two rates for the same currency pair at the exact same timestamp.
Attempting to insert duplicate records raises an IntegrityError. The application should handle this gracefully or use INSERT ... ON CONFLICT DO NOTHING.
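For example, a write that silently skips duplicates in plain PostgreSQL (the application could express the same thing with SQLAlchemy's postgresql insert construct and on_conflict_do_nothing):

```sql
INSERT INTO rate_history (from_currency, to_currency, rate, timestamp, source)
VALUES ('USD', 'EUR', 0.925500, '2026-03-04 15:30:00', 'averaged')
ON CONFLICT ON CONSTRAINT uq_base_target_currency DO NOTHING;
```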

Sample data

INSERT INTO rate_history (from_currency, to_currency, rate, timestamp, source) VALUES
    ('USD', 'EUR', 0.925500, '2026-03-04 15:30:00', 'averaged'),
    ('USD', 'GBP', 0.785200, '2026-03-04 15:30:05', 'fixerio'),
    ('EUR', 'JPY', 160.450000, '2026-03-04 15:30:10', 'averaged');

Database initialization

Tables are created automatically at application startup using SQLAlchemy. From api/main.py:28:
deps.db = Database(settings.DATABASE_URL)
if deps.db is None:
    raise RuntimeError('Database not initialized')
await deps.db.create_tables()
logger.info('Database tables created')
The create_tables method uses SQLAlchemy’s metadata:
# From infrastructure/persistence/database.py
async def create_tables(self) -> None:
    async with self.engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
create_all is idempotent — it only creates tables if they don’t already exist. Safe to run on every startup.

Session management

The application uses SQLAlchemy’s async session factory with proper lifecycle management. From api/dependencies.py:82:
async def get_db_session() -> AsyncGenerator[AsyncSession, None]:
    if deps.db is None:
        raise RuntimeError('Database is not initialized')

    session = deps.db.session_factory()
    try:
        yield session
        await session.commit()
    except Exception:
        await session.rollback()
        raise
    finally:
        await session.close()
  1. Create: New session for each request via dependency injection
  2. Yield: Provide session to route handler
  3. Commit: Automatically commit on success
  4. Rollback: Automatically rollback on exception
  5. Close: Always close session in finally block
This pattern prevents connection leaks and ensures consistent transaction handling.
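The commit-on-success / rollback-on-error behavior can be exercised in isolation with a stub session. In this sketch, StubSession and run_request are illustrative stand-ins for AsyncSession and a dependency-injection framework; only the generator mirrors the project's get_db_session shape:

```python
import asyncio
from typing import AsyncGenerator


class StubSession:
    """Stands in for AsyncSession; records lifecycle calls."""

    def __init__(self) -> None:
        self.events: list[str] = []

    async def commit(self) -> None:
        self.events.append("commit")

    async def rollback(self) -> None:
        self.events.append("rollback")

    async def close(self) -> None:
        self.events.append("close")


async def get_session(session: StubSession) -> AsyncGenerator[StubSession, None]:
    # Same try/except/finally shape as get_db_session, minus the deps lookup.
    try:
        yield session
        await session.commit()
    except Exception:
        await session.rollback()
        raise
    finally:
        await session.close()


async def run_request(session: StubSession, fail: bool) -> None:
    # Drive the dependency the way an injection framework would.
    gen = get_session(session)
    await gen.__anext__()  # enter: the dependency yields the session
    if fail:
        # Throw the handler's error into the generator: rollback, then close.
        await gen.athrow(RuntimeError("handler failed"))
    else:
        try:
            await gen.__anext__()  # exit: commit, then close
        except StopAsyncIteration:
            pass


async def main() -> None:
    ok = StubSession()
    await run_request(ok, fail=False)
    print(ok.events)  # → ['commit', 'close']

    bad = StubSession()
    try:
        await run_request(bad, fail=True)
    except RuntimeError:
        pass
    print(bad.events)  # → ['rollback', 'close']


asyncio.run(main())
```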

Data access patterns

Inserting a new rate

Rates are inserted after successful provider aggregation:
# From application/services/rate_service.py:51
await self.repository.save_rate(rate)
The repository method:
# From infrastructure/persistence/repositories/currency.py
async def save_rate(self, rate: ExchangeRate) -> None:
    db_rate = RateHistoryDB(
        from_currency=rate.from_currency,
        to_currency=rate.to_currency,
        rate=rate.rate,
        timestamp=rate.timestamp,
        source=rate.source,
    )
    self.db_session.add(db_rate)
    await self.db_session.flush()
    
    # Also update cache
    await self.cache_service.set_rate(rate)

Querying supported currencies

Currencies are typically read from cache, falling back to database:
# Check cache first
cached = await self.cache_service.get_supported_currencies()
if cached:
    return [SupportedCurrency(code=code, name=None) for code in cached]

# Cache miss: query database
result = await self.db_session.execute(select(SupportedCurrencyDB))
currencies = result.scalars().all()

# Update cache
await self.cache_service.set_supported_currencies([c.code for c in currencies])

return [SupportedCurrency(code=c.code, name=c.name) for c in currencies]

Performance considerations

Write performance

The system performs one insert per rate fetch. At high scale:
  • Batch inserts: Consider batching rate history writes for high-volume scenarios
  • Async commits: Already uses async I/O for non-blocking writes
  • Index overhead: Three indexes add minimal write overhead (~5-10%)
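One way to batch is to buffer rows in memory and flush once a threshold is reached. A stdlib-only sketch (RateBuffer is hypothetical; a real flush_fn would wrap session.add_all or an executemany-style INSERT):

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class RateBuffer:
    """Accumulates rate rows and hands them off in batches."""

    flush_fn: Callable[[list[tuple]], None]  # e.g. wraps a multi-row INSERT
    batch_size: int = 10
    _rows: list[tuple] = field(default_factory=list)

    def add(self, row: tuple) -> None:
        self._rows.append(row)
        if len(self._rows) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self._rows:
            self.flush_fn(self._rows)
            self._rows = []


batches: list[list[tuple]] = []
buf = RateBuffer(flush_fn=batches.append, batch_size=3)
for i in range(7):
    buf.add(("USD", "EUR", 0.9255, f"t{i}", "averaged"))
buf.flush()  # drain the remainder on shutdown or timer

print([len(b) for b in batches])  # → [3, 3, 1]
```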

Read performance

  • Cache-first strategy: Database reads are rare (only on cache miss)
  • Indexed queries: All common filters use indexed columns
  • Connection pooling: Async engine uses connection pool (default 5 connections)

Typical read latency

< 10ms for cached reads, < 50ms for database reads

Typical write latency

< 20ms for single insert, < 100ms for batch of 10

Maintenance operations

Archiving old data

For long-running deployments, rate history can grow large. Consider periodic archiving:
-- Archive rates older than 90 days
CREATE TABLE rate_history_archive AS
SELECT * FROM rate_history WHERE timestamp < NOW() - INTERVAL '90 days';

DELETE FROM rate_history WHERE timestamp < NOW() - INTERVAL '90 days';

Analyzing query performance

-- Check index usage
EXPLAIN ANALYZE
SELECT * FROM rate_history
WHERE from_currency = 'USD'
  AND timestamp > NOW() - INTERVAL '1 day';

-- Expect index (or bitmap index) scans on idx_from_currency and
-- ix_rate_history_timestamp rather than a sequential scan

Vacuuming

PostgreSQL requires periodic vacuuming for optimal performance:
-- Analyze tables for query planner
ANALYZE supported_currencies;
ANALYZE rate_history;

-- Reclaim space from deleted rows
VACUUM rate_history;
Enable autovacuum in postgresql.conf for automatic maintenance.

Connection configuration

Database connection is configured via environment variable:
# From config/settings.py
DATABASE_URL: str = "postgresql+asyncpg://user:pass@localhost:5432/currency_db"

Connection pool settings

# From infrastructure/persistence/database.py
engine = create_async_engine(
    database_url,
    echo=False,
    pool_size=5,
    max_overflow=10,
    pool_pre_ping=True,
)
| Setting | Value | Purpose |
| --- | --- | --- |
| pool_size | 5 | Persistent connections kept open |
| max_overflow | 10 | Additional connections opened during spikes |
| pool_pre_ping | True | Test connections before use (detects stale connections) |

Migrations (future)

Currently, the schema is managed by SQLAlchemy’s create_all. For production, consider Alembic for schema migrations:
# Initialize Alembic
alembic init migrations

# Generate migration from models
alembic revision --autogenerate -m "initial schema"

# Apply migration
alembic upgrade head
create_all does not handle schema changes (adding columns, changing types). Use migrations for production deployments.

Backup and recovery

Backup strategy

# Full backup
pg_dump -U user -d currency_db -F c -f backup.dump

# Backup only rate_history (largest table)
pg_dump -U user -d currency_db -t rate_history -F c -f rate_history.dump

Restore

# Restore full database
pg_restore -U user -d currency_db backup.dump

# Restore single table
pg_restore -U user -d currency_db -t rate_history rate_history.dump
Since rates are also cached in Redis and fetched on-demand, you can safely truncate rate_history without affecting API functionality (only historical analytics are impacted).

Monitoring

Key metrics to track:
  • Table sizes: SELECT pg_size_pretty(pg_total_relation_size('rate_history'));
  • Index sizes: SELECT pg_size_pretty(pg_relation_size('idx_from_currency'));
  • Query performance: Enable pg_stat_statements extension
  • Connection pool: Monitor active/idle connections
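With pg_stat_statements enabled, the slowest statements can be listed directly (column names assume PostgreSQL 13+; older versions use total_time and mean_time instead):

```sql
-- Top 10 statements by mean execution time
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```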

Schema visualization

While foreign key constraints are not explicitly defined in SQLAlchemy models, the relationship is implicit: rates reference currencies that must exist in supported_currencies.
