
Database Architecture

The Invernaderos API implements a dual database strategy optimized for IoT time-series data and metadata management.

Overview

TimescaleDB

Time-series sensor readings. Port: 30432

PostgreSQL

Metadata and reference data. Port: 30433

Database Strategy

Why Two Databases?

The system separates hot time-series data from cold reference data for optimal performance:
  • TimescaleDB: Optimized for high-frequency sensor writes (thousands per second) with automatic compression
  • PostgreSQL: Traditional ACID database for business entities with complex relationships
Both databases run PostgreSQL 16 under the hood. TimescaleDB is a PostgreSQL extension that adds time-series superpowers.

TimescaleDB (Time-Series)

Connection Details:
  • host (string, default "localhost"): Database host
  • port (number, default 30432): TimescaleDB port (NodePort)
  • database (string, default "greenhouse_timeseries_prod"): Database name
  • schema (string, default "iot"): Primary schema for sensor data

Schema Structure

iot/
├── readings                    -- Simplified sensor readings (BIGINT device IDs)
├── sensor_readings             -- Legacy format (VARCHAR sensor IDs)
├── sensor_readings_hourly      -- Pre-aggregated hourly stats
├── sensor_readings_daily       -- Daily aggregations
└── sensor_readings_monthly     -- Monthly aggregations

staging/
├── sensor_readings_raw         -- Bulk import staging area
├── sensor_readings_validated   -- Validated staging data
├── bulk_import_log             -- Import audit trail
└── validation_rules            -- Configurable validation thresholds

Core Table: iot.readings
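
The full DDL is not reproduced on this page. A minimal sketch consistent with the description above (BIGINT device IDs, time-partitioned into 7-day chunks) might look like the following; column names beyond `time`, `device_id`, and `value` are assumptions, not the actual schema:

```sql
-- Hypothetical sketch of iot.readings; the real DDL may differ
CREATE TABLE iot.readings (
    time      TIMESTAMPTZ      NOT NULL,  -- reading timestamp
    device_id BIGINT           NOT NULL,  -- references metadata.devices (cross-database, enforced in app code)
    value     DOUBLE PRECISION NOT NULL   -- measured value; unit determined by the device type
);

-- Convert to a hypertable partitioned into 7-day chunks
SELECT create_hypertable('iot.readings', 'time',
    chunk_time_interval => INTERVAL '7 days');

-- Composite index supporting the device_id + time query pattern
CREATE INDEX ON iot.readings (device_id, time DESC);
```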

Continuous Aggregates

TimescaleDB automatically pre-computes hourly statistics for faster dashboard queries.
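
The aggregate definitions are not shown here; a representative sketch of how `iot.sensor_readings_hourly` could be built as a TimescaleDB continuous aggregate (statistic columns are assumptions based on the schema tree above):

```sql
-- Hypothetical continuous aggregate backing iot.sensor_readings_hourly
CREATE MATERIALIZED VIEW iot.sensor_readings_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    device_id,
    AVG(value)  AS avg_value,
    MIN(value)  AS min_value,
    MAX(value)  AS max_value,
    COUNT(*)    AS sample_count
FROM iot.readings
GROUP BY bucket, device_id;

-- Keep the aggregate refreshed automatically
SELECT add_continuous_aggregate_policy('iot.sensor_readings_hourly',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');
```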

PostgreSQL (Metadata)

Connection Details:
  • host (string, default "localhost"): Database host
  • port (number, default 30433): PostgreSQL port (NodePort)
  • database (string, default "postgres"): Database name
  • schema (string, default "metadata"): Primary schema for business entities

Schema Structure

metadata/
├── tenants                 -- Multi-tenant root table
├── users                   -- User accounts
├── greenhouses             -- Greenhouse facilities
├── sectors                 -- Greenhouse subdivisions
├── devices                 -- IoT devices (sensors + actuators)
├── alerts                  -- Alert history
├── settings                -- Greenhouse settings
├── command_history         -- Device command audit log

├── units                   -- Measurement units catalog (°C, %, hPa)
├── device_types            -- Device type catalog (TEMPERATURE, HUMIDITY)
├── actuator_states         -- Actuator states (ON, OFF, AUTO, ERROR)
├── alert_types             -- Alert types (THRESHOLD, SENSOR_OFFLINE)
├── alert_severities        -- Severity levels (INFO, WARNING, CRITICAL)
├── data_types              -- Data type validation rules
├── periods                 -- Time periods (DAY, NIGHT, ALL)
└── device_categories       -- Categories (SENSOR, ACTUATOR)

Core Tables
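
The table definitions are not reproduced on this page. As an illustration, a sketch of what `metadata.devices` could look like, combining facts stated elsewhere in this document (BIGINT keys, a tenant FK with ON DELETE CASCADE, the `name` column from V20, and the sector/device-type catalogs); exact columns and constraints are assumptions:

```sql
-- Hypothetical sketch of metadata.devices; the real DDL may differ
CREATE TABLE metadata.devices (
    id             BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    tenant_id      BIGINT NOT NULL
                       REFERENCES metadata.tenants(id) ON DELETE CASCADE,
    sector_id      BIGINT REFERENCES metadata.sectors(id),
    device_type_id BIGINT NOT NULL REFERENCES metadata.device_types(id),
    name           VARCHAR(255) NOT NULL  -- human-readable name (added in V20)
);
```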


Entity Relationships

Foreign Key Relationships

-- All entities link to tenant
ALTER TABLE metadata.users 
    ADD CONSTRAINT fk_users_tenant 
    FOREIGN KEY (tenant_id) REFERENCES tenants(id) ON DELETE CASCADE;

ALTER TABLE metadata.greenhouses 
    ADD CONSTRAINT fk_greenhouses_tenant 
    FOREIGN KEY (tenant_id) REFERENCES tenants(id) ON DELETE CASCADE;

ALTER TABLE metadata.devices 
    ADD CONSTRAINT fk_devices_tenant 
    FOREIGN KEY (tenant_id) REFERENCES tenants(id) ON DELETE CASCADE;

Migration History

V15: Password Reset Fields

ALTER TABLE metadata.users 
    ADD COLUMN reset_password_token VARCHAR(255),
    ADD COLUMN reset_password_token_expiry TIMESTAMPTZ;

V16: Units Catalog Table

Created metadata.units table with 21 measurement units (°C, %, hPa, ppm, lux, etc.)

V17: Actuator States Catalog

Created metadata.actuator_states table (OFF, ON, AUTO, MANUAL, ERROR, MAINTENANCE, etc.)

V18: Device Types Catalog

Created metadata.device_types table with 27 types (TEMPERATURE, HUMIDITY, VENTILATOR, HEATER, etc.)

V19: Sectors Table

Created metadata.sectors table for greenhouse subdivisions with crop management

V20: Device Name Field

Added name column to devices table for human-readable names

V21: Catalog ID Auto-Generation

Added sequences for auto-generating catalog IDs (units, device_types, etc.)

V22-V25: UUID to BIGINT Migration

Major refactoring:
  • Migrated all UUID primary keys to BIGINT for 40% performance improvement
  • Created mapping tables for UUID → BIGINT conversion
  • Preserved all existing data during migration
  • Affected tables: tenants, users, greenhouses, sectors, devices, alerts, settings, command_history
Performance Impact:
  • Index size reduced by 50%
  • JOIN queries 40% faster
  • Auto-increment IDs instead of UUID generation overhead
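
The migration scripts themselves are not shown; a sketch of the mapping-table approach described above, with table and column names that are purely illustrative rather than taken from the real V22-V25 scripts:

```sql
-- Hypothetical UUID → BIGINT mapping table used during the migration
CREATE TABLE metadata.uuid_id_map (
    table_name TEXT   NOT NULL,  -- which entity the row belongs to
    old_uuid   UUID   NOT NULL,  -- original UUID primary key
    new_id     BIGINT NOT NULL,  -- replacement BIGINT key
    PRIMARY KEY (table_name, old_uuid)
);

-- Example: rewrite a foreign key column via the mapping
UPDATE metadata.alerts a
SET device_id = m.new_id
FROM metadata.uuid_id_map m
WHERE m.table_name = 'devices'
  AND a.device_uuid = m.old_uuid;
```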

V26-V30: Settings and Catalog Refinements

  • V26: Replaced period with actuator_state in settings
  • V27: Created data_types catalog table
  • V28: Changed greenhouse_id to sector_id in alerts and settings
  • V29: Added description fields and nullable message
  • V30: Renamed sectors.variety to sectors.name

V31: Seed All Initial Data

Populated catalog tables with:
  • 21 units
  • 10 actuator states
  • 6 alert types
  • 4 alert severities
  • 9 data types
  • 3 periods (DAY, NIGHT, ALL)
  • 34 device types (13 sensors + 21 actuators)
  • 2 MQTT system accounts (admin, api_spring_boot)

Spring Boot Configuration

Dual DataSource Setup

@Configuration
@EnableJpaRepositories(
    basePackages = ["com.apptolast.invernaderos.features.telemetry.timeseries"],
    entityManagerFactoryRef = "timescaleEntityManagerFactory",
    transactionManagerRef = "timescaleTransactionManager"
)
class TimescaleDataSourceConfig {
    
    @Primary
    @Bean(name = ["timescaleDataSource"])
    @ConfigurationProperties("spring.datasource.configuration")
    fun timescaleDataSource(properties: DataSourceProperties): HikariDataSource {
        return properties
            .initializeDataSourceBuilder()
            .type(HikariDataSource::class.java)
            .build()
    }
    
    @Primary
    @Bean(name = ["timescaleTransactionManager"])
    fun timescaleTransactionManager(
        // Qualify explicitly so the TimescaleDB factory is injected,
        // not the metadata one (both exist in a dual-datasource setup)
        @Qualifier("timescaleEntityManagerFactory")
        entityManagerFactory: EntityManagerFactory
    ): PlatformTransactionManager {
        return JpaTransactionManager(entityManagerFactory)
    }
}

Using the Correct Transaction Manager

Always specify the correct @Transactional qualifier when working with dual datasources!
// ✅ CORRECT: TimescaleDB transaction
@Transactional("timescaleTransactionManager")
fun saveSensorReading(reading: Reading) {
    readingRepository.save(reading)
}

// ✅ CORRECT: PostgreSQL transaction
@Transactional("metadataTransactionManager")
fun createGreenhouse(greenhouse: Greenhouse) {
    greenhouseRepository.save(greenhouse)
}

// ❌ WRONG: No qualifier (will use default, may cause errors)
@Transactional
fun saveData() {
    // Which database???
}

Performance Optimization

TimescaleDB Best Practices

Compression

Automatically compresses chunks after 7 days. Savings: 90% storage reduction

Retention Policies

Automatically drops data older than 2 years. Benefit: Prevents unbounded growth

Continuous Aggregates

Pre-computed hourly/daily statistics. Speed: 60x faster dashboard queries

Hypertables

Automatic partitioning by time (7-day chunks). Result: Consistent query performance
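
These four practices map to built-in TimescaleDB policy functions. A sketch of how compression and retention could be configured for `iot.readings`, using the intervals quoted above (the `segmentby` choice is an assumption):

```sql
-- Compression: compress chunks older than 7 days,
-- grouping compressed rows by device for efficient per-device reads
ALTER TABLE iot.readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);
SELECT add_compression_policy('iot.readings', INTERVAL '7 days');

-- Retention: automatically drop chunks older than 2 years
SELECT add_retention_policy('iot.readings', INTERVAL '2 years');
```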

Query Optimization Tips

-- ✅ Filter by time + device_id (indexed)
SELECT time, value 
FROM iot.readings
WHERE device_id = 123
  AND time >= NOW() - INTERVAL '1 day'
ORDER BY time DESC
LIMIT 100;
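
For contrast, a query shape to avoid (a generic anti-pattern, not taken from the project's codebase):

```sql
-- ❌ Avoid: no time predicate, so TimescaleDB cannot exclude chunks
-- and must scan the entire hypertable
SELECT time, value
FROM iot.readings
WHERE value > 30;
```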

Related Documentation

  • Caching Strategy: Redis Sorted Set implementation
  • Migrations: Flyway migration history
  • API Reference: REST endpoints for data access
