
Overview

Performs a complete system purge by wiping all InfluxDB data, clearing in-memory state, and resetting the Degradation Index (DI) to 0.0. This is the nuclear option for starting fresh with a clean baseline.
This operation is IRREVERSIBLE. All historical sensor data, baseline profiles, trained models, and cumulative degradation state will be permanently deleted.

Endpoint

POST /system/purge

Purge Actions

The purge performs these operations in sequence:

1. Stop Background Tasks

Halts any running calibration, monitoring, or fault injection threads.

2. Delete InfluxDB Data

Attempts to delete all records in the sensor_data bucket.
InfluxDB Serverless v3 limitation: range deletes are not supported on serverless instances, so the system falls back to writing a DI=0.0 reset point that supersedes stale data during hydration.
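The delete-then-fallback behavior can be sketched as a try/except around the range delete. This is a sketch only: `purge_influx_data`, `delete_bucket_range`, and the `db` wrapper are illustrative names, not the actual client API.

```python
from datetime import datetime, timezone

def purge_influx_data(db, bucket: str = "sensor_data") -> str:
    """Try a full range delete; fall back to a DI=0.0 reset point."""
    try:
        # Serverless v3 rejects range deletes, so this call is best-effort.
        db.delete_bucket_range(bucket, start="1970-01-01T00:00:00Z")
        return "deleted"
    except Exception:
        # Fallback: write a reset point that supersedes stale data
        # the next time degradation state is hydrated.
        db.write_point(
            measurement="sensor_events",
            tags={"asset_id": "Motor-01", "source": "purge_reset"},
            fields={"degradation_index": 0.0},
            timestamp=datetime.now(timezone.utc),
        )
        return "reset_point_written"
```

Either branch leaves the bucket in a state where the latest DI reads as 0.0.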

3. Write DI Reset Point

Writes a single point with degradation_index=0.0 to ensure clean hydration on next startup:
```python
db.write_point(
    measurement="sensor_events",
    tags={"asset_id": "Motor-01", "asset_type": "motor", "source": "purge_reset"},
    fields={"degradation_index": 0.0},
    timestamp=datetime.now(timezone.utc)
)
```

4. Clear In-Memory State

Wipes the following dictionaries:
  • _sensor_history - Last 1000 sensor readings
  • _baselines - Baseline profiles per asset
  • _detectors - Legacy 1Hz Isolation Forest models
  • _batch_detectors - Batch 100Hz Isolation Forest models
  • _degradation_state - Cumulative DI tracking

5. Initialize Clean DI State

Pre-populates DI=0.0 for Motor-01 to prevent stale InfluxDB queries:
```python
_degradation_state["Motor-01"] = {
    "degradation_index": 0.0,
    "total_cycles": 0,
    "last_damage_rate": 0.0,
    "hydrated": True
}
```

6. Reset System State

Returns the system to the IDLE state with all validation metrics zeroed.

Response

status
string · required
Always "purged" when the operation completes successfully.

message
string · required
Human-readable confirmation of the purge actions.

state
string · required
The new system state: always "IDLE".
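Clients can model this response shape with a small dataclass and validate it defensively. A sketch only: `PurgeResponse` and `parse_purge_response` are illustrative helpers, not part of the API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PurgeResponse:
    status: str   # always "purged" on success
    message: str  # human-readable confirmation
    state: str    # always "IDLE"

def parse_purge_response(payload: dict) -> PurgeResponse:
    """Validate the three required fields and reject unexpected values."""
    resp = PurgeResponse(payload["status"], payload["message"], payload["state"])
    if resp.status != "purged" or resp.state != "IDLE":
        raise ValueError(f"unexpected purge response: {resp}")
    return resp
```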

Example Request

cURL

```bash
curl -X POST "https://predictive-maintenance-uhlb.onrender.com/system/purge"
```

Python

```python
import requests

response = requests.post(
    "https://predictive-maintenance-uhlb.onrender.com/system/purge"
)

data = response.json()
print(f"Status: {data['status']}")
print(f"Message: {data['message']}")
```

JavaScript

```javascript
const response = await fetch(
  'https://predictive-maintenance-uhlb.onrender.com/system/purge',
  { method: 'POST' }
);

const data = await response.json();
console.log(`Status: ${data.status}`);
console.log(`Message: ${data.message}`);
```

Example Response

200 OK

```json
{
  "status": "purged",
  "message": "All data purged. InfluxDB bucket cleared, ML baselines wiped. System is IDLE.",
  "state": "IDLE"
}
```

Post-Purge State

After purge completes, the system returns to a clean slate:
| Component          | State                              |
|--------------------|------------------------------------|
| System State       | IDLE                               |
| Degradation Index  | 0.0                                |
| Total Cycles       | 0                                  |
| Training Samples   | 0                                  |
| Healthy Stability  | 100.0%                             |
| Fault Capture Rate | 100.0%                             |
| Baseline Profiles  | None (cleared)                     |
| ML Models          | None (cleared)                     |
| Sensor History     | Empty                              |
| InfluxDB Data      | Wiped (or DI=0.0 reset point only) |

Next Steps

After purge, you must recalibrate before monitoring:
```bash
# 1. Purge system
curl -X POST "https://predictive-maintenance-uhlb.onrender.com/system/purge"

# 2. Verify IDLE state
curl "https://predictive-maintenance-uhlb.onrender.com/system/state"

# 3. Recalibrate
curl -X POST "https://predictive-maintenance-uhlb.onrender.com/system/calibrate"

# 4. Wait for MONITORING_HEALTHY state
curl "https://predictive-maintenance-uhlb.onrender.com/system/state"

# 5. Begin fault injection or normal monitoring
curl -X POST "https://predictive-maintenance-uhlb.onrender.com/system/inject-fault?fault_type=JITTER&severity=SEVERE"
```
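The same workflow can be automated in Python with a polling loop. A sketch only: `purge_and_recalibrate` is a hypothetical helper, and it assumes GET /system/state returns a JSON body with a "state" field.

```python
import time

def purge_and_recalibrate(client, base_url, timeout_s=60.0, poll_s=2.0):
    """Purge, verify IDLE, calibrate, then poll for MONITORING_HEALTHY."""
    client.post(f"{base_url}/system/purge")

    state = client.get(f"{base_url}/system/state").json()["state"]
    if state != "IDLE":
        raise RuntimeError(f"expected IDLE after purge, got {state}")

    client.post(f"{base_url}/system/calibrate")

    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = client.get(f"{base_url}/system/state").json()["state"]
        if state == "MONITORING_HEALTHY":
            return state
        time.sleep(poll_s)
    raise TimeoutError("calibration did not reach MONITORING_HEALTHY")
```

Here `client` can be the `requests` module or a `requests.Session`.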

Use Cases

Reset Demo

Return to clean state for fresh demo runs without restarting server

Clear Cumulative Damage

Reset DI to 0.0 after long fault-injection sessions that pushed it toward 1.0

Baseline Corruption

Wipe corrupted baseline and retrain from scratch

Test Automation

Reset state between automated test runs for reproducibility
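For reproducible test runs, the purge can be wrapped in a small reset helper called from your framework's setup hook (e.g. a pytest fixture). A sketch only: `purge_system` is hypothetical, and `client` can be the `requests` module or a `requests.Session`.

```python
def purge_system(client, base_url: str) -> dict:
    """Test-setup helper: purge the system and verify it reached IDLE."""
    resp = client.post(f"{base_url}/system/purge")
    data = resp.json()
    if data.get("status") != "purged" or data.get("state") != "IDLE":
        raise AssertionError(f"purge failed, system not reset: {data}")
    return data
```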

State Persistence vs. Purge

What Survives Restart?

| Data                         | Server Restart             | Purge           |
|------------------------------|----------------------------|-----------------|
| InfluxDB Sensor Data         | ✅ Persists                | ❌ Deleted      |
| Degradation Index (InfluxDB) | ✅ Persists                | ❌ Reset to 0.0 |
| Baseline Profiles            | ❌ Lost                    | ❌ Deleted      |
| ML Models                    | ❌ Lost                    | ❌ Deleted      |
| Sensor History (RAM)         | ❌ Lost                    | ❌ Deleted      |
| Degradation State (RAM)      | ❌ Lost (hydrated from DB) | ❌ Deleted      |
Restart behavior: DI is hydrated from InfluxDB on first GET /system/state call, so cumulative damage survives restarts (unless purged first).

DI Hydration After Purge

On the next API call that requires degradation state, hydration runs:

```python
def _ensure_degradation_state(asset_id: str):
    # Query InfluxDB for the last DI (returns 0.0 after a purge)
    last_di = db.query_latest_degradation_index(asset_id)

    state = {
        "degradation_index": last_di,
        "total_cycles": 0,
        "last_damage_rate": 0.0,
        "hydrated": True
    }
    _degradation_state[asset_id] = state
    print(f"[DEGRADATION] Hydrated {asset_id}: DI={last_di:.6f}")
```

Expected log after purge:

```
[DEGRADATION] Hydrated Motor-01: DI=0.000000
```

Error Responses

This endpoint does not return errors under normal operation. It always attempts to purge and returns success.
If InfluxDB range delete fails (common on serverless v3), the system gracefully falls back to DI=0.0 reset point strategy.

Performance Characteristics

  • Latency: 200-500ms (depends on InfluxDB delete performance)
  • InfluxDB Deletes: Attempts full bucket wipe (may fail on serverless v3)
  • InfluxDB Writes: 1 point (DI=0.0 reset)
  • Memory Released: ~50MB (baselines + models)
  • Downtime: None (the purge runs synchronously and completes in about 1 s)

Safety Considerations

Production Environments: Do NOT expose this endpoint to end users. Consider:
  • Restricting to admin-only via API key authentication
  • Adding confirmation dialog in UI (e.g., “Type ‘PURGE’ to confirm”)
  • Logging purge events for audit trails
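An admin-only API-key check might look like the following. A sketch only: the `ADMIN_API_KEY` environment variable, the header wiring, and `require_admin` are assumptions, not part of the service.

```python
import hmac
import os
from typing import Optional

def require_admin(api_key_header: Optional[str]) -> None:
    """Reject purge requests without the admin key.

    hmac.compare_digest gives a constant-time comparison, which
    avoids leaking key prefixes via response timing.
    """
    expected = os.environ.get("ADMIN_API_KEY", "")
    if not expected or api_key_header is None:
        raise PermissionError("admin API key required for /system/purge")
    if not hmac.compare_digest(api_key_header, expected):
        raise PermissionError("admin API key required for /system/purge")
```

Call this from the purge handler (or as a framework dependency) before any destructive work begins.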

Comparison with Other Reset Endpoints

| Endpoint           | State Transition       | DI Reset | Data Deleted | Models Cleared |
|--------------------|------------------------|----------|--------------|----------------|
| POST /system/stop  | → IDLE                 | ❌ No    | ❌ No        | ❌ No          |
| POST /system/reset | → MONITORING_HEALTHY   | ❌ No    | ❌ No        | ❌ No          |
| POST /system/purge | → IDLE                 | ✅ Yes   | ✅ Yes       | ✅ Yes         |
