Overview
Effective data management is critical for maintaining business continuity and compliance. This guide covers backup strategies, data export, data integrity, and maintenance tasks for Beils Dashboard.
Always test backup and restore procedures in a non-production environment before relying on them in production.
Database Structure
Beils Dashboard uses MariaDB (MySQL-compatible) with Prisma ORM. The database schema is organized into logical phases:
Users & Auth: User accounts, roles, authentication tokens
Clients (CRM): Client records, consents, questionnaires, revokes
Catalog: Products, services, packs, categories, brands, tags
Marketing: Coupons, bonuses, client bonuses, gift cards
Sales: Carts, cart items, payments, debts
Appointments: Bookings with client and staff assignments
Data Relationships
The system uses referential integrity with cascading deletes:
// Example: Deleting a user cascades to their related data
model User {
  consents       Consent[]       // CASCADE
  questionnaires Questionnaire[] // CASCADE
  revokes        Revoke[]        // CASCADE
  // ... other relations
}
Understanding these relationships is crucial when managing data to avoid unintentional data loss.
Database Backups
Manual Backup
Create a complete database backup using mysqldump:
mysqldump -u username -p beils_dashboard > backup_$(date +%Y%m%d_%H%M%S).sql
Backup includes:
All table structures
All data records
Indexes and constraints
Stored procedures and functions (only if --routines is passed; mysqldump skips them by default)
Include the timestamp in backup filenames to maintain multiple versions.
Compressed Backup
Save disk space with gzip compression:
mysqldump -u username -p beils_dashboard | gzip > backup_$(date +%Y%m%d_%H%M%S).sql.gz
Automated Backups
Set up a daily backup cron job:
# Edit crontab
crontab -e
# Add daily backup at 2 AM
0 2 * * * /usr/bin/mysqldump -u username -p'password' beils_dashboard | gzip > /backups/beils_$(date +\%Y\%m\%d).sql.gz
Storing passwords in cron jobs is insecure. Use MySQL config files or credential management systems instead.
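One safer pattern, sketched below with placeholder credentials and filenames, is a MySQL option file that only the backup user can read; mysqldump then picks the credentials up via --defaults-extra-file, so nothing sensitive appears in the crontab or the process list.

```shell
# Write the backup credentials to an option file (values are placeholders).
cat > backup.cnf <<'EOF'
[client]
user=backup_user
password=changeme
EOF

# Restrict the file so only its owner can read it.
chmod 600 backup.cnf

# The cron entry then needs no inline password:
if command -v mysqldump >/dev/null; then
  mysqldump --defaults-extra-file=backup.cnf beils_dashboard > backup.sql
fi
```

Note that --defaults-extra-file must be the first option on the mysqldump command line.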
Backup Best Practices
Daily: Minimum for production systems
Hourly: High-transaction environments
Before updates: Always backup before schema changes
Keep last 7 daily backups
Keep last 4 weekly backups
Keep monthly backups for 1 year
Archive yearly backups indefinitely
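The daily tier of this schedule can be automated with find. The directory layout and filename pattern below are assumptions; weekly, monthly, and yearly copies are expected to live in separate directories, which -maxdepth 1 leaves untouched.

```shell
# prune_daily_backups DIR: delete daily dumps older than 7 days in DIR.
prune_daily_backups() {
  # -maxdepth 1 keeps weekly/monthly subdirectories out of scope.
  find "$1" -maxdepth 1 -name 'beils_*.sql.gz' -mtime +7 -delete
}

# Example (path is an assumption):
# prune_daily_backups /backups
```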
Store on separate physical drive
Use cloud storage (S3, Google Cloud Storage)
Keep off-site copies for disaster recovery
Encrypt sensitive backup files
Test restore monthly
Verify backup file integrity
Check backup file sizes
Monitor backup success/failure
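The verification steps above can be sketched as a small check for compressed dumps: gzip -t catches corruption, and a size floor (an arbitrary 1 KB here; tune it to your database size) catches empty or truncated files.

```shell
# verify_backup FILE: fail loudly if a compressed dump looks unusable.
verify_backup() {
  # gzip -t test-decompresses the archive and fails on corruption.
  gzip -t "$1" 2>/dev/null || { echo "CORRUPT: $1"; return 1; }
  # Reject suspiciously small files (truncated or empty dumps).
  size=$(wc -c < "$1")
  [ "$size" -ge 1024 ] || { echo "TOO SMALL: $1 ($size bytes)"; return 1; }
  echo "OK: $1"
}
```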
Database Restore
Complete Restore
Restore from a full database backup:
# Restore from uncompressed backup
mysql -u username -p beils_dashboard < backup_20260305.sql
# Restore from compressed backup
gunzip < backup_20260305.sql.gz | mysql -u username -p beils_dashboard
Restoring will overwrite all existing data. Ensure you have a current backup before restoring.
Selective Restore
Restore specific tables:
# Dump a single table from the live database
mysqldump -u username -p beils_dashboard users > users_backup.sql
# Restore only that table
mysql -u username -p beils_dashboard < users_backup.sql
Data Export
Export User Data
Export all users to JSON:
curl -H "Authorization: Bearer YOUR_TOKEN" \
http://localhost:3000/api/users > users_export.json
Export Clients
curl -H "Authorization: Bearer YOUR_TOKEN" \
http://localhost:3000/api/clients > clients_export.json
Export Products
curl -H "Authorization: Bearer YOUR_TOKEN" \
http://localhost:3000/api/catalog/products > products_export.json
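The three exports above follow one pattern, so a small helper can timestamp each file; the endpoint URLs and TOKEN variable are taken from the examples and should be adjusted to your deployment.

```shell
# export_resource NAME URL: save an authenticated JSON export under a
# dated filename so repeated exports do not overwrite each other.
export_resource() {
  out="${1}_export_$(date +%Y%m%d).json"
  curl -s -H "Authorization: Bearer $TOKEN" "$2" > "$out" && echo "wrote $out"
}

# Usage:
# export_resource users    http://localhost:3000/api/users
# export_resource clients  http://localhost:3000/api/clients
# export_resource products http://localhost:3000/api/catalog/products
```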
CSV Export
Use Prisma Client or SQL queries to export to CSV:
// Example: Export users to CSV
import { createWriteStream } from 'fs'
import { prisma } from './utils/prisma'

const users = await prisma.user.findMany()
const csv = users.map(u =>
  `${u.user_id},${u.email},${u.name},${u.role},${u.status}`
).join('\n')

const stream = createWriteStream('users.csv')
stream.write('ID,Email,Name,Role,Status\n')
stream.write(csv)
stream.end()
Or export directly from MySQL (the server's secure_file_priv setting must permit writing to the target directory):
SELECT * FROM users
INTO OUTFILE '/tmp/users.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
Data Import
Import Users
Bulk create users via API:
// Read the JSON file from disk, then POST each record to the API
import { readFile } from 'fs/promises'

const userData = JSON.parse(await readFile('./users_import.json', 'utf8'))
for (const user of userData) {
  const res = await fetch('http://localhost:3000/api/users', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(user)
  })
  if (!res.ok) console.error(`Import failed (${res.status}) for ${user.email}`)
}
Import from CSV
Load CSV data into the database (LOAD DATA LOCAL requires local_infile to be enabled on both client and server):
LOAD DATA LOCAL INFILE '/tmp/products.csv'
INTO TABLE products
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS; -- Skip header row
Data Seeding
Populate the database with test data for development:
The seed script (seeds/seed-db.ts) should create:
Test users (ADMIN and USER roles)
Sample clients
Product catalog with categories
Services and packs
Marketing items (coupons, bonuses)
Customize the seed file to match your business’s products and services.
Database Migrations
Creating Migrations
After modifying prisma/schema.prisma, create and apply a migration:
pnpx prisma migrate dev --name describe_your_change
This:
Generates SQL migration file
Applies changes to database
Updates Prisma Client
Migrations can cause data loss if not carefully reviewed. Always backup before running migrations in production.
Migration History
View applied migrations:
pnpx prisma migrate status
Rolling Back Migrations
Prisma doesn’t support automatic rollbacks. To revert:
Restore from backup before migration
Or manually write SQL to undo changes
Data Integrity
Referential Integrity
The database enforces relationships:
// User deletion cascades to related consents
model Consent {
  user_id String @db.Char(100)
  user    User   @relation(fields: [user_id], references: [user_id], onDelete: Cascade)
}
Cascade behaviors:
Cascade - Delete related records
SetNull - Set foreign key to null
Restrict - Prevent deletion if related records exist
Data Validation
Validation happens at multiple levels:
Database Constraints
Unique constraints on email addresses
NOT NULL requirements
Foreign key relationships
Prisma Schema
Type checking (String, Int, DateTime, etc.)
Default values
Field lengths (@db.VarChar(50))
API Layer
Zod schema validation
Business logic checks
Authentication and authorization
Frontend Validation
Form input validation
Type safety with TypeScript
Real-time feedback to users
Unique Constraints
Important unique fields to monitor:
email String @unique // User email addresses
sku String @unique // Product SKUs
barcode String @unique // Product barcodes
code String @unique // Service/pack codes, coupon codes, gift card codes
Attempting to create duplicates will throw an error.
Data Archiving
Soft Deletes
Instead of hard deletes, use status flags:
// Mark user as inactive instead of deleting
await prisma.user.update({
  where: { user_id: id },
  data: { status: 'OFF' }
})
Benefits:
Preserve historical data
Maintain referential integrity
Enable data recovery
Support audit trails
Archive Old Data
Move old records to archive tables:
-- Archive completed carts older than 2 years
INSERT INTO carts_archive
SELECT * FROM carts
WHERE status = 'completed'
AND created_at < DATE_SUB(NOW(), INTERVAL 2 YEAR);
-- Delete from main table
DELETE FROM carts
WHERE status = 'completed'
AND created_at < DATE_SUB(NOW(), INTERVAL 2 YEAR);
Run archive operations during low-traffic hours to minimize performance impact, and wrap the INSERT and DELETE in a single transaction so a failure cannot leave rows copied but not removed.
Data Cleanup
Remove Test Data
Delete test records created during development:
-- Delete users with test email domains
DELETE FROM users WHERE email LIKE '%@test.com';
-- Remove products with test SKUs
DELETE FROM products WHERE sku LIKE 'TEST%';
Clean Up Orphaned Records
Find records without valid foreign keys:
-- Find carts without valid user references
SELECT * FROM carts c
LEFT JOIN users u ON c.user_id = u.user_id
WHERE c.user_id IS NOT NULL AND u.user_id IS NULL;
Prisma’s cascade rules should prevent orphaned records, but verify after manual database operations.
Database Indexing
Important indexes for query performance:
-- Index frequently queried fields
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_products_sku ON products(sku);
CREATE INDEX idx_carts_status ON carts(status);
CREATE INDEX idx_bookings_date ON bookings(booking_date);
Query Optimization
Use Prisma’s query optimization features:
// Select only needed fields
const users = await prisma.user.findMany({
  select: {
    user_id: true,
    email: true,
    name: true,
    role: true,
    status: true
  }
})

// Use pagination for large datasets
const products = await prisma.product.findMany({
  skip: (page - 1) * pageSize,
  take: pageSize,
  orderBy: { created_at: 'desc' }
})
Connection Pooling
Prisma automatically manages connection pooling. Configure in DATABASE_URL:
DATABASE_URL="mysql://user:pass@host:3306/db?connection_limit=10"
Compliance and Privacy
GDPR Compliance
For European users:
Export all data for a specific user:
const userData = await prisma.user.findUnique({
  where: { user_id: id },
  include: {
    consents: true,
    questionnaires: true,
    carts: true,
    debts: true
  }
})
Delete a user and all related data:
// Cascades to consents, questionnaires, revokes
await prisma.user.delete({
  where: { user_id: id }
})
Provide data in JSON format via API endpoints
Track consents with the Consent model:
Document URL
Signed date
Status (active/revoked)
Data Retention Policies
Define how long different data types are kept:
User accounts : Inactive for 2 years → archive
Completed transactions : Keep for 7 years (tax compliance)
Client appointments : Keep historical records indefinitely
Marketing data : Review and clean up annually
Logs : Rotate monthly, archive for 1 year
Monitoring and Maintenance
Database Health Checks
Regular maintenance tasks:
-- Check table sizes
SELECT
  table_name,
  ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
FROM information_schema.TABLES
WHERE table_schema = 'beils_dashboard'
ORDER BY (data_length + index_length) DESC;
-- Optimize tables
OPTIMIZE TABLE users, products, carts;
-- Check for fragmentation
SHOW TABLE STATUS FROM beils_dashboard;
Storage Monitoring
Monitor disk usage:
# Check database directory size
du -sh /var/lib/mysql/beils_dashboard
# Monitor available disk space
df -h
Set up alerts for low disk space (below 20%) to prevent database write failures.
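A sketch of such a check, suitable for a cron job: df --output is GNU coreutils, and the 80% default threshold corresponds to the 20%-free rule above.

```shell
# check_disk PATH [THRESHOLD]: warn when the filesystem holding PATH
# is more than THRESHOLD percent full (default 80).
check_disk() {
  used=$(df --output=pcent "$1" | tail -1 | tr -dc '0-9')
  if [ "$used" -ge "${2:-80}" ]; then
    echo "WARNING: $1 is ${used}% full"
  else
    echo "OK: $1 is ${used}% full"
  fi
}
```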
Disaster Recovery
Recovery Plan
1. Assess damage: determine the extent of data loss or corruption
2. Identify the last good backup: find the most recent backup taken before the incident
3. Restore the database: apply the backup to the database server
4. Verify data integrity: check critical tables and relationships
5. Replay transactions: if possible, replay transactions made since the backup
6. Resume operations: return the system to production after verification
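Parts of the restore step can be rehearsed without touching the live server. This sketch verifies the chosen backup and prints the restore command for operator review rather than running it; the command and database name follow the earlier restore examples.

```shell
# restore_drill FILE DB: sanity-check a backup before a real restore.
restore_drill() {
  # Precondition for restoring: the backup must decompress cleanly.
  gzip -t "$1" 2>/dev/null || { echo "ABORT: $1 failed integrity check"; return 1; }
  # Print, rather than run, the actual restore command.
  echo "ready: gunzip < $1 | mysql -u username -p $2"
}
```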
Recovery Time Objective (RTO)
Target time to restore service:
Critical: 1 hour
High: 4 hours
Normal: 24 hours
Recovery Point Objective (RPO)
Maximum acceptable data loss:
Daily backups: Up to 24 hours of data loss
Hourly backups: Up to 1 hour of data loss
Real-time replication: Minimal data loss
Adjust backup frequency based on your RPO requirements.
Related Documentation
Configuration: Database and system configuration
User Management: Manage user accounts and data
Installation: Initial database setup
API Reference: API endpoints for data access