Geni can automatically generate a schema.sql file after running migrations, giving you a complete snapshot of your database structure that can be committed to version control. This makes it easy to see your current schema without reading through individual migration files.

How Schema Dumping Works

When enabled, Geni dumps the complete database schema after each successful migration operation (geni up or geni down). The schema file includes all tables, indexes, constraints, views, and other database objects. From the migration code (src/lib/migrate.rs:66-70):
if dump_schema {
    if let Err(err) = database.dump_database_schema().await {
        log::error!("Skipping dumping database schema: {:?}", err);
    }
}
Schema dumping is non-blocking: if it fails, your migration still succeeds. The error is logged but doesn’t affect the migration status.
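The control flow can be sketched in Python (a hypothetical mirror of the Rust snippet above, not Geni's actual API): the dump step is wrapped so that a failure is logged rather than raised.

```python
import logging

log = logging.getLogger("geni")

def run_migration(apply_migration, dump_schema_fn, dump_enabled=True):
    """Apply the migration first; schema dumping is strictly best-effort."""
    apply_migration()  # a failure here DOES fail the run
    if dump_enabled:
        try:
            dump_schema_fn()
        except Exception as err:
            # mirrors `log::error!("Skipping dumping database schema: ...")`
            log.error("Skipping dumping database schema: %r", err)
    return "migrated"

def failing_dump():
    raise RuntimeError("mysqldump command not found")

# The failing dump is swallowed; the migration still reports success.
print(run_migration(lambda: None, failing_dump))  # → migrated
```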

Enabling Schema Dumps

Schema dumping is controlled by the dump_schema parameter when running Geni:

Command Line

# Schema dumping is enabled by default when using geni up/down
geni up

Programmatic Usage

use geni;

#[tokio::main]
async fn main() {
    geni::migrate_database(
        "postgres://localhost/myapp".to_string(),
        None,
        "schema_migrations".to_string(),
        "./migrations".to_string(),
        "schema.sql".to_string(),
        Some(30),
        true,  // Enable schema dumping
    )
    .await
    .unwrap();
}

Disabling Schema Dumps

Set the environment variable:
export DATABASE_NO_DUMP_SCHEMA=true
geni up

Schema File Location

By default, the schema file is created in your migrations folder:
migrations/
├── 1234567890_create_users.up.sql
├── 1234567890_create_users.down.sql
├── 1234567891_add_posts.up.sql
├── 1234567891_add_posts.down.sql
└── schema.sql  ← Generated schema dump
Customize the filename:
export DATABASE_SCHEMA_FILE="db_schema.sql"
geni up
Or in GitHub Actions:
- uses: emilpriver/geni@main
  with:
    migrations_folder: "./migrations"
    schema_file: "current_schema.sql"
    database_url: ${{ secrets.DATABASE_URL }}

Schema Format by Database

Each database type has its own schema dump implementation:

PostgreSQL

PostgreSQL schemas are generated using pure SQL queries, with no external dependencies required. From src/lib/database_drivers/postgres.rs:187-445, the schema includes:

Extensions:
-- EXTENSIONS

CREATE EXTENSION IF NOT EXISTS "uuid-ossp" WITH SCHEMA public;
CREATE EXTENSION IF NOT EXISTS "pgcrypto" WITH SCHEMA public;
Tables:
-- TABLES

CREATE TABLE users (
 id integer NOT NULL,
 name character varying NOT NULL,
 email character varying NOT NULL,
 created_at timestamp without time zone
);
Views:
-- VIEWS

CREATE VIEW active_users AS
SELECT * FROM users WHERE status = 'active';
Constraints:
-- CONSTRAINTS

ALTER TABLE users ADD CONSTRAINT users_pkey PRIMARY KEY (id);
ALTER TABLE orders ADD CONSTRAINT orders_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id);
Indexes:
-- INDEXES

CREATE INDEX idx_users_email ON public.users USING btree (email);
CREATE UNIQUE INDEX users_email_key ON public.users USING btree (email);
Sequences:
-- SEQUENCES

CREATE SEQUENCE users_id_seq AS integer START WITH 1 MINVALUE 1 MAXVALUE 2147483647 INCREMENT BY 1 CYCLE;
PostgreSQL schema dumping works without any external dependencies - it uses SQL queries against information_schema and PostgreSQL system catalogs.
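An illustrative sketch of that approach (not Geni's actual code): query information_schema.columns for each table, then assemble the CREATE TABLE text from the returned rows. The sample rows and the render_create_table helper below are hypothetical.

```python
SAMPLE_ROWS = [
    # (column_name, data_type, is_nullable) as information_schema.columns returns them
    ("id", "integer", "NO"),
    ("name", "character varying", "NO"),
    ("created_at", "timestamp without time zone", "YES"),
]

def render_create_table(table, columns):
    """Assemble a CREATE TABLE statement from information_schema-style rows."""
    cols = []
    for name, dtype, nullable in columns:
        null_sql = "" if nullable == "YES" else " NOT NULL"
        cols.append(f" {name} {dtype}{null_sql}")
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"

print(render_create_table("users", SAMPLE_ROWS))
```

This produces a CREATE TABLE block in the same shape as the dump output shown above.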

SQLite

SQLite schemas are extracted using the sqlite_master table. From src/lib/database_drivers/sqlite.rs:160-207:
let mut result = self
    .db
    .query("SELECT sql FROM sqlite_master", params![])
    .await?;

let mut schemas: Vec<String> = vec![];
while let Some(row) = result.next().await.unwrap_or(None) {
    if let Ok(r) = row.get_str(0) {
        let text = r
            .to_string()
            .trim_start_matches('"')
            .trim_end_matches('"')
            .to_string()
            .replace("\\n", "\n");
        schemas.push(format!("{};", text));
    }
}
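The same idea in a runnable form, using Python's sqlite3 module as an analogy to the Rust driver above (this is not Geni's code): every object's DDL lives in the sqlite_master table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL);
    CREATE INDEX idx_users_email ON users(email);
""")

# `sql` is NULL for auto-created internal objects (e.g. sqlite_autoindex_*),
# so filter those out before joining the DDL statements together.
rows = conn.execute("SELECT sql FROM sqlite_master WHERE sql IS NOT NULL").fetchall()
schema = "\n\n".join(f"{sql};" for (sql,) in rows)
print(schema)
```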
Example SQLite schema.sql:
--
-- Sqlite SQL Schema dump automatic generated by geni
--

CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT UNIQUE NOT NULL
);

CREATE INDEX idx_users_email ON users(email);

CREATE TABLE posts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL,
    title TEXT NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users(id)
);

LibSQL

LibSQL uses the same schema dumping approach as SQLite since it’s a fork of SQLite.

MySQL

MySQL schema dumping requires the mysqldump binary to be installed.
MySQL Dependency: mysqldump is pre-installed in the official Geni Docker image but may need to be installed separately in other environments.
# Check if mysqldump is available
which mysqldump

# Install on Ubuntu/Debian
sudo apt-get install mysql-client

# Install on macOS
brew install mysql-client

MariaDB

MariaDB schema dumping requires the mariadb-dump binary.
MariaDB Dependency: mariadb-dump is pre-installed in the official Geni Docker image but may need to be installed separately in other environments.
# Check if mariadb-dump is available  
which mariadb-dump

# Install on Ubuntu/Debian
sudo apt-get install mariadb-client

# Install on macOS
brew install mariadb

Database Requirements Summary

| Database   | External Dependency      | Notes                         |
|------------|--------------------------|-------------------------------|
| PostgreSQL | ✅ None                  | Pure SQL implementation       |
| SQLite     | ✅ None                  | Uses sqlite_master table      |
| LibSQL     | ✅ None                  | Uses same approach as SQLite  |
| MySQL      | ❌ Requires mysqldump    | Pre-installed in Docker image |
| MariaDB    | ❌ Requires mariadb-dump | Pre-installed in Docker image |
From the README:
Dumping needs another binaries to work:
- Postgres: Works without need for another binary. Uses SQL code to get schema
- MySQL: `mysqldump` need to be installed(already installed in docker)
- MariaDB: `mariadb-dump` need to be installed(already installed in docker)
- Sqlite: Works without need for another binary. Uses SQL code to get schema
- LibSQL: Works without need for another binary. Uses SQL code to get schema

Using in Version Control

The primary use case for schema dumping is tracking your database structure in Git:

Workflow

# 1. Create a new migration
geni new add_user_preferences

# 2. Edit the migration files
vim migrations/*_add_user_preferences.up.sql
vim migrations/*_add_user_preferences.down.sql

# 3. Apply the migration (schema.sql is automatically updated)
geni up

# 4. Commit everything together
git add migrations/
git commit -m "Add user preferences table"

Git Diff Example

When you commit, you’ll see both the migration and the schema change:
# New migration files
+ migrations/1709123456_add_user_preferences.up.sql
+ migrations/1709123456_add_user_preferences.down.sql

# Updated schema
migrations/schema.sql:
+ CREATE TABLE user_preferences (
+     id SERIAL PRIMARY KEY,
+     user_id INTEGER NOT NULL REFERENCES users(id),
+     theme VARCHAR(20) DEFAULT 'light',
+     notifications_enabled BOOLEAN DEFAULT true
+ );
+ 
+ CREATE INDEX idx_user_preferences_user_id ON user_preferences(user_id);
Reviewing the schema.sql diff in pull requests makes it easy to understand the cumulative effect of migrations without reading through individual migration files.

Manual Schema Dumps

You can also trigger a schema dump manually:
geni dump
This is useful when:
  • You want to refresh the schema file without running migrations
  • You made manual schema changes (not recommended) and want to capture them
  • You’re setting up schema dumping for an existing project
From the dump implementation (src/lib/dump.rs:4-26):
pub async fn dump(
    database_url: String,
    database_token: Option<String>,
    migrations_table: String,
    migrations_folder: String,
    schema_file: String,
    wait_timeout: Option<usize>,
) -> Result<()> {
    let mut database = database_drivers::new(
        database_url,
        database_token,
        migrations_table,
        migrations_folder,
        schema_file,
        wait_timeout,
        true,
    )
    .await?;

    database.dump_database_schema().await?;

    Ok(())
}

Docker Usage

Standard Docker Image

The standard Geni Docker image includes all necessary dump utilities:
docker run --rm -it \
  -v "$(pwd)/migrations:/migrations" \
  -e DATABASE_URL="postgres://[email protected]:5432/myapp" \
  ghcr.io/emilpriver/geni:latest up
This image includes:
  • pg_dump for PostgreSQL
  • mysqldump for MySQL
  • mariadb-dump for MariaDB

Slim Docker Image

The slim image does not include dump utilities:
docker run --rm -it \
  -v "$(pwd)/migrations:/migrations" \
  -e DATABASE_URL="sqlite://./db.sqlite" \
  -e DATABASE_NO_DUMP_SCHEMA=true \
  ghcr.io/emilpriver/geni:latest-slim up
From the README: “This image won’t try to dump the database” - use the slim image when you don’t need schema dumping or are using SQLite/LibSQL/PostgreSQL (which don’t require external tools).

Schema File Contents

The schema file is a complete, executable SQL file that recreates your database structure:
--
-- Postgres SQL Schema dump automatic generated by geni
--

-- EXTENSIONS

CREATE EXTENSION IF NOT EXISTS "uuid-ossp" WITH SCHEMA public;

-- TABLES

CREATE TABLE users (
 id integer NOT NULL,
 name character varying NOT NULL,
 email character varying NOT NULL,
 created_at timestamp without time zone
);

CREATE TABLE posts (
 id integer NOT NULL,
 user_id integer NOT NULL,
 title character varying NOT NULL,
 content text
);

-- CONSTRAINTS

ALTER TABLE users ADD CONSTRAINT users_pkey PRIMARY KEY (id);
ALTER TABLE posts ADD CONSTRAINT posts_pkey PRIMARY KEY (id);
ALTER TABLE posts ADD CONSTRAINT posts_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id);

-- INDEXES

CREATE INDEX idx_posts_user_id ON public.posts USING btree (user_id);
CREATE UNIQUE INDEX users_email_key ON public.users USING btree (email);

Use Cases

Code Review

Reviewers can quickly see the database structure changes:
# In a pull request
git diff main...feature/add-comments migrations/schema.sql

Documentation

The schema file serves as living documentation:
# New team member wants to understand the database
cat migrations/schema.sql

Database Recreation

Quickly recreate the database structure in a new environment:
# Instead of running all migrations
psql myapp < migrations/schema.sql

# Then mark all migrations as applied
# (This is an advanced use case - normally you'd use geni up)
Don’t Use schema.sql as a Replacement for Migrations

The schema file is for reference and review. Always use geni up to apply migrations in production. The schema file:
  • Doesn’t track migration history
  • Doesn’t handle data migrations
  • May not perfectly recreate all database nuances

CI/CD Integration

Commit the schema file in CI/CD to catch unintended changes:
- name: Run migrations
  run: geni up

- name: Check for schema changes
  run: |
    if [[ -n $(git diff migrations/schema.sql) ]]; then
      echo "Schema file changed - commit it!"
      git diff migrations/schema.sql
      exit 1
    fi
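The drift check above can also be sketched without shelling out to git (a hypothetical helper, not part of Geni): compare the committed schema file against a freshly dumped one and fail CI on any difference.

```python
import difflib

def schema_drift(committed: str, dumped: str) -> str:
    """Return a unified diff; an empty string means no drift (CI can pass)."""
    return "".join(difflib.unified_diff(
        committed.splitlines(keepends=True),
        dumped.splitlines(keepends=True),
        fromfile="committed/schema.sql",
        tofile="dumped/schema.sql",
    ))

committed = "CREATE TABLE users (id integer NOT NULL);\n"
dumped = committed + "CREATE TABLE posts (id integer NOT NULL);\n"

print(schema_drift(committed, committed) == "")  # → True (no drift)
print(schema_drift(committed, dumped))           # shows the added posts table
```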

Best Practices

Always Commit schema.sql

Treat it as part of your migration:
git add migrations/1234567890_create_users.up.sql
git add migrations/1234567890_create_users.down.sql
git add migrations/schema.sql
git commit -m "Add users table"

Add to .gitignore if Needed

If you prefer not to track the schema file:
# .gitignore
migrations/schema.sql
But we recommend tracking it for the benefits mentioned above.

Review Schema Diffs

Before merging, check the schema diff makes sense:
git diff migrations/schema.sql

Refresh Schema After Switching Branches

git checkout main
geni down -a 5  # Rollback feature branch migrations
geni up         # Re-apply main branch migrations (schema.sql updated)

Troubleshooting

Schema Dump Fails

If schema dumping fails, migrations still succeed:
[ERROR] Skipping dumping database schema: mysqldump command not found
Solutions:
  • Install the required binary (mysqldump, mariadb-dump)
  • Use the official Docker image which includes all utilities
  • Disable schema dumping: export DATABASE_NO_DUMP_SCHEMA=true

Schema File Not Updated

Check that schema dumping is enabled:
# Verify the environment variable isn't set
echo $DATABASE_NO_DUMP_SCHEMA  # Should be empty or "false"

# Run migration with schema dump
unset DATABASE_NO_DUMP_SCHEMA
geni up

Permission Errors

Ensure Geni has write access to the migrations folder:
ls -la migrations/
chmod u+w migrations/  # Grant write permission if needed
You might wonder why you need both individual migrations and a complete schema file.

Migrations are for:
  • Applying incremental changes to existing databases
  • Tracking the history of database evolution
  • Rolling back specific changes
  • Data migrations and transformations
Schema file is for:
  • Quick reference to current database structure
  • Code review and documentation
  • Understanding the complete schema at a glance
  • New developer onboarding
They complement each other - migrations tell the story of how you got here, the schema file shows where you are.
