Overview
This quickstart will walk you through the basics of Materialize using a real-time auction dataset. You'll learn how to:

- Connect to Materialize and create a schema
- Ingest streaming data from a source
- Create views to transform data
- Use indexes to maintain incrementally updated results
- Subscribe to live data changes
- Verify strong consistency guarantees
This tutorial uses sample auction data that continuously generates at 1-second intervals to simulate a real-time workload.
Prerequisites
To follow this quickstart, you'll need:

Materialize Account

Sign up for a free Materialize Cloud account at materialize.com/register.

Alternatives:
- Download the Materialize Emulator for local testing
- Use your own self-managed Materialize deployment
Step 1: Open the SQL Shell
- Materialize Cloud
- Emulator
- psql Client
- Navigate to the Materialize Console
- Sign in to your account
- Click SQL Shell in the left-hand menu
By default, you'll be connected to the quickstart cluster in the materialize.public namespace.

Step 2: Create a Schema
Create a dedicated schema for this quickstart to keep your work organized.

Namespace in Materialize:
- Database: materialize (default)
- Schema: auction_demo (just created)
- Cluster: quickstart (default compute cluster)
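The commands for this step can be sketched as follows, using the auction_demo schema name from above and PostgreSQL-style search_path to switch into it:

```sql
-- Create a dedicated schema for the quickstart objects
CREATE SCHEMA auction_demo;

-- Point the current session at the new schema
SET search_path = auction_demo;
```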
Step 3: Create a Source
Sources are external systems from which Materialize reads data. For this tutorial, we'll use Materialize's built-in auction data generator. The generator:

- Generates auction data every 1 second
- Creates multiple subsources (tables) for different entities
- Starts from auction ID 100000
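Based on the bullets above, the source definition can be sketched as follows; the source name auction_house is an assumption, and the options mirror the 1-second tick and 100000 starting ID described above:

```sql
-- Built-in load generator that emits new auction data every second,
-- starting from auction ID 100000, with one subsource per entity
CREATE SOURCE auction_house
FROM LOAD GENERATOR AUCTION
  (TICK INTERVAL '1s', AS OF 100000)
FOR ALL TABLES;
```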
View the Created Sources
Check what was created:

Explore the Data
Let's look at the structure of the data:

Auctions:

Bids:
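A sketch of the exploration queries, assuming the generator creates auctions and bids subsources (as referenced later in this tutorial):

```sql
-- List the source and its subsources
SHOW SOURCES;

-- Peek at a few rows from each subsource
SELECT * FROM auctions LIMIT 3;
SELECT * FROM bids LIMIT 3;
```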
Step 4: Create a View to Find Winning Bids
A view saves a query under a name. Let's create a view to find winning bids for completed auctions. This view uses Materialize's idiomatic DISTINCT ON clause for Top-K queries, which efficiently finds the highest bid per auction.

Query the View
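Putting this step together, a sketch of the view definition and a follow-up query; the column names (auction_id, bid_time, end_time, amount, buyer, item, seller) are assumptions about the generator's schema:

```sql
-- Highest bid per completed auction, using DISTINCT ON for Top-K (K = 1)
CREATE VIEW winning_bids AS
SELECT DISTINCT ON (auctions.id)
       bids.*, auctions.item, auctions.seller
FROM auctions
JOIN bids ON auctions.id = bids.auction_id
WHERE bids.bid_time < auctions.end_time  -- bid placed before the auction closed
  AND mz_now() >= auctions.end_time      -- auction has completed
ORDER BY auctions.id, bids.amount DESC, bids.bid_time, bids.buyer;

-- Query the view for recent winners
SELECT * FROM winning_bids
ORDER BY bid_time DESC
LIMIT 10;
```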
Now query the view to see winning bids:

Step 5: Create an Index for Fast Queries
Indexes store query results in memory and incrementally update them as new data arrives. When you create an index:

- The winning_bids query is executed
- Results are stored in memory in the cluster
- As new auctions and bids arrive, results are incrementally updated
- Queries read from memory, making them fast and computationally free
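The index for this step might look like the following sketch; the index name is an assumption, and it is keyed on item to support the point lookups mentioned below:

```sql
-- Build an in-memory, incrementally maintained index on winning_bids,
-- keyed on item for fast point lookups and delta joins
CREATE INDEX wins_by_item ON winning_bids (item);
```

After this, a query such as `SELECT * FROM winning_bids WHERE item = ...` can be served directly from the indexed results in memory.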
Test the Performance
Run the same queries again:

The index also enables efficient point lookups and delta joins on the item column.

Step 6: Find Auction Flippers in Real-Time
Let's build something more complex: detecting "auction flippers", users who buy items and resell them at a higher price within 8 days.

Create Flip Activities View
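One possible sketch of this view, self-joining winning_bids to pair a user's purchase with a later, higher-priced resale of the same item; the view and column names are assumptions:

```sql
-- A "flip": the same user buys an item, then resells it
-- at a higher price within 8 days
CREATE VIEW flip_activities AS
SELECT w2.seller             AS flipper_id,
       w2.item               AS item,
       w1.amount             AS purchased_amount,
       w2.amount             AS sold_amount,
       w2.amount - w1.amount AS profit
FROM winning_bids AS w1  -- the user's purchase (as buyer)
JOIN winning_bids AS w2  -- the user's resale (as seller)
  ON w1.buyer = w2.seller
 AND w1.item  = w2.item
WHERE w2.amount > w1.amount
  AND w2.bid_time BETWEEN w1.bid_time
                      AND w1.bid_time + INTERVAL '8 days';
```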
Create a Table for Known Flippers
Sometimes you have external data about known flippers. Let's create a table for that:

Combine Into a Flippers View
Create a view that flags users as flippers if they either:

- Have 2+ flip activities, OR
- Are in the known_flippers table
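A sketch of the table and the combined view, matching the two criteria above (the known_flippers name comes from this tutorial; the flipper_id column type is an assumption):

```sql
-- Externally sourced list of known flippers
CREATE TABLE known_flippers (flipper_id bigint);

-- A user is a flipper with 2+ flip activities, or if externally flagged
CREATE VIEW flippers AS
SELECT flipper_id
FROM flip_activities
GROUP BY flipper_id
HAVING count(*) >= 2
UNION
SELECT flipper_id FROM known_flippers;
```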
Step 7: Subscribe to Live Changes
Materialize's SUBSCRIBE command lets you see results as they change in real time.

The snapshot = false option means you'll only see new flippers that appear after starting the subscription.
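The subscription might look like this sketch, streaming changes to the flippers view without an initial snapshot:

```sql
-- Stream changes to flippers as they happen;
-- snapshot = false skips rows that already exist
SUBSCRIBE TO (SELECT flipper_id FROM flippers)
WITH (snapshot = false);
```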
Add a Manual Flipper
In another session (or after stopping the subscription), insert a known flipper:

You'll also see flippers detected from auction activity. Due to the randomness of the generated data, this might take a moment.
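For example, assuming the known_flippers table from the previous step (the user ID here is arbitrary):

```sql
-- Manually flag a user as a known flipper
INSERT INTO known_flippers VALUES (450);
```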
To stop the subscription, press Ctrl+C.
Step 8: Verify Strong Consistency
Let's verify that Materialize maintains consistency even with continuous data changes. Create views to track credits (sellers) and debits (buyers):

Watch Consistency in Real-Time
Subscribe to see the totals change:

- total_credits and total_debits will increase
- difference will always remain 0, proving strong consistency
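Putting this step together, one possible sketch of the credit/debit views and the subscription; the view and column names are assumptions, with each winning bid crediting the seller and debiting the buyer:

```sql
-- Money owed to each seller for their won auctions
CREATE VIEW credits AS
SELECT seller, sum(amount) AS credits
FROM winning_bids
GROUP BY seller;

-- Money owed by each buyer for their winning bids
CREATE VIEW debits AS
SELECT buyer, sum(amount) AS debits
FROM winning_bids
GROUP BY buyer;

-- Totals should always balance, even as data streams in
SUBSCRIBE TO (
  SELECT c.total AS total_credits,
         d.total AS total_debits,
         c.total - d.total AS difference
  FROM (SELECT sum(credits) AS total FROM credits) AS c,
       (SELECT sum(debits)  AS total FROM debits)  AS d
);
```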
This demonstrates Materialize’s strict serializability guarantee: credits and debits are always consistent, even as thousands of auctions and bids stream through the system.
Step 9: Clean Up
When you're done with the quickstart:

Summary
In this quickstart, you learned:

Sources
How to ingest streaming data from external systems
Views
How to transform data using SQL views
Indexes
How indexes provide fast, always-fresh query results through incremental updates
Consistency
How Materialize guarantees strong consistency across all queries
Key Takeaways
- Incremental Updates: Indexes and materialized views incrementally update results instead of recomputing from scratch
- Computationally Free Reads: Because work is performed on writes, reads from indexed views are instant and free
- Strong Consistency: Materialize provides strict serializability — results are always correct and consistent
- Real-Time Subscriptions: Use SUBSCRIBE to watch data change as it happens
Indexed Views vs. Materialized Views
When should you use each?

| Use Case | Recommended Approach |
|---|---|
| Results needed only in one cluster | Indexed view |
| Results shared across multiple clusters | Materialized view |
| Final output for SUBSCRIBE consumers | Materialized view |
| Fast point lookups within a cluster | Indexed view |
| Intermediate transformations | View (no index or materialization) |
Next Steps
Key Concepts
Deep dive into sources, views, indexes, clusters, and sinks
Connect Your Data
Learn how to ingest data from PostgreSQL, MySQL, Kafka, and more
Optimization Guide
Learn best practices for optimizing query performance
dbt Integration
Use dbt Core to manage your Materialize transformations
Get Help
Documentation
Complete reference documentation
Community Slack
Ask questions and get help
Schedule Demo
Get personalized guidance