
Overview

This quickstart will walk you through the basics of Materialize using a real-time auction dataset. You’ll learn how to:
  • Connect to Materialize and create a schema
  • Ingest streaming data from a source
  • Create views to transform data
  • Use indexes to maintain incrementally updated results
  • Subscribe to live data changes
  • Verify strong consistency guarantees
This tutorial uses sample auction data that continuously generates at 1-second intervals to simulate a real-time workload.

Prerequisites

To follow this quickstart, you'll need:

1. Materialize Account

Sign up for a free Materialize Cloud account at materialize.com/register.

2. SQL Client

Access the SQL Shell in the Materialize Console (web UI), or use any PostgreSQL-compatible client like psql.

Step 1: Open the SQL Shell

  1. Navigate to the Materialize Console
  2. Sign in to your account
  3. Click SQL Shell in the left-hand menu
By default, you’re connected to the quickstart cluster in the materialize.public namespace.

Step 2: Create a Schema

Create a dedicated schema for this quickstart to keep your work organized.
CREATE SCHEMA auction_demo;
Switch to the new schema:
SET search_path = auction_demo;
Namespace in Materialize:
  • Database: materialize (default)
  • Schema: auction_demo (just created)
  • Cluster: quickstart (default compute cluster)

Step 3: Create a Source

Sources are external systems from which Materialize reads data. For this tutorial, we’ll use Materialize’s built-in auction data generator.
CREATE SOURCE auction_house
  FROM LOAD GENERATOR AUCTION
  (TICK INTERVAL '1s', AS OF 100000)
  FOR ALL TABLES;
This creates a source that:
  • Generates new auction data every second
  • Creates multiple subsources (tables) for different entities
  • Fast-forwards the generator (AS OF 100000), so there is historical data to query right away

View the Created Sources

Check what was created:
SHOW SOURCES;
You should see:
name                   | type           | cluster    
---------------------- | -------------- | ----------
accounts               | subsource      | quickstart
auction_house          | load-generator | quickstart
auction_house_progress | progress       | null
auctions               | subsource      | quickstart
bids                   | subsource      | quickstart
organizations          | subsource      | quickstart
users                  | subsource      | quickstart

Explore the Data

Let’s look at the structure of the data:
SELECT * FROM auctions LIMIT 3;
Auctions:
id    | seller | item               | end_time
------+--------+--------------------+---------------------------
29550 | 2468   | Best Pizza in Town | 2024-07-25 18:24:25.805+00
29551 | 1821   | Vintage Record     | 2024-07-25 18:25:10.432+00
29552 | 3102   | Signed Memorabilia | 2024-07-25 18:26:05.198+00
Bids:
id     | buyer | auction_id | amount | bid_time
-------+-------+------------+--------+---------------------------
295641 | 737   | 29564      | 72     | 2024-07-25 18:25:42.911+00
295642 | 1203  | 29564      | 85     | 2024-07-25 18:25:48.324+00
295643 | 451   | 29565      | 42     | 2024-07-25 18:25:50.667+00

Step 4: Create a View to Find Winning Bids

A view saves a query under a name. Let’s create a view to find winning bids for completed auctions.
CREATE VIEW winning_bids AS
SELECT DISTINCT ON (a.id) 
  b.*,
  a.item,
  a.seller
FROM auctions AS a
JOIN bids AS b ON a.id = b.auction_id
WHERE 
  b.bid_time < a.end_time          -- Bid placed before auction ended
  AND mz_now() >= a.end_time        -- Auction has completed
ORDER BY 
  a.id,
  b.amount DESC,                    -- Highest bid wins
  b.bid_time,
  b.buyer;
This view uses the DISTINCT ON clause, the idiomatic way to express Top-K queries (here, Top-1 per group) in Materialize: for each auction, it keeps only the row with the highest bid amount.
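To build intuition for what DISTINCT ON computes, here is a minimal Python sketch (illustrative only, with invented sample data) of the same Top-1-per-group logic: sort by the view's ORDER BY key, then keep the first row per auction.

```python
# Illustrative sketch, not Materialize code: DISTINCT ON (a.id) with
# ORDER BY a.id, b.amount DESC, b.bid_time, b.buyer picks one winning
# bid per auction. Field names mirror the winning_bids view.
from itertools import groupby

def winning_bids(auctions, bids):
    """Return the highest valid bid per auction (Top-1 per group)."""
    end_times = {a["id"]: a["end_time"] for a in auctions}
    # WHERE b.bid_time < a.end_time: only bids placed before the auction ended
    valid = [b for b in bids
             if b["auction_id"] in end_times
             and b["bid_time"] < end_times[b["auction_id"]]]
    # ORDER BY auction_id, amount DESC, bid_time, buyer
    valid.sort(key=lambda b: (b["auction_id"], -b["amount"],
                              b["bid_time"], b["buyer"]))
    # DISTINCT ON (auction_id): keep the first row of each group
    return [next(group) for _, group in
            groupby(valid, key=lambda b: b["auction_id"])]

auctions = [{"id": 1, "end_time": 100}, {"id": 2, "end_time": 100}]
bids = [
    {"auction_id": 1, "buyer": 7, "amount": 72, "bid_time": 10},
    {"auction_id": 1, "buyer": 8, "amount": 85, "bid_time": 20},
    {"auction_id": 2, "buyer": 9, "amount": 42, "bid_time": 30},
]
print([(b["auction_id"], b["buyer"], b["amount"])
       for b in winning_bids(auctions, bids)])
# → [(1, 8, 85), (2, 9, 42)]
```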

Query the View

Now query the view to see winning bids:
SELECT * FROM winning_bids
ORDER BY bid_time DESC
LIMIT 10;
Filter by a specific item:
SELECT * FROM winning_bids
WHERE item = 'Best Pizza in Town'
ORDER BY bid_time DESC
LIMIT 10;
Each time you query the view, Materialize recomputes the results from scratch. As data grows, this becomes slower. Let’s fix that with an index.

Step 5: Create an Index for Fast Queries

Indexes store query results in memory and incrementally update them as new data arrives.
CREATE INDEX wins_by_item ON winning_bids (item);
What happens when you create this index:
  1. The winning_bids query is executed
  2. Results are stored in memory in the cluster
  3. As new auctions/bids arrive, results are incrementally updated
  4. Queries read from memory — making them fast and computationally free
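As a mental model (a deliberate simplification, not Materialize's actual dataflow internals), an index behaves like an in-memory map that absorbs row-level deltas and serves point lookups without re-running the query:

```python
# Conceptual sketch only: an index keeps results in memory keyed by a
# column and applies row deltas (+1 insert, -1 retraction) instead of
# recomputing the query from scratch.
from collections import defaultdict

class Index:
    def __init__(self, key_col):
        self.key_col = key_col
        self.rows = defaultdict(list)   # key value -> matching rows

    def apply(self, row, diff):
        """Apply one change: diff=+1 inserts the row, diff=-1 retracts it."""
        bucket = self.rows[row[self.key_col]]
        if diff == +1:
            bucket.append(row)
        else:
            bucket.remove(row)

    def lookup(self, key):
        """Point lookup: served straight from memory, no recomputation."""
        return self.rows.get(key, [])

idx = Index("item")
idx.apply({"item": "Best Pizza in Town", "amount": 85}, +1)
idx.apply({"item": "Vintage Record", "amount": 60}, +1)
print(idx.lookup("Best Pizza in Town"))
# → [{'item': 'Best Pizza in Town', 'amount': 85}]
```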

Test the Performance

Run the same queries again:
SELECT * FROM winning_bids
WHERE item = 'Best Pizza in Town'
ORDER BY bid_time DESC
LIMIT 10;
Notice the query is faster because it reads from the index in memory instead of recomputing results.
The index also enables efficient point lookups and delta joins on the item column.

Step 6: Find Auction Flippers in Real-Time

Let’s build something more complex: detecting “auction flippers” — users who buy items and resell them at a higher price within 8 days.

Create Flip Activities View

CREATE VIEW flip_activities AS
SELECT 
  w2.seller AS flipper_id,
  w2.item,
  w2.amount AS sold_amount,
  w1.amount AS purchased_amount,
  w2.amount - w1.amount AS profit,
  datediff('days', w2.bid_time, w1.bid_time) AS days_between
FROM winning_bids AS w1
JOIN winning_bids AS w2
  ON w1.buyer = w2.seller          -- Same person bought and sold
  AND w1.item = w2.item            -- Same item
WHERE 
  w2.amount > w1.amount            -- Sold at higher price
  AND datediff('days', w2.bid_time, w1.bid_time) < 8;  -- Within 8 days
View flip activities:
SELECT * FROM flip_activities
ORDER BY profit DESC
LIMIT 10;
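The self-join above can be sketched in plain Python (illustrative only; the sample rows are invented and field names mirror the winning_bids view):

```python
# Illustrative sketch of the flip_activities self-join: pair each
# purchase (w1) with a later, pricier sale (w2) of the same item by the
# same user, within 8 days. Sample data invented for the example.
from datetime import datetime

def flip_activities(wins, max_days=8):
    flips = []
    for w1 in wins:          # the purchase (user appears as buyer)
        for w2 in wins:      # the resale (same user appears as seller)
            if (w1["buyer"] == w2["seller"]        # same person bought and sold
                    and w1["item"] == w2["item"]   # same item
                    and w2["amount"] > w1["amount"]  # sold at a higher price
                    and 0 <= (w2["bid_time"] - w1["bid_time"]).days < max_days):
                flips.append({"flipper_id": w2["seller"],
                              "item": w2["item"],
                              "profit": w2["amount"] - w1["amount"]})
    return flips

wins = [
    {"buyer": 450, "seller": 12, "item": "Vintage Record",
     "amount": 40, "bid_time": datetime(2024, 7, 20)},
    {"buyer": 99, "seller": 450, "item": "Vintage Record",
     "amount": 70, "bid_time": datetime(2024, 7, 24)},
]
print(flip_activities(wins))
# → [{'flipper_id': 450, 'item': 'Vintage Record', 'profit': 30}]
```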

Create a Table for Known Flippers

Sometimes you have external data about known flippers. Let’s create a table for that:
CREATE TABLE known_flippers (
  flipper_id BIGINT
);

Combine Into a Flippers View

Create a view that flags users as flippers if they either:
  • Have 2+ flip activities, OR
  • Are in the known_flippers table
CREATE VIEW flippers AS
SELECT flipper_id
FROM (
  -- Users with multiple flip activities
  SELECT flipper_id
  FROM flip_activities
  GROUP BY flipper_id
  HAVING COUNT(*) >= 2
  
  UNION ALL
  
  -- Manually added known flippers
  SELECT flipper_id
  FROM known_flippers
);
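The same either/or logic, sketched in Python (sample data invented; note that a Python set union deduplicates, whereas the view's UNION ALL may emit a user more than once):

```python
# Illustrative sketch: a user is a flipper if they have 2+ flip
# activities OR appear in the manually maintained known_flippers table.
from collections import Counter

def flippers(flip_activities, known_flippers):
    counts = Counter(f["flipper_id"] for f in flip_activities)
    repeat = {fid for fid, n in counts.items() if n >= 2}  # HAVING COUNT(*) >= 2
    return repeat | set(known_flippers)                     # union with known flippers

activity = [{"flipper_id": 1}, {"flipper_id": 1}, {"flipper_id": 2}]
print(sorted(flippers(activity, [450])))
# → [1, 450]
```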

Step 7: Subscribe to Live Changes

Materialize’s SUBSCRIBE command lets you see results as they change in real-time.
SUBSCRIBE TO (
  SELECT * FROM flippers
) WITH (snapshot = false);
The snapshot = false option means you’ll only see new flippers that appear after starting the subscription.

Add a Manual Flipper

In another session (or stop the subscription), insert a known flipper:
INSERT INTO known_flippers VALUES (450);
The subscription immediately shows this new flipper.
You’ll also see flippers detected from auction activity. Due to the randomness of generated data, this might take a moment.
Stop the subscription by clicking Stop streaming or pressing Ctrl+C.

Step 8: Verify Strong Consistency

Let’s verify that Materialize maintains consistency even with continuous data changes. Create views to track credits (sellers) and debits (buyers):
-- Track seller credits
CREATE VIEW seller_credits AS
SELECT 
  seller,
  SUM(amount) AS credits
FROM winning_bids
GROUP BY seller;

-- Track buyer debits
CREATE VIEW buyer_debits AS
SELECT 
  buyer,
  SUM(amount) AS debits
FROM winning_bids
GROUP BY buyer;

-- Verify totals match (should always be 0 difference)
CREATE VIEW funds_movement AS
SELECT 
  SUM(credits) AS total_credits,
  SUM(debits) AS total_debits,
  SUM(credits) - SUM(debits) AS difference
FROM (
  SELECT SUM(credits) AS credits, 0 AS debits
  FROM seller_credits
  
  UNION ALL
  
  SELECT 0 AS credits, SUM(debits) AS debits
  FROM buyer_debits
);
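The reason the difference is always 0 is that every winning bid adds the same amount to one seller's credits and one buyer's debits, so the two totals are sums over the same rows. A small Python check of that invariant (sample rows invented for the example):

```python
# Each winning bid credits its seller and debits its buyer by the same
# amount, so total credits always equal total debits.
wins = [
    {"seller": 12, "buyer": 450, "amount": 70},
    {"seller": 7,  "buyer": 99,  "amount": 85},
]

credits, debits = {}, {}
for w in wins:
    credits[w["seller"]] = credits.get(w["seller"], 0) + w["amount"]  # seller_credits
    debits[w["buyer"]] = debits.get(w["buyer"], 0) + w["amount"]      # buyer_debits

total_credits = sum(credits.values())
total_debits = sum(debits.values())
print(total_credits, total_debits, total_credits - total_debits)
# → 155 155 0
```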

Watch Consistency in Real-Time

Subscribe to see the totals change:
SUBSCRIBE TO (
  SELECT * FROM funds_movement
);
Toggle Show diffs in the console to see changes. As new auctions complete:
  • total_credits and total_debits will increase
  • difference will always remain 0 — proving strong consistency
This demonstrates Materialize’s strict serializability guarantee: credits and debits are always consistent, even as thousands of auctions and bids stream through the system.

Step 9: Clean Up

When you’re done with the quickstart:
-- Drop the source and all dependent objects
DROP SOURCE auction_house CASCADE;

-- Drop the table
DROP TABLE known_flippers;

-- Drop the schema
DROP SCHEMA auction_demo CASCADE;

Summary

In this quickstart, you learned:

Sources

How to ingest streaming data from external systems

Views

How to transform data using SQL views

Indexes

How indexes provide fast, always-fresh query results through incremental updates

Consistency

How Materialize guarantees strong consistency across all queries

Key Takeaways

  1. Incremental Updates: Indexes and materialized views incrementally update results instead of recomputing from scratch
  2. Computationally Free Reads: Because work is performed on writes, reads from indexed views are instant and free
  3. Strong Consistency: Materialize provides strict serializability — results are always correct and consistent
  4. Real-Time Subscriptions: Use SUBSCRIBE to watch data change as it happens

Indexed Views vs. Materialized Views

When should you use each?
Use Case                                | Recommended Approach
--------------------------------------- | -----------------------------------
Results needed only in one cluster      | Indexed view
Results shared across multiple clusters | Materialized view
Final output for SUBSCRIBE consumers    | Materialized view
Fast point lookups within a cluster     | Indexed view
Intermediate transformations            | View (no index or materialization)

Next Steps

Key Concepts

Deep dive into sources, views, indexes, clusters, and sinks

Connect Your Data

Learn how to ingest data from PostgreSQL, MySQL, Kafka, and more

Optimization Guide

Learn best practices for optimizing query performance

dbt Integration

Use dbt Core to manage your Materialize transformations

Get Help

Documentation

Complete reference documentation

Community Slack

Ask questions and get help

Schedule Demo

Get personalized guidance
