Caching is particularly valuable for viral content. A post that is being shared thousands of times will generate many simultaneous Postcard requests. After the first analysis completes, every subsequent lookup is served from the cache in milliseconds.
## Cache key
The cache key is the normalized post URL — the URL after stripping query parameters and canonicalizing via `normalizePostUrl()`. This means `https://x.com/user/status/123?s=20&t=abc` and `https://x.com/user/status/123` resolve to the same cache entry.
The normalized URL is stored in the `posts` table and used for all lookups.
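The internals of `normalizePostUrl()` are not reproduced here, but the behavior described above can be sketched as follows (a minimal approximation, not the actual implementation — the real function may handle more cases, such as alternate hosts or trailing slashes):

```typescript
// Sketch of the normalization described above: query parameters and
// fragments are stripped so share-link variants of the same post
// collapse to a single cache key.
function normalizePostUrl(rawUrl: string): string {
  const url = new URL(rawUrl);
  url.search = ""; // drop tracking params like ?s=20&t=abc
  url.hash = "";   // drop fragments
  return url.toString();
}

// Both share-link variants resolve to the same cache entry:
normalizePostUrl("https://x.com/user/status/123?s=20&t=abc");
normalizePostUrl("https://x.com/user/status/123");
```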
## Cache hit flow
When a completed result exists for the normalized URL and `refresh` is not requested:

- The pipeline queries the `postcards` and `posts` tables for the normalized URL.
- A completed row is found.
- The result is returned immediately; no API key is needed.
- The `hits` counter on the `postcards` row is incremented.
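The hit path above can be sketched against an in-memory stand-in for the database (row shape and function name are hypothetical; the real rows live in SQLite via Drizzle):

```typescript
// Hypothetical row shape; a real lookup would query the postcards/posts
// tables by normalized URL rather than a Map.
type PostcardRow = {
  status: "processing" | "completed";
  result?: unknown;
  hits: number;
};

const cache = new Map<string, PostcardRow>(); // keyed by normalized URL

function serveFromCache(normalizedUrl: string): unknown | null {
  const row = cache.get(normalizedUrl);
  if (!row || row.status !== "completed") return null; // cache miss
  row.hits += 1;     // counted every time the cached report is served
  return row.result; // returned immediately — no API key involved
}
```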
## Cache miss flow
When no completed result exists for the URL:

- A fresh `postcards` row is created with `status: "processing"`.
- The full four-stage pipeline runs (scrape → corroborate → audit → score).
- Results are written to the `postcards` and `posts` tables.
- The row's `status` is set to `"completed"`.

A user-supplied API key (`userApiKey`) is required to initiate a fresh analysis. Cache hits do not require an API key.
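The miss path can be sketched like this (the function name and stage stand-ins are hypothetical; only `userApiKey`, the statuses, and the four stages come from the flow above, and the real pipeline is asynchronous):

```typescript
type Status = "processing" | "completed";
interface Row { status: Status; result?: unknown; }

const rows = new Map<string, Row>(); // stand-in for the postcards table

// Synchronous stand-in for the fresh-analysis path.
function runFreshAnalysis(normalizedUrl: string, userApiKey?: string): unknown {
  if (!userApiKey) {
    // Only cache hits are key-free; a miss always needs a key.
    throw new Error("userApiKey is required for a fresh analysis");
  }
  rows.set(normalizedUrl, { status: "processing" });

  // Stand-ins for the four stages: scrape → corroborate → audit → score.
  const scraped = { url: normalizedUrl };
  const corroborated = { ...scraped };
  const audited = { ...corroborated };
  const result = { ...audited, score: 0 };

  // Persist and mark completed so future lookups are cache hits.
  rows.set(normalizedUrl, { status: "completed", result });
  return result;
}
```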
## The `hits` counter
Every time a cached result is served, Postcard increments the `hits` column on the `postcards` row using an atomic SQL expression. This counter is a signal of how many times a forensic report has been accessed after its initial analysis. The `hits` column is defined in `src/db/schema.ts` as an integer, defaulting to 0.
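The exact schema and Drizzle call are not reproduced here; the shapes below are assumptions. The key point is that the increment is a single UPDATE statement, which SQLite executes atomically — there is no read-modify-write window for concurrent cache hits to race in:

```typescript
// Hedged sketch — column and call shapes are assumptions, not Postcard's code.
//
// In src/db/schema.ts the column would look roughly like:
//   hits: integer("hits").notNull().default(0),
//
// and the atomic increment in Drizzle roughly like:
//   await db.update(postcards)
//     .set({ hits: sql`${postcards.hits} + 1` })
//     .where(eq(postcards.postUrl, normalizedUrl));
//
// which compiles to a single statement rather than a read followed by a write:
const atomicIncrement =
  "UPDATE postcards SET hits = hits + 1 WHERE post_url = ?";
```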
## Cache duration

There is no TTL (time-to-live). Cached results persist indefinitely until a forced refresh is requested. This is a deliberate design choice: forensic results are point-in-time records. The score reflects the state of the internet at the moment of analysis.

## Force refresh
To re-run the full pipeline and overwrite a cached result, pass `refresh: true`. This bypasses the cache check and runs a fresh analysis regardless of whether a completed result exists.
Via the POST body, set `refresh: true`; the pipeline then re-runs even if a cached result exists.
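For example (only the `refresh` flag is documented above; the `url` field name and the endpoint path are assumptions):

```typescript
// Hypothetical request body for a forced refresh.
const body = JSON.stringify({
  url: "https://x.com/user/status/123",
  refresh: true, // bypass the cache check and re-run the full pipeline
});

// e.g. fetch("/api/postcard", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// });
```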
## Concurrent request deduplication
If a pipeline is already running (`status: "processing"`) for a given URL, new requests attach to the existing pipeline rather than starting a second one. The system queries for an in-flight row first.
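This is the request-coalescing pattern. A minimal in-memory sketch of it (Postcard itself tracks in-flight work via the `status: "processing"` row in SQLite, not a map; names here are hypothetical):

```typescript
// Concurrent callers for the same normalized URL share one in-flight
// pipeline run instead of starting a second one.
const inFlight = new Map<string, Promise<unknown>>();
let pipelineRuns = 0;

function getOrStartPipeline(normalizedUrl: string): Promise<unknown> {
  const existing = inFlight.get(normalizedUrl);
  if (existing) return existing; // attach to the running analysis

  pipelineRuns += 1;
  const run = runPipeline(normalizedUrl).finally(() => {
    inFlight.delete(normalizedUrl); // clear once completed
  });
  inFlight.set(normalizedUrl, run);
  return run;
}

// Stand-in for scrape → corroborate → audit → score.
async function runPipeline(url: string): Promise<unknown> {
  return { url, score: 0 };
}
```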
## Database-level caching vs. HTTP caching
Postcard caches at the database level, not at the HTTP layer. There are no `Cache-Control` headers or CDN edge caches involved. The cache lives entirely in the SQLite database (`local.db` by default).
| Property | Database cache (Postcard) |
|---|---|
| Storage | SQLite via Drizzle ORM |
| Granularity | Per normalized URL |
| TTL | None (indefinite) |
| Invalidation | `refresh: true` parameter |
| Shared across instances | Only if using Turso cloud |
## Self-hosting considerations
The default database is a local SQLite file (`local.db`). This means the cache is local to a single server instance. If you run multiple instances behind a load balancer, each instance maintains its own cache: a request that hits instance A won't benefit from an analysis already completed on instance B.
To share the cache across instances, configure Turso cloud:
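The exact configuration is not documented here; a typical Turso/libSQL setup looks something like the following (variable names are assumptions, not Postcard's documented config):

```shell
# Hypothetical .env sketch: point the app at a shared Turso database instead
# of local.db so every instance behind the load balancer reads and writes
# the same cache.
TURSO_DATABASE_URL="libsql://<your-db>.turso.io"
TURSO_AUTH_TOKEN="<token>"
```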