All articles
Engineering/4 minutes read

What I shipped in my first 60 days at Mintlify

November 6, 2025

DS

Dens Sumesh

Engineering

Share this article


What I shipped in my first 60 days at Mintlify

In my first 60 days at Mintlify, I focused on our fastest-growing pain points: slow analytics, vulnerable AI systems, and infrastructure that couldn't keep pace. Here's what I got to ship!

New in-house analytics platform

One of my favorite past projects was building a ClickHouse based analytics system at Trieve, where I was the first hire. When Mintlify acquired Trieve, I came over with a pretty clear idea of what needed fixing in our analytics infrastructure.

Mintlify had grown quickly enough that our analytics needs outpaced our existing systems. Fixing legacy systems is usually the enemy of growth for startups, but building something new made sense here. We particularly needed incremental materialized views. Both our customers and internal go to market team were suffering from slow queries and lack of data.

We therefore built a custom analytics platform using Kafka and ClickHouse. Kafka receives data from customer sites and ingests them into ClickHouse, an OLAP database where I can make analytical queries across millions of rows. The system now handles millions of events daily across all customer sites.

On top of the analytics platform, I set up CamelAI, an AI agent that connects directly to ClickHouse and answers questions instantly. Our support and success teams can ask things like "which customers use feature X the most this month?" and get immediate answers. This has been a huge unlock for internal teams and lets us provide faster, more actionable customer support.

Enhanced the insights section of the dashboard

Open feedback channels are great until they turn into noise. Customer feedback is valuable, but an open input invites random comments unrelated to your product, test submissions, and spam. All that stuff makes it hard to find the signal.

I built an abuse checker using Gemini Flash Lite, a small but fast LLM that classifies incoming feedback as helpful or not helpful. The model asks itself whether this is actually about the documentation, whether it provides actionable information, and whether it's spam or a test. This filters out spam, tests, and irrelevant comments before they ever reach the dashboard.

Now teams only see feedback that's actionable, feedback they can actually do things with, like updating their docs or seeing what users really think.

I also migrated the insights page itself to use our custom analytics service directly. This made it 5x faster. Response times dropped from 500-1000ms down to 100-300ms. And migrating to our own service eliminated the rate limiting issues that forced us to use "click to load" buttons for referrals and popular pages. The insights page now shows near real-time data without the caching delay.

Protected the AI assistant from junk and attacks

This problem actually reminded me of work I did at Trieve building a typo tolerance system. Both involve fast classification of user input to filter out noise, just applied differently.

Our docs assistant on customer sites is extremely powerful, but it needs guard rails. I built a filter that sits in front of the LLM and blocks irrelevant or malicious prompts before they ever reach the main model. Things like "ignore previous instructions" or "write me a poem" get stopped immediately. Questions like "how do I authenticate with the API?" pass right through.

This gives customers two big benefits. First, they no longer see useless queries in their analytics and can actually see the valuable questions that users ask. That helps them influence future product decisions and improve their documentation. Second, we stop people from getting overcharged for spam attacks against their site and causing overages on their AI system usage.

We're now blocking clearly abusive feedback with slurs and AI questions that are completely off topic from showing up in the dashboard for analytics and feedback.

Building a smarter caching system

Right now I'm working on building a caching system that removes cold starts on Mintlify doc sites entirely and makes every single page super super fast.

This works by statically caching every single page on a customer's doc site and having smart cache invalidation strategies to only invalidate the pages that need to be changed. The interesting challenge is we need to distinguish between Mintlify platform changes and customer deployments. Platform changes could represent changes for every single doc site but aren't important to get out immediately. Customer deployments need to be instantaneous.

I'm building a custom invalidation process that detects what actually changed, invalidates only the affected pages, and pre warms the cache for updated pages before users hit them. Our new caching system lets us discern between those different types of deployments.

Just getting started

In the past two months, I've worked across infrastructure, AI systems, and product features. All of it centers on making Mintlify faster, smarter, and more reliable.

If this sounds interesting, you should apply to join Mintlify and work with me!