Best Practices · 3 minute read

Improved agent experience with llms.txt and content negotiation

January 29, 2026


Peri Langlois

Head of Product Marketing


SUMMARY

This post explains how Mintlify uses content negotiation to improve the agent experience without changing human-facing documentation. By serving clean Markdown to agents, improving llms.txt placement and formatting, and advertising documentation indexes through HTTP headers, Mintlify makes docs cheaper to consume, easier to discover, and more reliable for agents.

Agents don’t browse the web the way humans do. They don’t need layout, styling, client side JavaScript, or decorative markup. For an agent, anything beyond plain Markdown is noise that consumes additional tokens and increases cost.

Content negotiation is the mechanism that lets a server return different representations of the same resource depending on who is asking. Browsers might ask for HTML. Agents can ask for Markdown. Both can be served from the same URL, without duplicating content or maintaining parallel sites.
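As a minimal sketch of what this negotiation looks like server-side, the function below inspects an incoming `Accept` header and decides which representation to serve. The function name and media types chosen are illustrative assumptions, not Mintlify's actual implementation.

```typescript
// Hypothetical helper: decide which representation of a page to serve
// based on the request's Accept header.
function preferredRepresentation(acceptHeader: string): "markdown" | "html" {
  // Split the Accept header into media ranges, dropping q-value parameters.
  const ranges = acceptHeader
    .split(",")
    .map((range) => range.split(";")[0].trim().toLowerCase());
  // Serve Markdown when the client asks for it explicitly.
  if (ranges.includes("text/markdown") || ranges.includes("text/plain")) {
    return "markdown";
  }
  // Browsers and unknown clients fall back to the styled HTML page.
  return "html";
}
```

A browser sending `text/html,application/xhtml+xml` would get HTML from the same URL where an agent sending `text/markdown` would get Markdown.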

Mintlify uses content negotiation to automatically cut agent token usage by 30x while improving discoverability for agents.

Turn your company’s knowledge into agent-ready context with Mintlify.

Why Markdown matters for agents

When an agent requests a documentation page, its goal is to extract meaning, not presentation. HTML responses include tags, attributes, styles, and often scripts that add no semantic value for a model. All of that still counts toward context tokens.

By serving clean Markdown when the request indicates it, Mintlify ensures that agents receive only what they need.

We improved llms.txt discoverability for coding agents at both the content and HTTP layers.

In Markdown responses, the llms.txt index instruction now appears at the top of the page instead of the bottom. It's rendered as a clear blockquote, so agents encounter guidance immediately without parsing the entire document.

At the HTTP layer, Mintlify includes Link and X-Llms-Txt headers on all raw Markdown page responses. The Link header uses standard rel semantics to advertise the llms.txt location, allowing agents to discover the documentation index directly from headers without inspecting the response body.
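The header mechanism can be sketched as a small middleware helper. The header names `Link` and `X-Llms-Txt` come from the post; the exact `rel` value and URL construction below are assumptions for illustration.

```typescript
// Hypothetical middleware sketch: advertise the llms.txt index on raw
// Markdown responses via HTTP headers.
function withLlmsTxtHeaders(
  headers: Record<string, string>,
  docsOrigin: string,
): Record<string, string> {
  const llmsTxtUrl = `${docsOrigin}/llms.txt`;
  return {
    ...headers,
    // Standard Link header with a rel hint, so agents can discover the
    // documentation index without inspecting the response body.
    Link: `<${llmsTxtUrl}>; rel="llms-txt"`,
    // Convenience header carrying the same URL directly.
    "X-Llms-Txt": llmsTxtUrl,
  };
}
```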

Together, these changes enable faster and more reliable llms.txt discovery regardless of how an agent fetches content.

A clearer llms.txt instruction block

Mintlify now prepends a dedicated llms.txt index blockquote to all Markdown pages using getLlmsTxtInstruction.

Previously, this instruction was appended to the bottom of the page. By moving it to the top, agents see guidance immediately, before the rest of the content. This is especially important for models that may truncate or summarize long documents.

The instruction appears before the page body and, when present, before any OpenAPI blocks.
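The prepending step can be sketched as follows. `getLlmsTxtInstruction` is named in the post, but its body and the surrounding render function are assumptions for illustration.

```typescript
// Hypothetical body for getLlmsTxtInstruction: a blockquote pointing
// agents at the site-wide documentation index.
function getLlmsTxtInstruction(llmsTxtUrl: string): string {
  return `> See ${llmsTxtUrl} for a full index of this site's documentation.`;
}

// Sketch of assembling a Markdown response: the instruction comes first,
// before the page body (and before any OpenAPI blocks, when present).
function renderMarkdownPage(body: string, llmsTxtUrl: string): string {
  return `${getLlmsTxtInstruction(llmsTxtUrl)}\n\n${body}`;
}
```

Because the blockquote is the first thing in the response, even a model that truncates the document still sees the index pointer.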

No changes to titles, descriptions, or OpenAPI output

Content negotiation in Mintlify does not change how titles and descriptions are rendered. It also does not affect OpenAPI generation. Human-facing documentation remains exactly the same.

The only difference is where the llms.txt instruction appears in Markdown responses and how those responses are surfaced to agents.

Headers that signal agent friendly content

Mintlify adds Link and X-Llms-Txt headers to page view responses and Markdown rewrites at the middleware level.

These headers are included in:

  • Localhost and single tenant environments
  • Requests that explicitly accept Markdown
  • General page views

This makes it easier for agents and tooling to programmatically discover llms.txt and understand that a Markdown optimized representation is available.
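On the consuming side, an agent or tool can pull the index location straight out of the `Link` header. This is a hypothetical client-side sketch; the `rel="llms-txt"` value it matches is an assumption carried over from the earlier header example.

```typescript
// Hypothetical client-side helper: extract the llms.txt URL from a
// response's Link header, if present.
function llmsTxtFromLink(linkHeader: string | null): string | null {
  if (!linkHeader) return null;
  // Match an entry of the form `<url>; rel="llms-txt"`.
  const match = linkHeader.match(/<([^>]+)>\s*;\s*rel="llms-txt"/);
  return match ? match[1] : null;
}
```

With this, tooling never has to fetch and parse the page body just to find the documentation index.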

One URL, multiple consumers

With content negotiation, documentation authors do not need to choose between humans and agents. The same URL can serve styled HTML to browsers and clean Markdown to models.

Mintlify handles the negotiation, the headers, and the llms.txt integration so teams do not have to build or maintain separate pipelines.

The result is documentation that is cheaper for agents to consume, easier for models to understand, and unchanged for humans reading it in the browser.

Ensure AI agents can reason over accurate documentation with Mintlify.