# OpenAI

[developers.openai.com/api/docs](https://developers.openai.com/api/docs)

- **Overall score:** 73/100 (Grade C)
- **Checks passed:** 17 / 29
- **Last computed:** 2026-05-12

## Components

### Content Discoverability

- **Score:** 79/100 · **Status:** fail
- **Summary:** 1 failed and 1 warning across 6 AFDocs checks.
- **Rationale:** Agents need a clear entry point and crawl map before they can reliably discover the right pages.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **LLMS TXT Exists** — llms.txt found at 3 location(s)
- ✅ **LLMS TXT Valid** — llms.txt follows the proposed structure (H1, blockquote, heading-delimited link sections)
- ⚠️ **LLMS TXT Size** — llms.txt is 82,600 characters (between 50,000 and 100,000; consider splitting). At this size it may be truncated on some agent platforms; if it grows further, split it into nested llms.txt files with a root index under 50,000 characters.
- ✅ **LLMS TXT Links Resolve** — All 15 same-origin sampled links resolve (510 total links)
- ✅ **LLMS TXT Links Markdown** — 15/15 same-origin sampled links point to markdown content (100%)
- ❌ **LLMS TXT Directive** — No llms.txt directive found in any of 14 sampled pages (1 failed to fetch). Add a blockquote near the top of each page (e.g., "> For the complete documentation index, see [llms.txt](/llms.txt)"). The blockquote can be visually hidden with CSS while remaining accessible to agents.
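
  The recommended directive can be embedded in each page's template; a minimal sketch, where the `agent-directive` class name and the hiding rule are assumptions rather than anything the site already defines:

  ```html
  <!-- Hidden from sighted readers but still present in the DOM for agents -->
  <blockquote class="agent-directive">
    For the complete documentation index, see <a href="/llms.txt">llms.txt</a>.
  </blockquote>
  <style>
    /* "visually hidden" pattern: removes the element from view without display:none,
       so text extractors and screen readers still see it */
    .agent-directive {
      position: absolute;
      width: 1px;
      height: 1px;
      overflow: hidden;
      clip: rect(0 0 0 0);
    }
  </style>
  ```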

### Markdown Availability

- **Score:** 17/100 · **Status:** fail
- **Summary:** 1 failed and 1 warning across 2 AFDocs checks.
- **Rationale:** When markdown is available directly, agents spend less effort stripping presentation markup and guessing structure.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ⚠️ **Markdown Url Support** — 4/15 sampled pages support .md URLs (27%), so support is inconsistent. Ensure every documentation page serves markdown when .md is appended to the URL.
- ❌ **Content Negotiation** — Server ignores the Accept: text/markdown header; 0/15 sampled pages return markdown, all falling back to HTML. Some agents (Claude Code, Cursor, OpenCode) request markdown this way, so configure the server to honor content negotiation.
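
  The content-negotiation decision can be sketched as a pure function over the Accept header. This is a simplified illustration (the function name is invented, and a production server should use a full RFC 9110 q-value parser rather than this hand-rolled one):

  ```python
  def prefers_markdown(accept_header: str) -> bool:
      """Return True when text/markdown outranks text/html in an Accept header.

      Simplified sketch: parses comma-separated media ranges and their
      optional q-values, then compares markdown's weight against HTML's.
      """
      prefs = {}
      for part in accept_header.split(","):
          fields = part.strip().split(";")
          media = fields[0].strip().lower()
          q = 1.0  # RFC 9110 default weight when no q parameter is given
          for field in fields[1:]:
              name, _, value = field.strip().partition("=")
              if name.strip() == "q":
                  try:
                      q = float(value)
                  except ValueError:
                      q = 0.0
          prefs[media] = q
      markdown_q = prefs.get("text/markdown", 0.0)
      html_q = max(prefs.get("text/html", 0.0), prefs.get("*/*", 0.0))
      return markdown_q > 0 and markdown_q >= html_q
  ```

  A handler would then serve the pre-rendered .md file when this returns True and the HTML page otherwise, keeping the URL identical in both cases.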

### Page Size and Truncation Risk

- **Score:** 70/100 · **Status:** fail
- **Summary:** 1 failed and 2 warnings across 4 AFDocs checks.
- **Rationale:** Large pages and delayed primary content increase truncation risk and make retrieval less reliable.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Rendering Strategy** — All 15 sampled pages contain server-rendered content
- ⚠️ **Page Size Markdown** — 1 of 4 markdown pages is between 50K and 100K characters (max 53K). Pages in this range may be truncated on some agent platforms or routed through summarization; consider splitting them.
- ❌ **Page Size Html** — 8 of 15 sampled pages convert to over 100K characters of markdown (max 1,044K, 79% boilerplate). Reduce inline CSS/JS, break up large pages, or provide markdown versions as a smaller alternative.
- ⚠️ **Content Start Position** — 13 of 15 sampled pages have documentation content starting 10–50% into the converted output (worst 25%). Inline CSS and boilerplate consume part of the agent's truncation budget before the content begins.
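
  The "split large pages" advice above can be sketched as a budgeted splitter that cuts only at H2 boundaries, so each resulting page stays a coherent unit. A minimal illustration (the function name and 50,000-character budget are assumptions drawn from the size thresholds in this report):

  ```python
  def split_by_h2(markdown: str, budget: int = 50_000) -> list[str]:
      """Split a markdown document into chunks of at most `budget` characters,
      cutting only at H2 ("## ") boundaries. A single section larger than the
      budget is kept whole rather than split mid-section."""
      # First pass: group lines into H2-delimited sections.
      sections, current = [], []
      for line in markdown.splitlines(keepends=True):
          if line.startswith("## ") and current:
              sections.append("".join(current))
              current = []
          current.append(line)
      if current:
          sections.append("".join(current))

      # Second pass: pack consecutive sections into chunks under the budget.
      chunks, buf = [], ""
      for section in sections:
          if buf and len(buf) + len(section) > budget:
              chunks.append(buf)
              buf = ""
          buf += section
      if buf:
          chunks.append(buf)
      return chunks
  ```

  Each chunk can then be published as its own page and indexed from a short parent page, which also helps the content-start-position metric by shrinking per-page boilerplate relative to content.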

### Content Structure

- **Score:** 100/100 · **Status:** pass
- **Summary:** 3 AFDocs checks pass.
- **Rationale:** Predictable sections, valid code fences, and serialized tabs make the content easier for agents to parse correctly.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Tabbed Content Serialization** — No tabbed content detected across 15 sampled pages
- ✅ **Section Header Quality** — No tabbed content found; header quality check not applicable
- ✅ **Markdown Code Fence Validity** — All 33 code fences properly closed across 7 pages

### URL Stability and Redirects

- **Score:** 70/100 · **Status:** fail
- **Summary:** 1 failed across 2 AFDocs checks.
- **Rationale:** Stable URLs and sane redirect behavior prevent retrieval drift and broken tool references.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ❌ **Http Status Codes** — 7 of 15 sampled pages return 200 for non-existent URLs (soft 404). Agents then try to extract information from the error page instead of recognizing that the page is missing; configure the server to return 404 for pages that don't exist.
- ✅ **Redirect Behavior** — All 9 redirect(s) across 15 sampled pages are same-host HTTP redirects
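
  The soft-404 failure above can be reproduced locally by probing a deliberately non-existent URL and classifying the response. A rough sketch, where the function names and the error-phrase list are illustrative rather than the checker's actual heuristics:

  ```python
  import secrets
  from urllib.parse import urljoin

  def random_probe_path(base_url: str) -> str:
      """Build a URL that should not exist, for probing soft-404 behavior."""
      return urljoin(base_url, f"/probe-{secrets.token_hex(8)}")

  def looks_like_soft_404(status: int, body: str) -> bool:
      """Heuristic: a 200 response whose body reads like an error page
      is a soft 404. A real 404 status is the correct behavior."""
      if status != 200:
          return False
      text = body.lower()
      return any(phrase in text for phrase in
                 ("page not found", "404", "doesn't exist", "no longer available"))
  ```

  If the probe returns 200 with error-page content, the fix belongs in the server or framework routing layer: unmatched paths should produce a real 404 status, not a rendered "not found" page with a 200.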

### Observability and Content Health

- **Score:** 84/100 · **Status:** fail
- **Summary:** 1 failed and 1 skipped across 3 AFDocs checks.
- **Rationale:** Coverage, parity, and cache behavior determine whether agents can trust the content they retrieve.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ⏭️ **LLMS TXT Freshness** — No sitemap found; cannot assess llms.txt freshness without a sitemap as ground truth
- ❌ **Markdown Content Parity** — 1 of 4 pages has substantive content differences between markdown and HTML (avg 12% missing). Agents receiving the markdown version get outdated or incomplete content; regenerate markdown from source or fix the build pipeline.
- ✅ **Cache Header Hygiene** — All 18 endpoints have appropriate cache headers
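
  A parity regression like the one flagged above can be caught in CI with a rough word-overlap check. This is a deliberately crude sketch (the function name is invented, and a real pipeline would diff normalized block structure rather than bags of words):

  ```python
  import re

  def missing_fraction(markdown: str, html_text: str) -> float:
      """Fraction of distinct words in the HTML-derived text that never
      appear in the markdown version. 0.0 means full word-level parity."""
      def words(text: str) -> set[str]:
          return set(re.findall(r"[a-z0-9]+", text.lower()))
      html_words = words(html_text)
      if not html_words:
          return 0.0
      return len(html_words - words(markdown)) / len(html_words)
  ```

  Running this per page and failing the build when the fraction exceeds a small threshold (say 5%) turns silent markdown drift into a visible pipeline error.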

### Authentication and Access

- **Score:** 100/100 · **Status:** partial
- **Summary:** 1 skipped across 2 AFDocs checks.
- **Rationale:** Agents need either public access or a clear alternative path when documentation is gated behind auth.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Auth Gate Detection** — All 15 sampled pages are publicly accessible
- ⏭️ **Auth Alternative Access** — All docs pages are publicly accessible; no alternative access paths needed

### Full Content Discoverability

- **Score:** 100/100 · **Status:** pass
- **Summary:** llms-full.txt passes all checks.
- **Rationale:** A full-document snapshot gives long-context agents a single canonical corpus to ingest without repeated crawling.
- **Reference:** [llms-full.txt guide](https://www.mintlify.com/docs/ai/llmstxt#llms-full-txt)

**Checks**

- ✅ **LLMS Full Exists** — Found llms-full.txt.
- ✅ **LLMS Full Size** — llms-full.txt size is within the expected range.
- ✅ **LLMS Full Valid** — llms-full.txt has a recognizable markdown structure.
- ✅ **LLMS Full Links Resolve** — llms-full.txt links resolve successfully.

### Agent Skills

- **Score:** 0/100 · **Status:** fail
- **Summary:** skill.md has 1 failing check.
- **Rationale:** Agent skills provide product-specific operating guidance that plain documentation pages do not encode on their own.
- **Reference:** [skill.md guide](https://www.mintlify.com/docs/ai/skillmd)

**Checks**

- ❌ **Skill MD** — No agent skill definition was discovered.

### MCP Server

- **Score:** 100/100 · **Status:** pass
- **Summary:** MCP passes all checks.
- **Rationale:** A discoverable MCP server lets agents use first-class tools instead of scraping pages and inferring behavior.
- **Reference:** [MCP guide](https://www.mintlify.com/docs/ai/model-context-protocol)

**Checks**

- ✅ **MCP Server Discoverable** — Found an MCP server.
- ✅ **MCP Tool Count** — The MCP server exposes tools.
