# Cerebras Inference

[inference-docs.cerebras.ai](https://inference-docs.cerebras.ai)

- **Overall score:** 87/100 (Grade B)
- **Checks passed:** 22 / 29
- **Last computed:** 2026-04-28

## Components

### Content Discoverability

- **Score:** 81/100 · **Status:** fail
- **Summary:** 1 failed check and 1 warning across 6 AFDocs checks.
- **Rationale:** Agents need a clear entry point and crawl map before they can reliably discover the right pages.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **LLMS TXT Exists** — llms.txt found at 1 location(s)
- ⚠️ **LLMS TXT Valid** — llms.txt at https://inference-docs.cerebras.ai/llms.txt contains parseable links but no blockquote summary, so it doesn't fully follow the proposed structure. Add an H1 title as the first line and a blockquote summary (lines starting with `>`) to improve agent parsing.
- ✅ **LLMS TXT Size** — llms.txt is 12,721 characters (under 50,000 threshold)
- ✅ **LLMS TXT Links Resolve** — All 15 same-origin sampled links resolve (79 total links)
- ✅ **LLMS TXT Links Markdown** — 15/15 same-origin sampled links point to markdown content (100%) (2 external links excluded)
- ❌ **LLMS TXT Directive** — No agent-facing directive pointing to llms.txt was detected on any of 15 sampled pages. Add a blockquote near the top of each page (e.g., `> For the complete documentation index, see [llms.txt](/llms.txt)`); it can be visually hidden with CSS while remaining accessible to agents.
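
Taken together, the two flagged checks above amount to two small changes. A minimal llms.txt skeleton with the proposed H1-plus-blockquote structure might look like this (the titles and paths are illustrative, not taken from the live file):

```markdown
# Cerebras Inference

> Documentation for the Cerebras Inference API: quickstart guides,
> endpoint references, and integration examples.

## Docs

- [Quickstart](/quickstart.md): Make your first inference request
- [API Reference](/api-reference.md): Endpoints, parameters, and errors
```

And the missing per-page directive can be a blockquote near the top of each page, visually hidden with CSS while remaining accessible to agents (the class name is a placeholder):

```html
<blockquote class="llms-directive" style="position: absolute; left: -9999px;">
  For the complete documentation index, see <a href="/llms.txt">llms.txt</a>.
</blockquote>
```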

### Markdown Availability

- **Score:** 100/100 · **Status:** pass
- **Summary:** 2 AFDocs checks pass.
- **Rationale:** When markdown is available directly, agents spend less effort stripping presentation markup and guessing structure.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Markdown Url Support** — 15/15 sampled pages support .md URLs (100%)
- ✅ **Content Negotiation** — 15/15 sampled pages support content negotiation (100%)
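
Both mechanisms above can be exercised from a client. The sketch below shows the two request shapes the checks describe; the helper name, the `/index` fallback, and the example paths are assumptions for illustration, not part of the audit:

```python
from urllib.parse import urlsplit, urlunsplit

def markdown_url(page_url: str) -> str:
    """Derive the .md rendition URL for a docs page, assuming the common
    convention of appending .md to the page path (as tested above)."""
    parts = urlsplit(page_url)
    # Trailing slashes are dropped; a bare root path falls back to /index
    # (hypothetical fallback -- sites vary on how they name the root page).
    path = parts.path.rstrip("/") or "/index"
    return urlunsplit((parts.scheme, parts.netloc, path + ".md", "", ""))

# Content negotiation instead asks for markdown at the original URL:
#   GET /quickstart HTTP/1.1
#   Accept: text/markdown
```

For example, `markdown_url("https://inference-docs.cerebras.ai/quickstart")` yields `https://inference-docs.cerebras.ai/quickstart.md`.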

### Page Size and Truncation Risk

- **Score:** 75/100 · **Status:** fail
- **Summary:** 1 failed check and 1 warning across 4 AFDocs checks.
- **Rationale:** Large pages and delayed primary content increase truncation risk and make retrieval less reliable.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Rendering Strategy** — All 15 sampled pages contain server-rendered content
- ✅ **Page Size Markdown** — All 15 pages under 50K chars (median 5K, max 34K)
- ❌ **Page Size Html** — All 15 sampled pages convert to over 100K characters of markdown (max 805K, 32% boilerplate). Reduce inline CSS/JS, break up large pages, or provide markdown versions as a smaller alternative.
- ⚠️ **Content Start Position** — 1 of 15 sampled pages has documentation content starting 10–50% into the converted output (worst 11%). Inline CSS or boilerplate consumes part of the agent's truncation budget before content begins.
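
The content-start measurement above can be approximated with a simple heuristic: find where the first markdown heading appears in the converted output and report how much of the document precedes it. This is a sketch of the idea under that assumption, not the checker's actual implementation:

```python
def content_start_ratio(converted_markdown: str) -> float:
    """Fraction of the converted output consumed before the first heading.
    Ratios in the 0.10-0.50 band correspond to the warning above."""
    if not converted_markdown:
        return 0.0
    offset = 0
    for line in converted_markdown.splitlines(keepends=True):
        if line.lstrip().startswith("#"):
            # Heading found: report how far into the document it sits.
            return offset / len(converted_markdown)
        offset += len(line)
    return 1.0  # no heading at all: everything counts as pre-content
```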

### Content Structure

- **Score:** 100/100 · **Status:** partial
- **Summary:** 1 skipped check across 3 AFDocs checks.
- **Rationale:** Predictable sections, valid code fences, and serialized tabs make the content easier for agents to parse correctly.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Tabbed Content Serialization** — 34 tab group(s) across 10 of 15 sampled pages; all serialize under 50K chars
- ⏭️ **Section Header Quality** — 10 page(s) with tabs found, but no section headers inside tab panels to evaluate
- ✅ **Markdown Code Fence Validity** — All 63 code fences properly closed across 16 pages

### URL Stability and Redirects

- **Score:** 100/100 · **Status:** pass
- **Summary:** 2 AFDocs checks pass.
- **Rationale:** Stable URLs and sane redirect behavior prevent retrieval drift and broken tool references.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Http Status Codes** — All 15 sampled pages return proper error codes for bad URLs
- ✅ **Redirect Behavior** — No redirects detected across 15 sampled pages

### Observability and Content Health

- **Score:** 96/100 · **Status:** partial
- **Summary:** 1 warning across 3 AFDocs checks.
- **Rationale:** Coverage, parity, and cache behavior determine whether agents can trust the content they retrieve.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **LLMS TXT Freshness** — llms.txt covers 100% of 71 sitemap doc pages; 6 llms.txt links not in sitemap (may indicate stale links or incomplete sitemap)
- ⚠️ **Markdown Content Parity** — 6 of 15 pages have minor content differences between their markdown and HTML versions. Review them for formatting variations.
- ✅ **Cache Header Hygiene** — All 16 endpoints have appropriate cache headers
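
As a rough illustration of what "appropriate cache headers" means for docs endpoints: responses should be cacheable yet revalidatable, rather than `no-store` or entirely unconstrained. A hedged sketch of such a check (the exact policy this audit applies is not specified here):

```python
def cache_headers_ok(headers: dict[str, str]) -> bool:
    """Heuristic: response is cacheable (max-age or ETag present)
    and not marked no-store. Assumed policy, for illustration only."""
    lowered = {k.lower(): v.lower() for k, v in headers.items()}
    cache_control = lowered.get("cache-control", "")
    if "no-store" in cache_control:
        return False  # agents could never reuse this response
    return "max-age" in cache_control or "etag" in lowered
```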

### Authentication and Access

- **Score:** 100/100 · **Status:** partial
- **Summary:** 1 skipped check across 2 AFDocs checks.
- **Rationale:** Agents need either public access or a clear alternative path when documentation is gated behind auth.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Auth Gate Detection** — All 15 sampled pages are publicly accessible
- ⏭️ **Auth Alternative Access** — All docs pages are publicly accessible; no alternative access paths needed

### Full Content Discoverability

- **Score:** 100/100 · **Status:** pass
- **Summary:** llms-full.txt passes all checks.
- **Rationale:** A full-document snapshot gives long-context agents a single canonical corpus to ingest without repeated crawling.
- **Reference:** [llms-full.txt guide](https://www.mintlify.com/docs/ai/llmstxt#llms-full-txt)

**Checks**

- ✅ **LLMS Full Exists** — Found llms-full.txt.
- ✅ **LLMS Full Size** — llms-full.txt size is within the expected range.
- ✅ **LLMS Full Valid** — llms-full.txt has a recognizable markdown structure.
- ✅ **LLMS Full Links Resolve** — llms-full.txt links resolve successfully.

### Agent Skills

- **Score:** 100/100 · **Status:** pass
- **Summary:** skill.md passes all checks.
- **Rationale:** Agent skills provide product-specific operating guidance that plain documentation pages do not encode on their own.
- **Reference:** [skill.md guide](https://www.mintlify.com/docs/ai/skillmd)

**Checks**

- ✅ **Skill MD** — Found an agent skill definition.
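
For readers unfamiliar with the format: an agent skill is typically a `skill.md` file with YAML frontmatter naming the skill and describing when to use it, followed by operating instructions. A generic skeleton (illustrative only; see the linked guide for the exact fields, and note the name and sections below are placeholders, not Cerebras's actual skill):

```markdown
---
name: cerebras-inference
description: Use when making or debugging requests to the Cerebras Inference API.
---

# Cerebras Inference

## When to use this skill
...

## Instructions
...
```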

### MCP Server

- **Score:** 100/100 · **Status:** pass
- **Summary:** MCP passes all checks.
- **Rationale:** A discoverable MCP server lets agents use first-class tools instead of scraping pages and inferring behavior.
- **Reference:** [MCP guide](https://www.mintlify.com/docs/ai/model-context-protocol)

**Checks**

- ✅ **MCP Server Discoverable** — Found an MCP server.
- ✅ **MCP Tool Count** — The MCP server exposes tools.
