# Redis

[redis.io/docs](https://redis.io/docs)

- **Overall score:** 73/100 (Grade C)
- **Checks passed:** 9 / 29
- **Last computed:** 2026-05-11

## Components

### Content Discoverability

- **Score:** 79/100 · **Status:** fail
- **Summary:** 2 failed across 6 AFDocs checks.
- **Rationale:** Agents need a clear entry point and crawl map before they can reliably discover the right pages.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **LLMS TXT Exists** — llms.txt found at 1 location(s)
- ✅ **LLMS TXT Valid** — llms.txt follows the proposed structure (H1, blockquote, heading-delimited link sections)
- ✅ **LLMS TXT Size** — llms.txt is 46,830 characters (under 50,000 threshold)
- ❌ **LLMS TXT Links Resolve** — Only 12/15 same-origin sampled links resolve (80%); 3 of 15 links in your llms.txt return errors. A stale llms.txt with broken links is worse than no llms.txt at all, because it sends agents down dead ends with high confidence.
- ✅ **LLMS TXT Links Markdown** — 14/15 same-origin sampled links point to markdown content (93%)
- ❌ **LLMS TXT Directive** — No agent-facing directive pointing to llms.txt was found on any of the 15 sampled pages. Add a blockquote near the top of each page (e.g., "> For the complete documentation index, see [llms.txt](/llms.txt)"); it can be visually hidden with CSS while remaining accessible to agents.
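The link-resolution check above can be approximated with a short script. The helper names here (`same_origin_links`, `resolve_rate`) are illustrative, not part of any AFDocs tooling, and a real check would fetch each URL over the network rather than take a status map as input:

```python
import re
from urllib.parse import urlparse

# Markdown link targets: [label](target)
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")

def same_origin_links(llms_txt: str, origin: str) -> list[str]:
    """Extract link targets from llms.txt markdown, keeping only those
    on the given origin (relative links count as same-origin)."""
    links = []
    for target in LINK_RE.findall(llms_txt):
        parsed = urlparse(target)
        if not parsed.netloc or parsed.netloc == origin:
            links.append(target)
    return links

def resolve_rate(statuses: dict[str, int]) -> float:
    """Fraction of sampled links that returned a 2xx status."""
    ok = sum(1 for code in statuses.values() if 200 <= code < 300)
    return ok / len(statuses) if statuses else 0.0
```

Running this against the generated llms.txt in CI would catch stale links before agents do.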

### Markdown Availability

- **Score:** 0/100 · **Status:** fail
- **Summary:** 2 failed across 2 AFDocs checks.
- **Rationale:** When markdown is available directly, agents spend less effort stripping presentation markup and guessing structure.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ❌ **Markdown Url Support** — No sampled pages return markdown when `.md` is appended to the URL (0/15 tested). Configure your docs platform to serve `.md` variants for all documentation pages.
- ❌ **Content Negotiation** — The server ignores the `Accept: text/markdown` header and returns HTML (0/15 sampled pages return markdown). Some agents (Claude Code, Cursor, OpenCode) request markdown this way; configure your server to honor content negotiation.
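A minimal sketch of the negotiation logic the check expects, framework-agnostic and deliberately simplified: a production server should use its framework's content-negotiation support and honor `q`-values per RFC 9110, which this function ignores.

```python
def negotiate(accept_header: str) -> str:
    """Pick a response media type from an Accept header.
    Prefers text/markdown whenever the client lists it; falls back
    to text/html. Quality (q=) parameters are stripped, not ranked.
    """
    offered = [part.split(";")[0].strip().lower()
               for part in accept_header.split(",")]
    if "text/markdown" in offered:
        return "text/markdown"
    return "text/html"
```

Wired into a request handler, this is enough to stop returning HTML to agents that explicitly ask for markdown.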

### Page Size and Truncation Risk

- **Score:** 82/100 · **Status:** fail
- **Summary:** 2 failed and 1 skipped across 4 AFDocs checks.
- **Rationale:** Large pages and delayed primary content increase truncation risk and make retrieval less reliable.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Rendering Strategy** — All 15 sampled pages contain server-rendered content
- ⏭️ **Page Size Markdown** — Skipped: dependency check did not pass
- ❌ **Page Size Html** — 2 of 15 sampled pages convert to over 100K characters of markdown (max 177K, 69% boilerplate). Reduce inline CSS/JS, break up large pages, or provide markdown versions as a smaller alternative.
- ❌ **Content Start Position** — 2 of 15 sampled pages have primary content starting past 50% of the converted output (worst 102%). Agents that truncate early may never see the documentation content; move or remove inline CSS/JS that precedes the content area.
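The content-start measurement can be sketched as below. This is an assumption about how the metric works: it locates a known content marker (e.g. the page's H1) in the converted output and reports its offset as a fraction of the total length. The report's "102%" figure presumably measures against a truncation threshold rather than the full output, which this sketch does not model.

```python
def content_start_ratio(converted: str, marker: str) -> float:
    """Offset of the first real content (identified by a known marker
    such as the page's H1) as a fraction of the converted output.
    Values above 0.5 would fail a 50%-start threshold; infinity means
    the marker never appears at all."""
    idx = converted.find(marker)
    if idx < 0:
        return float("inf")  # content marker absent entirely
    return idx / len(converted)
```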

### Content Structure

- **Score:** 100/100 · **Status:** partial
- **Summary:** 1 skipped across 3 AFDocs checks.
- **Rationale:** Predictable sections, valid code fences, and serialized tabs make the content easier for agents to parse correctly.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Tabbed Content Serialization** — 4 tab group(s) across 2 of 15 sampled pages; all serialize under 50K chars
- ⏭️ **Section Header Quality** — 2 page(s) with tabs found, but no section headers inside tab panels to evaluate
- ✅ **Markdown Code Fence Validity** — 0 code fences found across 1 sampled page; none malformed
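Fence validity reduces to a toggle count: each line beginning with ``` opens or closes a fence, so an odd total means one was left open. A rough sketch (real markdown parsers also handle tilde fences and indentation rules, which this ignores):

```python
def unclosed_fences(markdown: str) -> int:
    """Return 1 if the document ends inside an open code fence, else 0.
    Each line starting with ``` toggles fence state; this skips tilde
    fences and CommonMark's indentation edge cases."""
    inside = False
    for line in markdown.splitlines():
        if line.lstrip().startswith("```"):
            inside = not inside
    return int(inside)
```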

### URL Stability and Redirects

- **Score:** 95/100 · **Status:** fail
- **Summary:** 1 failed across 2 AFDocs checks.
- **Rationale:** Stable URLs and sane redirect behavior prevent retrieval drift and broken tool references.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Http Status Codes** — All 15 sampled pages return proper error codes for bad URLs
- ❌ **Redirect Behavior** — JavaScript-based redirects detected on 2 of 15 sampled pages. Agents don't execute JavaScript and will not follow these redirects; use HTTP 301/302 redirects instead.
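Detecting the offending pages can be done heuristically by scanning the HTML for common client-side redirect patterns. The pattern list below is an assumption, not exhaustive; it covers `window.location` assignment, `location.replace()`, and meta refresh:

```python
import re

# Common client-side redirect idioms; a heuristic, not a full parser.
JS_REDIRECT_RE = re.compile(
    r"window\.location(?:\.href)?\s*=|location\.replace\(|"
    r"http-equiv=[\"']refresh[\"']",
    re.IGNORECASE,
)

def has_js_redirect(html: str) -> bool:
    """Flag pages that redirect via JavaScript or meta refresh;
    agents that never execute JS will not follow either."""
    return bool(JS_REDIRECT_RE.search(html))
```

Pages it flags should be converted to server-side 301/302 responses, which every HTTP client follows.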

### Observability and Content Health

- **Score:** 18/100 · **Status:** fail
- **Summary:** 1 failed, 1 warning, and 1 skipped across 3 AFDocs checks.
- **Rationale:** Coverage, parity, and cache behavior determine whether agents can trust the content they retrieve.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ❌ **LLMS TXT Freshness** — llms.txt covers only 51 of 2,388 sitemap doc pages (2%), well under the 80% threshold; 2,337 live pages are missing from the index, and 6 llms.txt links are not in the sitemap (possible stale links or an incomplete sitemap). Regenerate llms.txt from your sitemap or build pipeline.
- ⏭️ **Markdown Content Parity** — Skipped: dependency check did not pass
- ⚠️ **Cache Header Hygiene** — 16 of 16 endpoints have moderate cache lifetimes (1–24 hours), so updates to llms.txt or markdown content may take hours to propagate.
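The freshness comparison above is a set operation between llms.txt entries and the sitemap. A sketch, assuming both have been fetched and normalized to the same URL form (the function name is illustrative):

```python
def llms_coverage(llms_links: set[str], sitemap_urls: set[str]) -> dict:
    """Compare llms.txt entries against the sitemap: coverage fraction,
    sitemap pages missing from llms.txt, and llms.txt links absent from
    the sitemap (possible stale links or an incomplete sitemap)."""
    covered = llms_links & sitemap_urls
    return {
        "coverage": len(covered) / len(sitemap_urls) if sitemap_urls else 0.0,
        "missing": sitemap_urls - llms_links,
        "stale": llms_links - sitemap_urls,
    }
```

Regenerating llms.txt from the same source that produces the sitemap keeps both sets in lockstep and makes this check pass by construction.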

### Authentication and Access

- **Score:** 100/100 · **Status:** partial
- **Summary:** 1 skipped across 2 AFDocs checks.
- **Rationale:** Agents need either public access or a clear alternative path when documentation is gated behind auth.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Auth Gate Detection** — All 15 sampled pages are publicly accessible
- ⏭️ **Auth Alternative Access** — All docs pages are publicly accessible; no alternative access paths needed

### Full Content Discoverability

- **Score:** 0/100 · **Status:** fail
- **Summary:** llms-full.txt has 1 failing check.
- **Rationale:** A full-document snapshot gives long-context agents a single canonical corpus to ingest without repeated crawling.
- **Reference:** [llms-full.txt guide](https://www.mintlify.com/docs/ai/llmstxt#llms-full-txt)

**Checks**

- ❌ **LLMS Full Exists** — No llms-full.txt file was discovered.
- ⏭️ **LLMS Full Size** — Skipped because llms-full.txt was not found.
- ⏭️ **LLMS Full Valid** — Skipped because llms-full.txt was not found.
- ⏭️ **LLMS Full Links Resolve** — Skipped because llms-full.txt was not found.

### Agent Skills

- **Score:** 0/100 · **Status:** fail
- **Summary:** skill.md has 1 failing check.
- **Rationale:** Agent skills provide product-specific operating guidance that plain documentation pages do not encode on their own.
- **Reference:** [skill.md guide](https://www.mintlify.com/docs/ai/skillmd)

**Checks**

- ❌ **Skill MD** — No agent skill definition was discovered.

### MCP Server

- **Score:** 0/100 · **Status:** fail
- **Summary:** MCP has 1 failing check.
- **Rationale:** A discoverable MCP server lets agents use first-class tools instead of scraping pages and inferring behavior.
- **Reference:** [MCP guide](https://www.mintlify.com/docs/ai/model-context-protocol)

**Checks**

- ❌ **MCP Server Discoverable** — No MCP server was discovered at the expected endpoints.
- ⏭️ **MCP Tool Count** — Skipped because the MCP server was not discoverable.
