# Yamada Ui

[yamada-ui.com](https://yamada-ui.com)

- **Overall score:** 79/100 (Grade C)
- **Checks passed:** 14 / 29
- **Last computed:** 2026-05-11

## Components

### Content Discoverability

- **Score:** 84/100 · **Status:** partial
- **Summary:** 1 warning across 6 AFDocs checks.
- **Rationale:** Agents need a clear entry point and crawl map before they can reliably discover the right pages.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **LLMS TXT Exists** — llms.txt found at 1 location(s)
- ✅ **LLMS TXT Valid** — llms.txt follows the proposed structure (H1, blockquote, heading-delimited link sections)
- ✅ **LLMS TXT Size** — llms.txt is 38,084 characters (under 50,000 threshold)
- ✅ **LLMS TXT Links Resolve** — All 15 same-origin sampled links resolve (256 total links)
- ✅ **LLMS TXT Links Markdown** — 15/15 same-origin sampled links point to markdown content (100%)
- ⚠️ **LLMS TXT Directive** — llms.txt directive found in 1 of 15 sampled pages (14 missing). The directive is present on some pages but missing or buried deep on others; ensure it appears near the top of every documentation page.
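
For reference, the structure the validity check accepts (H1, blockquote, heading-delimited link sections) looks roughly like the skeleton below. The summary text and link entries are illustrative placeholders, not drawn from the live file:

```markdown
# Yamada UI

> Hypothetical one-sentence summary of the project goes in this blockquote.

## Components

- [Example Component](https://yamada-ui.com/docs/example.md): illustrative link entry
- [Another Component](https://yamada-ui.com/docs/another.md): illustrative link entry

## Guides

- [Example Guide](https://yamada-ui.com/docs/example-guide.md): illustrative link entry
```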

### Markdown Availability

- **Score:** 59/100 · **Status:** fail
- **Summary:** 1 failure across 2 AFDocs checks.
- **Rationale:** When markdown is available directly, agents spend less effort stripping presentation markup and guessing structure.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Markdown URL Support** — 14/15 sampled pages support .md URLs (93%)
- ❌ **Content Negotiation** — Server ignores the `Accept: text/markdown` header and returns HTML (0/15 sampled pages return markdown). Some agents (Claude Code, Cursor, OpenCode) request markdown this way; configure the server to honor content negotiation.
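
A minimal sketch of the negotiation logic the failing check asks for, assuming the server can route between a rendered HTML page and its markdown source. Quality values in the `Accept` header are deliberately ignored for brevity, which still covers the common agent request `Accept: text/markdown`:

```python
# Content-negotiation sketch: given an Accept header, decide whether to
# serve the markdown source or the rendered HTML page. Parsing is
# deliberately simple (no q-value weighting).

def negotiate(accept_header: str) -> str:
    """Return the content type the server should respond with."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "text/markdown" in accepted:
        return "text/markdown"
    return "text/html"
```

A real deployment would wire this into the docs server's request handler or middleware, so that agents sending `Accept: text/markdown` receive the same `.md` source that the URL-support check above already found.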

### Page Size and Truncation Risk

- **Score:** 62/100 · **Status:** fail
- **Summary:** 1 failure and 2 warnings across 4 AFDocs checks.
- **Rationale:** Large pages and delayed primary content increase truncation risk and make retrieval less reliable.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Rendering Strategy** — All 15 sampled pages contain server-rendered content
- ⚠️ **Page Size Markdown** — 2 of 14 markdown pages are between 50K and 100K characters (max 66K). These may be truncated on some agent platforms or routed through summarization; consider splitting large pages.
- ❌ **Page Size HTML** — 15 of 15 sampled pages convert to over 100K characters of markdown (max 956K, 16% boilerplate). Reduce inline CSS/JS, break up large pages, or provide markdown versions as a smaller alternative.
- ⚠️ **Content Start Position** — 15 of 15 sampled pages have documentation content starting 10–50% into the converted output (worst 42%). Inline CSS or boilerplate consumes part of the agent's truncation budget before the content begins.
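
The content-start metric can be approximated like this: convert the page to markdown, then measure how far into the output the first top-level heading appears. Treating an H1 as the start of real content is an assumption on my part; the actual checker may key off a different marker:

```python
def content_start_fraction(converted: str, marker: str = "# ") -> float:
    """Fraction of the converted output consumed before the first
    top-level heading. 0.0 means content starts immediately; values
    above ~0.1 mean boilerplate eats into the truncation budget."""
    idx = converted.find(marker)
    if idx == -1:
        return 1.0  # no heading found: treat the whole page as preamble
    return idx / len(converted)
```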

### Content Structure

- **Score:** 93/100 · **Status:** fail
- **Summary:** 1 failure and 1 warning across 3 AFDocs checks.
- **Rationale:** Predictable sections, valid code fences, and serialized tabs make the content easier for agents to parse correctly.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ⚠️ **Tabbed Content Serialization** — 114 tab group(s) found; tabbed content on 3 pages serializes to 50K–100K characters (worst page: 85K). Consider breaking tab variants into separate pages or providing a mechanism for agents to request specific variants.
- ❌ **Section Header Quality** — 1 of 1 page(s) with tab headers doesn't distinguish between variants (e.g. "第1部 ファントムブラッド" ["Part 1: Phantom Blood"] repeats across 9 tab groups). Over 50% of headers are generic across tab variants; when serialized, agents cannot tell which section belongs to which variant.
- ✅ **Markdown Code Fence Validity** — All 170 code fences properly closed across 15 pages
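
The fence-validity rule the passing check verifies (every opening fence has a matching close) can be reproduced with a simple toggle scan; a minimal sketch that ignores edge cases such as longer fence runs or tilde fences:

```python
def fences_balanced(markdown_text: str) -> bool:
    """Walk the page line by line, toggling fence state on each
    fence delimiter. Returns False when a fence is left open, which
    makes agents misread prose as code (or vice versa)."""
    open_fence = False
    for line in markdown_text.splitlines():
        if line.lstrip().startswith("```"):
            open_fence = not open_fence
    return not open_fence
```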

### URL Stability and Redirects

- **Score:** 96/100 · **Status:** fail
- **Summary:** 1 failure across 2 AFDocs checks.
- **Rationale:** Stable URLs and sane redirect behavior prevent retrieval drift and broken tool references.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ❌ **HTTP Status Codes** — 1 of 15 sampled pages returns 200 for non-existent URLs (soft 404). Agents try to extract information from the error page content instead of recognizing that the page is missing; configure the server to return 404 for pages that don't exist.
- ✅ **Redirect Behavior** — All 1 redirect(s) across 15 sampled pages are same-host HTTP redirects
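
A soft-404 probe can be sketched as follows. The probe slug is made up (any random path that cannot exist works), and only the status classification matters:

```python
import urllib.request
from urllib.error import HTTPError

PROBE_PATH = "/afdocs-probe-page-that-should-not-exist"  # hypothetical slug

def classify_missing_page(status: int) -> str:
    """Interpret the status a server returns for a nonexistent URL."""
    if status == 200:
        return "soft-404"   # agents will try to read the error page as content
    if status in (404, 410):
        return "hard-404"   # correct: agents know the page is gone
    return "other"

def probe(base_url: str) -> str:
    """Fetch a URL that cannot exist and classify the response."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + PROBE_PATH) as resp:
            return classify_missing_page(resp.status)
    except HTTPError as err:
        return classify_missing_page(err.code)
```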

### Observability and Content Health

- **Score:** 67/100 · **Status:** fail
- **Summary:** 2 failures across 3 AFDocs checks.
- **Rationale:** Coverage, parity, and cache behavior determine whether agents can trust the content they retrieve.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ❌ **LLMS TXT Freshness** — llms.txt covers 251/610 sitemap doc pages (41%), below the 80% threshold: 359 live pages are missing from the index, and 10 llms.txt links are not in the sitemap (which may indicate stale links or an incomplete sitemap). Regenerate llms.txt from the sitemap or build pipeline.
- ❌ **Markdown Content Parity** — 1 of 14 pages has substantive content differences between markdown and HTML (avg 7% missing). Agents receiving the markdown version get outdated or incomplete content; regenerate markdown from source or fix the build pipeline.
- ✅ **Cache Header Hygiene** — All 16 endpoints have appropriate cache headers
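
The freshness comparison above reduces to set arithmetic between the sitemap's URL list and the links in llms.txt. A minimal sketch, with URL normalization (trailing slashes, casing) omitted:

```python
def coverage_report(sitemap_urls: set[str], llms_txt_urls: set[str]) -> dict:
    """Compare the pages listed in llms.txt against the sitemap.
    Coverage below 0.8 fails the freshness check described above."""
    missing = sitemap_urls - llms_txt_urls   # live pages absent from the index
    stale = llms_txt_urls - sitemap_urls     # indexed links not in the sitemap
    covered = len(sitemap_urls & llms_txt_urls)
    coverage = covered / len(sitemap_urls) if sitemap_urls else 1.0
    return {"coverage": coverage, "missing": len(missing), "stale": len(stale)}
```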

### Authentication and Access

- **Score:** 100/100 · **Status:** partial
- **Summary:** 1 skipped across 2 AFDocs checks.
- **Rationale:** Agents need either public access or a clear alternative path when documentation is gated behind auth.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Auth Gate Detection** — All 15 sampled pages are publicly accessible
- ⏭️ **Auth Alternative Access** — All docs pages are publicly accessible; no alternative access paths needed

### Full Content Discoverability

- **Score:** 75/100 · **Status:** fail
- **Summary:** llms-full.txt has 1 failing check.
- **Rationale:** A full-document snapshot gives long-context agents a single canonical corpus to ingest without repeated crawling.
- **Reference:** [llms-full.txt guide](https://www.mintlify.com/docs/ai/llmstxt#llms-full-txt)

**Checks**

- ✅ **LLMS Full Exists** — Found llms-full.txt.
- ✅ **LLMS Full Size** — llms-full.txt size is within the expected range.
- ❌ **LLMS Full Valid** — llms-full.txt is missing the expected markdown structure.
- ✅ **LLMS Full Links Resolve** — llms-full.txt links resolve successfully.
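
What "expected markdown structure" means here is not spelled out by the report. As an assumption, a validator might require the snapshot to open with a top-level H1 and contain further headings delimiting each concatenated document, rather than one undifferentiated blob:

```python
def has_markdown_structure(text: str) -> bool:
    """Heuristic (an assumption, not the scorer's actual rule): the
    snapshot should start with an H1 and contain more headings that
    separate the concatenated documents."""
    lines = text.splitlines()
    if not lines or not lines[0].startswith("# "):
        return False  # should open with a top-level H1
    return any(line.startswith("#") for line in lines[1:])
```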

### Agent Skills

- **Score:** 0/100 · **Status:** fail
- **Summary:** skill.md has 1 failing check.
- **Rationale:** Agent skills provide product-specific operating guidance that plain documentation pages do not encode on their own.
- **Reference:** [skill.md guide](https://www.mintlify.com/docs/ai/skillmd)

**Checks**

- ❌ **Skill MD** — No agent skill definition was discovered.

### MCP Server

- **Score:** 0/100 · **Status:** fail
- **Summary:** MCP has 1 failing check.
- **Rationale:** A discoverable MCP server lets agents use first-class tools instead of scraping pages and inferring behavior.
- **Reference:** [MCP guide](https://www.mintlify.com/docs/ai/model-context-protocol)

**Checks**

- ❌ **MCP Server Discoverable** — No MCP server was discovered at the expected endpoints.
- ⏭️ **MCP Tool Count** — Skipped because the MCP server was not discoverable.
