# Help Wefunder

[help.wefunder.com](https://help.wefunder.com)

- **Overall score:** 39/100 (Grade F)
- **Checks passed:** 16 / 29
- **Last computed:** 2026-05-12

## Components

### Content Discoverability

- **Score:** 67/100 · **Status:** fail
- **Summary:** 2 failed across 6 AFDocs checks.
- **Rationale:** Agents need a clear entry point and crawl map before they can reliably discover the right pages.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **LLMS TXT Exists** — llms.txt found at 1 location(s)
- ✅ **LLMS TXT Valid** — llms.txt follows the proposed structure (H1, blockquote, heading-delimited link sections)
- ✅ **LLMS TXT Size** — llms.txt is 38,197 characters (under 50,000 threshold)
- ❌ **LLMS TXT Links Resolve** — 0/15 same-origin sampled links resolve (0%); all 15 are broken, and 1 external link also failed (possibly bot detection or rate limiting). In total, 16 of 17 links in your llms.txt return errors. A stale llms.txt with broken links is worse than no llms.txt at all: it sends agents down dead ends with high confidence.
- ✅ **LLMS TXT Links Markdown** — 15/15 same-origin sampled links point to markdown content (100%) (2 external links excluded)
- ❌ **LLMS TXT Directive** — No llms.txt directive found in any of 15 sampled pages. Add an agent-facing blockquote near the top of each page (e.g., `> For the complete documentation index, see [llms.txt](/llms.txt)`). The directive can be visually hidden with CSS while remaining accessible to agents.
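
As a sketch of what the directive check is looking for, a detector might scan the first few lines of each page's markdown for a blockquote that links to llms.txt. The ten-line window and the matching rules below are assumptions for illustration, not the auditor's actual implementation:

```python
def has_llms_directive(markdown: str, window: int = 10) -> bool:
    """Return True if a blockquote mentioning llms.txt appears near the top.

    `window` is an assumed cutoff for "near the top" of the page.
    """
    for line in markdown.splitlines()[:window]:
        stripped = line.strip()
        # A directive is assumed to be a blockquote referencing llms.txt.
        if stripped.startswith(">") and "llms.txt" in stripped:
            return True
    return False
```

Any page whose first few lines include the suggested blockquote would pass this heuristic, whether or not the blockquote is visually hidden.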

### Markdown Availability

- **Score:** 0/100 · **Status:** fail
- **Summary:** 2 failed across 2 AFDocs checks.
- **Rationale:** When markdown is available directly, agents spend less effort stripping presentation markup and guessing structure.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ❌ **Markdown Url Support** — No sampled pages return markdown when `.md` is appended to the URL (0/15 tested). Configure your docs platform to serve `.md` variants for all documentation pages.
- ❌ **Content Negotiation** — Server ignores the `Accept: text/markdown` header and returns HTML (0/15 sampled pages return markdown). Some agents (Claude Code, Cursor, OpenCode) request markdown this way; configure your server to honor content negotiation.
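
Honoring the header can be sketched as a small server-side negotiation helper. Treating `text/x-markdown` as equivalent and falling back to HTML are assumptions here, not documented behavior of any particular agent:

```python
def negotiate_content_type(accept_header: str) -> str:
    """Pick a response media type from an HTTP Accept header.

    Prefers markdown whenever the client lists it; falls back to HTML.
    Quality values (";q=...") are stripped rather than ranked, which is a
    simplification of full RFC 9110 negotiation.
    """
    accepted = [part.split(";")[0].strip().lower()
                for part in accept_header.split(",") if part.strip()]
    if "text/markdown" in accepted or "text/x-markdown" in accepted:
        return "text/markdown"
    return "text/html"
```

A request like `Accept: text/markdown, text/html;q=0.9` would then receive the markdown variant instead of the rendered HTML page.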

### Page Size and Truncation Risk

- **Score:** 0/100 · **Status:** fail
- **Summary:** 2 failed and 1 skipped across 4 AFDocs checks.
- **Rationale:** Large pages and delayed primary content increase truncation risk and make retrieval less reliable.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ❌ **Rendering Strategy** — 15 of 15 sampled pages appear to be client-side rendered SPA shells (`__next` detected). Agents using plain HTTP fetches receive an empty shell with no documentation content; enable server-side rendering or pre-rendering for documentation pages.
- ⏭️ **Page Size Markdown** — Skipped: dependency check did not pass
- ✅ **Page Size Html** — All 15 sampled pages convert under 50K chars (median 819, 71% boilerplate)
- ❌ **Content Start Position** — 15 of 15 sampled pages have content starting past 50% of the converted output (worst: 100%), so agents may never reach the documentation content. Move or remove inline CSS/JS that precedes the content area.
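
The content-start metric can be approximated as the offset of the first recognizable documentation element divided by the total length of the converted output. The "first markdown heading" heuristic below is an assumption about how the check works, not its actual definition:

```python
import re

def content_start_ratio(converted: str) -> float:
    """Fraction of the converted page that precedes the first markdown heading.

    1.0 means no heading was found, i.e. content effectively never starts.
    """
    match = re.search(r"^#{1,6} ", converted, flags=re.MULTILINE)
    if match is None:
        return 1.0
    return match.start() / max(len(converted), 1)
```

Under this heuristic, a page whose inline CSS/JS pushes the first heading past the halfway mark would fail the 50% threshold reported above.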

**Evidence**

- **Score cap:** 39 (rendering-strategy: 75%+ of pages affected)

### Content Structure

- **Score:** 100/100 · **Status:** pass
- **Summary:** 3 AFDocs checks pass.
- **Rationale:** Predictable sections, valid code fences, and serialized tabs make the content easier for agents to parse correctly.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Tabbed Content Serialization** — No tabbed content detected across 15 sampled pages
- ✅ **Section Header Quality** — No tabbed content found; header quality check not applicable
- ✅ **Markdown Code Fence Validity** — All code fences properly closed (0 fences found across 1 sampled page)
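
The fence-validity check is simple to reason about: an unclosed fence leaves an odd number of fence markers in the page. A minimal sketch (ignoring tilde fences and indented code blocks, which a real markdown parser would also handle):

```python
FENCE = "`" * 3  # triple backtick, built programmatically to keep this snippet readable

def unclosed_fences(markdown: str) -> int:
    """Return 1 if a code fence is left unclosed (odd marker count), else 0."""
    markers = sum(1 for line in markdown.splitlines()
                  if line.lstrip().startswith(FENCE))
    return markers % 2
```

A page with zero fences trivially passes, which matches the result reported above.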

### URL Stability and Redirects

- **Score:** 36/100 · **Status:** fail
- **Summary:** 1 failed across 2 AFDocs checks.
- **Rationale:** Stable URLs and sane redirect behavior prevent retrieval drift and broken tool references.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ❌ **Http Status Codes** — 15 of 15 sampled pages return 200 for non-existent URLs (soft 404). Agents then try to extract information from the error-page content instead of recognizing the page is missing; configure your server to return 404 for pages that don't exist.
- ✅ **Redirect Behavior** — No redirects detected across 15 sampled pages
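
Fixing the soft-404 behavior means returning a real 404 status rather than an error-looking page with a 200. A minimal sketch of the routing decision, with a hypothetical route table standing in for your actual page inventory:

```python
from http import HTTPStatus

KNOWN_PATHS = {"/", "/llms.txt", "/getting-started"}  # hypothetical route table

def resolve_status(path: str) -> int:
    """Return 200 for known documentation paths and a hard 404 otherwise."""
    if path in KNOWN_PATHS:
        return HTTPStatus.OK.value
    # A genuine 404 lets agents recognize the page is missing instead of
    # trying to extract meaning from error-page boilerplate.
    return HTTPStatus.NOT_FOUND.value
```

The 404 body can still be a friendly "not found" page; it is the status code that tells agents to stop reading.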

### Observability and Content Health

- **Score:** 6/100 · **Status:** fail
- **Summary:** 1 failed and 2 skipped across 3 AFDocs checks.
- **Rationale:** Coverage, parity, and cache behavior determine whether agents can trust the content they retrieve.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ⏭️ **LLMS TXT Freshness** — No sitemap found; cannot assess llms.txt freshness without a sitemap as ground truth
- ⏭️ **Markdown Content Parity** — Skipped: dependency check did not pass
- ❌ **Cache Header Hygiene** — 15 of 16 endpoints have aggressive caching (>24h) or missing cache headers. Set `max-age` under 3600, or add `must-revalidate` with an ETag/Last-Modified so content updates reach agents promptly.
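
The recommended policy can be sketched as a tiny helper that caps `max-age` and adds revalidation hints. The 3600-second cap mirrors the check's threshold; the exact header shapes are a sketch, not a drop-in config for any particular server:

```python
from typing import Optional

def cache_headers(etag: Optional[str] = None, max_age: int = 3600) -> dict:
    """Build Cache-Control headers that let agents pick up content updates.

    Caps max-age at one hour and asks caches to revalidate; includes an
    ETag when one is available so revalidation can be cheap (304s).
    """
    max_age = min(max_age, 3600)
    headers = {"Cache-Control": f"max-age={max_age}, must-revalidate"}
    if etag is not None:
        headers["ETag"] = etag
    return headers
```

With an ETag present, a stale cache entry costs one conditional request instead of a full re-download, so short lifetimes stay cheap.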

### Authentication and Access

- **Score:** 100/100 · **Status:** partial
- **Summary:** 1 skipped across 2 AFDocs checks.
- **Rationale:** Agents need either public access or a clear alternative path when documentation is gated behind auth.
- **Reference:** [AFDocs reference](https://afdocs.dev)

**Checks**

- ✅ **Auth Gate Detection** — All 15 sampled pages are publicly accessible
- ⏭️ **Auth Alternative Access** — All docs pages are publicly accessible; no alternative access paths needed

### Full Content Discoverability

- **Score:** 100/100 · **Status:** pass
- **Summary:** llms-full.txt passes all checks.
- **Rationale:** A full-document snapshot gives long-context agents a single canonical corpus to ingest without repeated crawling.
- **Reference:** [llms-full.txt guide](https://www.mintlify.com/docs/ai/llmstxt#llms-full-txt)

**Checks**

- ✅ **LLMS Full Exists** — Found llms-full.txt.
- ✅ **LLMS Full Size** — llms-full.txt size is within the expected range.
- ✅ **LLMS Full Valid** — llms-full.txt has a recognizable markdown structure.
- ✅ **LLMS Full Links Resolve** — llms-full.txt links resolve successfully.

### Agent Skills

- **Score:** 0/100 · **Status:** fail
- **Summary:** skill.md has 1 failing check.
- **Rationale:** Agent skills provide product-specific operating guidance that plain documentation pages do not encode on their own.
- **Reference:** [skill.md guide](https://www.mintlify.com/docs/ai/skillmd)

**Checks**

- ❌ **Skill MD** — No agent skill definition was discovered.

### MCP Server

- **Score:** 100/100 · **Status:** pass
- **Summary:** MCP passes all checks.
- **Rationale:** A discoverable MCP server lets agents use first-class tools instead of scraping pages and inferring behavior.
- **Reference:** [MCP guide](https://www.mintlify.com/docs/ai/model-context-protocol)

**Checks**

- ✅ **MCP Server Discoverable** — Found an MCP server.
- ✅ **MCP Tool Count** — The MCP server exposes tools.
