Overview
CareSupport isn’t a chatbot following a script. It’s a learning agent that adapts to your family’s needs. It routes messages to the right AI model based on complexity, assembles context from multiple sources, and writes corrections to its own instruction files when you teach it something new.

The agent has memory. When you correct it (“Don’t say that”, “Remember this”), it writes the correction to lessons.md immediately. You’ll see the fix applied in the next conversation.

AI Backend: Anthropic + OpenRouter
CareSupport uses two AI providers:
- Anthropic (Primary)
- OpenRouter (Fallback)

Models:
- Claude Haiku 4.5 (fast tier) — greetings, schedule updates, general coordination
- Claude Sonnet 4.6 (reason tier) — medication changes, multi-member coordination, onboarding
- Claude Opus 4.6 (critical tier) — emergencies, escalation triggers
Why Anthropic is the primary backend:
- Prompt caching reduces cost by 90% for repeated context (family file, skills, lessons)
- Native structured output (no JSON schema hacks)
- Better instruction-following than GPT-4o for care coordination tasks
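The primary/fallback arrangement can be sketched as follows (a minimal sketch; the function names are hypothetical, and stubs stand in for the real API calls):

```python
# Hypothetical sketch of the primary/fallback provider chain. Stub functions
# stand in for the real Anthropic and OpenRouter API calls.
def call_anthropic(prompt: str) -> str:
    raise RuntimeError("simulated outage")  # stub: pretend the primary is down

def call_openrouter(prompt: str) -> str:
    return f"[openrouter] {prompt}"  # stub: fallback provider answers

def generate(prompt: str) -> str:
    """Try Anthropic first; fall back to OpenRouter if the call fails."""
    for provider in (call_anthropic, call_openrouter):
        try:
            return provider(prompt)
        except RuntimeError:
            continue  # provider down or rate-limited: try the next one
    raise RuntimeError("all providers failed")

print(generate("hi"))  # the stubbed primary fails, so the fallback answers
```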
Intent Routing: Fast, Reason, Critical
Every message is classified into a tier BEFORE the AI call. This determines which model to use:
- Classification Logic
- Tier Costs
- Fallback Chain

Tier classification is:
- Zero latency (no extra API call)
- Zero cost (no tokens consumed)
- Deterministic (same message always routes the same way)
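A minimal sketch of what deterministic, zero-cost classification can look like (keyword sets and names here are hypothetical; the real logic is the route function in runtime/scripts/care_router.py):

```python
# Hypothetical sketch of deterministic tier routing: no API call, no tokens,
# same input always yields the same tier. Keyword sets are illustrative.
CRITICAL_KEYWORDS = {"emergency", "911", "fell", "unresponsive"}
REASON_KEYWORDS = {"medication", "dose", "onboarding", "coordinate"}

def route(message: str) -> str:
    """Classify a message into fast / reason / critical before any AI call."""
    words = set(message.lower().split())
    if words & CRITICAL_KEYWORDS:
        return "critical"   # emergencies, escalation triggers
    if words & REASON_KEYWORDS:
        return "reason"     # medication changes, multi-member coordination
    return "fast"           # greetings, schedule updates, general coordination

assert route("Mom fell and is unresponsive") == "critical"
assert route("Can we change her medication time?") == "reason"
assert route("Thanks!") == "fast"
```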
Context Assembly: What the Agent Sees
Every message triggers a multi-source context load:

1. SOUL.md (Identity)
The agent’s core identity and reasoning framework.

Loaded from: SOUL.md at repository root

Content:
- Four-step reasoning loop (LISTEN → REASON → ACT → CLOSE THE LOOP)
- Learning system explanation (“Your corrections become your instructions”)
- Voice guidelines (“Match the family’s register. Use names, not roles.”)
- Hard rules (“Never fabricate certainty about your own past actions”)
2. Agent Routing (agent_root.md)
Tells the agent which skills and playbooks to load for different message types.

Why routing is explicit: The agent shouldn’t guess which protocol to follow. If a message is about scheduling, it loads scheduling.md. If it’s about onboarding, it loads onboarding.md.
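The routing file itself isn’t reproduced on this page; a hedged sketch of what an agent_root.md routing table might contain (the topics and skill files are the ones named on this page, the layout is illustrative):

```markdown
# agent_root.md: routing (illustrative layout)

- Scheduling (rides, appointments, coverage) → load skills/scheduling.md
- Onboarding (new members) → load skills/onboarding.md
- Social (greetings, gratitude, apologies) → load skills/social.md
```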
3. Capabilities (CAN / CANNOT)
Loaded from: runtime/learning/capabilities.md

Why capabilities are explicit: The agent needs gates, not guidelines. “You cannot make medical decisions” is a constraint, not a suggestion.
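The file format isn’t reproduced on this page; a hedged sketch of a CAN / CANNOT file (only the medical-decisions line is quoted from this doc, the rest is illustrative):

```markdown
## CAN
- Coordinate schedules, rides, and appointment coverage
- Message family members when outreach is needed

## CANNOT
- Make medical decisions
- Share insurance details with non-full-access members
```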
4. Skills (Conversation Patterns)
Loaded from: runtime/learning/skills/*.md

Examples:
- onboarding.md — How to welcome new members and explain CareSupport
- social.md — How to handle greetings, gratitude, apologies
- scheduling.md — How to coordinate rides, appointments, coverage
5. Lessons (Corrections)
Two types:
- Global lessons (runtime/learning/lessons.md) — corrections from all families
- Family lessons (families/{id}/lessons.md) — corrections specific to this family

How lessons are created:
1. User corrects the agent (“That’s wrong”, “Don’t say that again”)
2. Agent captures it in the self_corrections field of its response
3. System writes to lessons.md immediately
4. Next message: agent sees the correction in its context
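An illustrative lessons.md entry, assuming one bullet per correction with a category tag (the [behavioral] tag appears in Correction Categories below; the exact file format is an assumption):

```markdown
- [behavioral] When someone says “thanks”, respond with “You’re welcome”, not “Happy to help!”
```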
6. Family Context (Filtered by Access Level)
Loaded from: families/{id}/family.md, schedule.md, medications.md

Pre-filtered by role_filter.py (see Enforcement & Safety page)

What the agent sees:
- Full-access members: Everything
- Schedule+meds members: Schedule, medications, urgent notes (no insurance, no family-only discussions)
- Schedule-only members: Schedule and urgent notes only
- Limited members: Care recipient name and care team roster only
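A minimal sketch of that pre-filtering, assuming context arrives as a dict of named sections (the real gate is role_filter.py; the level and field names here are illustrative):

```python
# Hypothetical sketch of access-level filtering. Level names follow the list
# above; section/field names are illustrative, not the real schema.
FIELDS_BY_LEVEL = {
    "full":          {"family", "schedule", "medications", "insurance", "urgent"},
    "schedule_meds": {"schedule", "medications", "urgent"},
    "schedule_only": {"schedule", "urgent"},
    "limited":       {"roster"},
}

def filter_context(context: dict, level: str) -> dict:
    """Drop every section the member's access level does not allow."""
    allowed = FIELDS_BY_LEVEL[level]
    return {k: v for k, v in context.items() if k in allowed}

ctx = {"schedule": "...", "medications": "...", "insurance": "...", "urgent": "..."}
# A schedule+meds member never sees insurance or family-only discussions:
assert "insurance" not in filter_context(ctx, "schedule_meds")
```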
7. Member Profile
Loaded from: families/{id}/members/{first_name}.md

Content:
- Communication preferences (“Prefers texts over calls”)
- Care responsibilities (“Primary driver for Tuesday appointments”)
- Personal context (“Works downtown, flexible schedule”)
- Interaction history (“2026-02-27: Requested Yada be added to team”)
8. Recent Conversation History
Loaded from: conversations/{phone}/{YYYY-MM}.log (last 50 lines)

Why 50 lines? Enough to maintain conversation continuity (“What did I ask about earlier?”) without overwhelming the context window.

Prompt Caching: 90% Cost Reduction
Anthropic’s prompt caching lets you mark sections of the system prompt as “cacheable”. If the cached prefix hasn’t changed, you pay 1/10th the cost to reload it.
- Cache Strategy
- Implementation
- Cost Example
Cached prefix (lasts 5 minutes):
- SOUL.md (identity) — never changes
- Routing + Capabilities + Skills — changes monthly
- Response format + channel guidance — never changes
- Lessons (global + family) — changes weekly
- Member identity + member profile — changes per member
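The cache strategy above can be sketched in the shape Anthropic’s Messages API accepts for system blocks (a minimal sketch; the real assembly is build_system_blocks in runtime/scripts/prompt_builder.py, and everything besides the cache_control marker is illustrative):

```python
# Illustrative cache-aware system-block assembly. The stable prefix gets an
# Anthropic "ephemeral" cache_control marker; per-member context stays uncached.
def build_system_blocks(stable_prefix: str, member_context: str) -> list[dict]:
    """Return system blocks with the stable prefix marked cacheable."""
    return [
        # Identity, routing, capabilities, skills, lessons change rarely,
        # so caching this prefix cuts its reload cost to ~1/10th.
        {"type": "text", "text": stable_prefix,
         "cache_control": {"type": "ephemeral"}},
        # Member identity and profile change per member, so leave them out
        # of the cached prefix.
        {"type": "text", "text": member_context},
    ]

blocks = build_system_blocks("SOUL.md + skills + lessons ...", "Member: ...")
# Only the first (stable) block carries the cache marker:
assert "cache_control" in blocks[0] and "cache_control" not in blocks[1]
```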
Learning System: Self-Corrections
When you correct the agent, it writes the correction to its own instruction files.

How It Works

The agent captures your correction in the self_corrections field of its response. What happens next:
1. System writes correction to families/kano/lessons.md
2. Correction is loaded into agent context on the next message
3. Agent sees: “Degitu is Liban’s aunt, not grandmother”
4. Agent never makes that mistake again (for this family)
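A minimal sketch of that persistence step, assuming corrections arrive as a list of strings (the real implementation is _persist_lessons in sms_handler.py; everything else here is illustrative):

```python
from pathlib import Path

def persist_lessons(corrections: list[str], family_id: str) -> None:
    """Append each captured correction to the family's lessons.md so the
    next message's context load picks it up."""
    lessons = Path("families") / family_id / "lessons.md"
    lessons.parent.mkdir(parents=True, exist_ok=True)
    with lessons.open("a") as f:
        for text in corrections:
            f.write(f"- {text}\n")  # one bullet per correction

persist_lessons(["Degitu is Liban's aunt, not grandmother"], "kano")
# The correction is now on disk, ready for the next context load.
```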
Correction Categories
[behavioral] — How to reason or respond
- Example: “When someone says ‘thanks’, respond with ‘You’re welcome’ not ‘Happy to help!’”
- Example: “Degitu’s work is Downtown Minneapolis, not St. Paul”
- Example: “Always populate needs_outreach when saying ‘I’ll message [name]’”
Staged Review (Testing Corrections Before Production)
CareSupport has a staging system for testing corrections without mutating production data.

Why staging exists: Without it, every test run writes to real files. Staging is a scratch pad; nothing touches production until you explicitly promote it.
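A minimal sketch of the staged-write idea, assuming staged corrections land in a separate scratch directory (the real logic is _stage_corrections in sms_handler.py; paths and names here are illustrative):

```python
from pathlib import Path

def write_correction(text: str, family_id: str, staged: bool = True) -> Path:
    """Append a correction to the staging scratch pad by default; production
    lessons.md is only written when the correction is explicitly promoted."""
    root = Path("staging") if staged else Path("families") / family_id
    target = root / "lessons.md"
    target.parent.mkdir(parents=True, exist_ok=True)
    with target.open("a") as f:
        f.write(f"- {text}\n")
    return target

# A test run lands in the scratch pad, never in production files:
p = write_correction("Degitu is Liban's aunt, not grandmother", "kano")
assert str(p).startswith("staging")
```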
Response Structure
The agent always responds with structured JSON (enforced by Anthropic’s response_format parameter):
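The exact schema isn’t shown on this page; a hedged sketch of the response shape, using only fields named elsewhere in this doc (the reply field name and all values are illustrative):

```json
{
  "reply": "Got it. I'll ask Sam about the Tuesday ride.",
  "self_corrections": ["Degitu is Liban's aunt, not grandmother"],
  "needs_outreach": ["Sam"]
}
```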
Source Reference
- Intent routing: runtime/scripts/care_router.py (route function, fallback_chain)
- Prompt builder: runtime/scripts/prompt_builder.py (build_system_blocks, cache-aware assembly)
- AI generation: sms_handler.py:487-625 (generate_response for OpenRouter, _generate_response_anthropic for Anthropic)
- Context assembly: sms_handler.py:289-397 (build_system_context, _channel_guidance)
- Learning persistence: sms_handler.py:628-683 (_persist_lessons, _stage_corrections)
- Skills directory: runtime/learning/skills/ (onboarding.md, social.md, scheduling.md)
- Lessons: runtime/learning/lessons.md (global), families/{id}/lessons.md (per-family)
Want to see how the agent learns? Read sms_handler.py:628-683 (_persist_lessons) to see how corrections flow from self_corrections → lessons.md → next message context.