Cross-session agent memory for skipping redundant enrichment
Supermemory provides persistent, searchable memory for the JARVIS pipeline. When a person is researched once, their dossier is stored in Supermemory, allowing future encounters to skip expensive web agent research and retrieve cached intelligence instantly.
Some enrichment sources are also costly to hit repeatedly: PimEyes reverse image search, for example, enforces strict rate limits.
Supermemory acts as a smart cache:

✅ Store complete dossiers after first enrichment
✅ Hybrid search (semantic + keyword) finds matches even with name variations
✅ Metadata filtering by source and timestamp
✅ Automatic relevance scoring to avoid false positives
Extract the dossier from Supermemory’s response format:
backend/memory/supermemory_client.py
```python
import json


def _parse_dossier(raw: str, name: str) -> dict | None:
    """Extract dossier dict from SuperMemory memory/chunk."""
    try:
        obj = json.loads(raw)
        if isinstance(obj, dict) and "dossier" in obj:
            return obj["dossier"]
        if isinstance(obj, dict):
            return obj
    except (json.JSONDecodeError, TypeError):
        pass
    # SuperMemory may return summarized text instead of raw JSON
    if raw and name.lower() in raw.lower():
        return {"raw_memory": raw}
    return None
```
Supermemory is checked before running expensive web agents:
backend/orchestration/pipeline.py
```python
import logging

from backend.memory.supermemory_client import SuperMemoryClient
from backend.agents.orchestrator import ResearchOrchestrator

logger = logging.getLogger(__name__)


async def enrich_person(
    person_name: str,
    photo_url: str,
    memory: SuperMemoryClient,
    orchestrator: ResearchOrchestrator,
) -> dict:
    # 1. Check Supermemory cache
    cached = await memory.search_person(person_name)
    if cached:
        logger.info(f"Cache hit for {person_name}, skipping web research")
        return {
            "person_name": person_name,
            "photo_url": photo_url,
            "dossier": cached,
            "source": "supermemory_cache",
        }

    # 2. Cache miss: run full research pipeline
    logger.info(f"Cache miss for {person_name}, starting web research")
    research_result = await orchestrator.research_person(
        person_name=person_name,
        photo_url=photo_url,
    )

    # 3. Store result in Supermemory for future use
    if research_result.get("dossier"):
        await memory.store_dossier(
            person_name=person_name,
            dossier_data=research_result["dossier"],
        )

    return research_result
```
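One race this flow doesn't cover: two frames of the same person arriving together both miss the cache and launch duplicate research runs. A per-name lock with a re-check inside it closes the gap; this is a sketch with stand-in functions, not the actual pipeline code:

```python
import asyncio
from collections import defaultdict

_cache: dict[str, str] = {}
_locks: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)
research_calls = 0  # how many times the expensive path actually ran


async def slow_research(name: str) -> str:
    """Stand-in for the full web research pipeline."""
    global research_calls
    research_calls += 1
    await asyncio.sleep(0.01)
    return f"dossier for {name}"


async def enrich_once(name: str) -> str:
    # Serialize per name: concurrent misses for the same person wait
    # for the first request instead of researching in parallel.
    async with _locks[name]:
        if name not in _cache:  # re-check the cache inside the lock
            _cache[name] = await slow_research(name)
    return _cache[name]


async def main() -> list[str]:
    # Five simultaneous sightings of the same person...
    return await asyncio.gather(*(enrich_once("Alice Smith") for _ in range(5)))


results = asyncio.run(main())
print(research_calls)  # 1: only the first caller did the research
```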
End-to-end usage:

```python
import asyncio

from backend.memory.supermemory_client import SuperMemoryClient


async def main():
    async with SuperMemoryClient() as memory:
        # First encounter: cache miss
        print("First lookup...")
        result1 = await memory.search_person("Alice Smith")
        print(f"Result: {result1}")  # None

        # Store dossier
        print("Storing dossier...")
        dossier = {
            "summary": "AI researcher at OpenAI. Stanford PhD.",
            "title": "Research Scientist",
            "company": "OpenAI",
            "work_history": [
                {
                    "role": "Research Scientist",
                    "company": "OpenAI",
                    "period": "2022-present",
                }
            ],
            "social_profiles": {
                "linkedin": "https://linkedin.com/in/alicesmith",
                "github": "https://github.com/alicesmith",
            },
        }
        doc_id = await memory.store_dossier("Alice Smith", dossier)
        print(f"Stored with ID: {doc_id}")

        # Second encounter: cache hit
        print("Second lookup...")
        result2 = await memory.search_person("Alice Smith")
        print(f"Result: {result2['summary']}")  # Cache hit!

        # Fuzzy match: slight name variation
        print("Fuzzy match...")
        result3 = await memory.search_person("A. Smith")
        print(f"Result: {result3['summary'] if result3 else 'No match'}")  # May still match!


if __name__ == "__main__":
    asyncio.run(main())
```
Output:
```
First lookup...
Result: None
Storing dossier...
Stored with ID: sm_doc_xyz123
Second lookup...
Result: AI researcher at OpenAI. Stanford PhD.
Fuzzy match...
Result: AI researcher at OpenAI. Stanford PhD.
```
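A semantic hit on "A. Smith" is convenient, but an initial could belong to anyone sharing a surname. A cheap sanity check before trusting a fuzzy hit is to verify the query name is token-compatible with the stored name (a hypothetical guard, not part of the client):

```python
def names_compatible(query: str, stored: str) -> bool:
    """True if every token of the query matches a stored-name token,
    allowing single-letter initials ('A.' matches 'Alice')."""
    stored_tokens = stored.lower().split()
    for tok in query.lower().replace(".", "").split():
        if len(tok) == 1:
            ok = any(s.startswith(tok) for s in stored_tokens)
        else:
            ok = tok in stored_tokens
        if not ok:
            return False
    return True


print(names_compatible("A. Smith", "Alice Smith"))   # True
print(names_compatible("Bob Smith", "Alice Smith"))  # False
```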
Without the cache, every encounter pays the full research cost:

```python
# Every person requires full research
await exa.search(person_name)            # ~2s
await browser_agents.research(urls)      # ~45s
await synthesize_dossier(fragments)      # ~5s
# Total: ~52 seconds per person
```
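That cost only has to be paid once per person. A back-of-envelope comparison, taking the ~52 s figure above and assuming roughly half a second for a Supermemory cache hit (the hit latency is an assumption):

```python
RESEARCH_S = 52.0    # full pipeline, per the breakdown above
CACHE_HIT_S = 0.5    # assumed round-trip for a cached dossier


def total_seconds(encounters: int, unique_people: int) -> float:
    """First sighting of each person pays full research; repeats hit the cache."""
    repeats = encounters - unique_people
    return unique_people * RESEARCH_S + repeats * CACHE_HIT_S


# 100 encounters spread over 30 distinct people:
print(total_seconds(100, 30))   # 1595.0 seconds, vs 5200.0 with no cache
```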
Supermemory uses container tags to namespace data:
```python
# All JARVIS dossiers use the same tag
_CONTAINER_TAG = "specter-dossiers"

# Store with tag
payload = {
    "content": dossier_json,
    "containerTags": [_CONTAINER_TAG],
    ...
}

# Search within tag
search_payload = {
    "q": person_name,
    "containerTag": _CONTAINER_TAG,  # Only search JARVIS data
    ...
}
```
This prevents cross-contamination if you use Supermemory for other projects.
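If one Supermemory account backs several projects, a thin helper keeps every write tagged consistently rather than relying on each call site to remember the tag (a sketch; only the `content` and `containerTags` keys mirror the payload above, anything else is illustrative):

```python
_CONTAINER_TAG = "specter-dossiers"


def tagged_payload(content: str, **extra) -> dict:
    """Build a store payload that always carries the project's container tag."""
    return {"content": content, "containerTags": [_CONTAINER_TAG], **extra}


payload = tagged_payload('{"summary": "AI researcher"}', metadata={"source": "jarvis"})
print(payload["containerTags"])  # ['specter-dossiers']
```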