7-tool plugin for taxonomy graph operations — semantic search, LLM-assisted content tagging, term mapping, graph ingestion, rebuild, pruning, and Hebbian edge reinforcement across 3,205 nodes.
Plugin ID: taxonomy.core | Version: 1.1.0 | Tools: 7

The taxonomy.core plugin operates against a 3,205-node taxonomy graph (Nodes/Universe/taxonomy_graph.json) that classifies adult content across 18 super-concepts. It handles content tagging, term normalization, graph mutation, and the nightly Hebbian learning cycle. The graph is loaded once at first call and cached in module scope.
The search index stores compound tags both as full hyphenated terms and as individual tokens:
| Tag | Also searchable as |
| --- | --- |
| big-ass | big, ass |
| 18-girl | 18, girl |
| foot-fetish | foot, fetish |
| gay-man | gay, man |
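A minimal sketch of how such a dual-keyed index could be built. The `buildSearchIndex` helper and the Map-of-Sets layout are illustrative assumptions, not the plugin's actual internals:

```javascript
// Sketch: index each compound tag under both its full hyphenated slug
// and each individual token, so "foot" and "foot-fetish" both resolve.
function buildSearchIndex(tags) {
  const index = new Map(); // key (slug or token) -> Set of canonical tags
  const put = (key, tag) => {
    if (!index.has(key)) index.set(key, new Set());
    index.get(key).add(tag);
  };
  for (const tag of tags) {
    put(tag, tag); // full hyphenated term: "big-ass"
    for (const token of tag.split("-")) put(token, tag); // tokens: "big", "ass"
  }
  return index;
}

const index = buildSearchIndex(["big-ass", "foot-fetish", "gay-man"]);
// querying the token "foot" yields the canonical tag "foot-fetish"
```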
LLM output is normalized before graph lookup: spaces become hyphens ("big ass" → "big-ass"), multi-word phrases are slugified ("18 year girl" → "18-year-girl").
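The normalization step could look like the following sketch, assuming simple lowercase-and-slugify rules (`normalizeTag` is a hypothetical name):

```javascript
// Sketch of LLM output normalization: lowercase, spaces -> hyphens,
// strip punctuation the model may emit. Rules assumed from the examples above.
function normalizeTag(raw) {
  return raw
    .trim()
    .toLowerCase()
    .replace(/\s+/g, "-")        // "big ass" -> "big-ass"
    .replace(/[^a-z0-9-]/g, "")  // drop stray punctuation
    .replace(/-+/g, "-");        // collapse duplicate hyphens
}

normalizeTag("Big Ass");      // "big-ass"
normalizeTag("18 year girl"); // "18-year-girl"
```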
The primary content classification tool. Takes a content description (or raw metadata), sends it to Ollama for tag extraction, normalizes the LLM output, validates each tag against the graph, and returns a tiered result.

Pipeline:
content description
 └─ Ollama generate (extract tags as comma-separated list)
     └─ normalize output (lowercase, spaces → hyphens)
         └─ validate each tag against taxonomy index
             └─ split into primary/secondary/tertiary by usage count threshold
Store all three tiers in user_nodes when classifying a creator’s content. Primary tags drive platform discoverability; tertiary tags drive niche audience targeting and correlation learning.
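The tier split can be sketched as a single threshold pass over per-tag usage counts. The cutoff values below are invented for illustration; the plugin's actual thresholds are not specified in this document:

```javascript
// Hypothetical tiering: split validated tags into primary/secondary/tertiary
// by each tag's usage count in the graph. Threshold values are illustrative.
function tierTags(tags, usageCounts, { primaryMin = 100, secondaryMin = 10 } = {}) {
  const result = { primary: [], secondary: [], tertiary: [] };
  for (const tag of tags) {
    const count = usageCounts[tag] ?? 0;
    if (count >= primaryMin) result.primary.push(tag);
    else if (count >= secondaryMin) result.secondary.push(tag);
    else result.tertiary.push(tag);
  }
  return result;
}

const tiers = tierTags(
  ["big-ass", "foot-fetish", "18-year-girl"],
  { "big-ass": 540, "foot-fetish": 42, "18-year-girl": 3 }
);
// high-count tags land in primary, mid-count in secondary, rare in tertiary
```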
Resolves an arbitrary term (which may or may not exist in the graph) to its canonical super_concept via MAPS_TO_CONCEPT edges. Falls back to representative_tags matching if no direct edge exists.

Use this when ingesting external data with platform-specific tag vocabularies that need to be normalized to GenieHelper’s taxonomy.
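A minimal sketch of that resolution order, assuming the graph exposes MAPS_TO_CONCEPT edges and per-concept representative_tags arrays (the field names and graph shape here are assumptions):

```javascript
// Sketch: resolve a term to its super_concept via a direct MAPS_TO_CONCEPT
// edge, falling back to representative_tags matching. Graph shape assumed.
function mapTerm(term, graph) {
  // 1. Direct MAPS_TO_CONCEPT edge from the term's node.
  const edge = graph.edges.find(
    (e) => e.type === "MAPS_TO_CONCEPT" && e.from === term
  );
  if (edge) return { concept: edge.to, via: "edge" };

  // 2. Fallback: the term appears in a concept node's representative_tags.
  for (const node of graph.nodes) {
    if (node.super_concept && (node.representative_tags ?? []).includes(term)) {
      return { concept: node.label, via: "representative_tags" };
    }
  }
  return null; // unmapped term
}
```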
Adds new nodes and edges to the taxonomy graph. Deduplicates by node label before inserting. Writes atomically via rename-on-tmp to prevent partial writes.

Parameters:
taxonomy.rebuild-graph — invalidate cache and reload
Invalidates the in-memory _graph and _index caches and forces a full reload from taxonomy_graph.json on the next call. Returns node count, edge count, and index entry count post-reload.

Call this after any external modification to taxonomy_graph.json or after a large batch ingest.
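The load-once-then-cache behavior can be sketched as module-scope variables plus an invalidation hook. The variable and function names mirror those mentioned in this document, but the implementation details are assumptions:

```javascript
// Sketch of the module-scope cache: the graph is loaded once at first
// access and held until invalidateCache() forces a reload.
let _graph = null;
let _index = null;

function invalidateCache() {
  _graph = null;
  _index = null;
}

function getGraph(loadFn) {
  if (_graph === null) _graph = loadFn(); // loaded once at first call
  return _graph;
}
```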
Removes orphaned nodes (zero edges) and nodes below a min_count threshold. Writes atomically. Calls invalidateCache() after write to force reload on next access.

Parameters:
min_count — minimum usage count to retain a node (default: 1)
dry_run — (default: true) returns removal plan without writing
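The dry_run planning pass can be sketched as a pure function over the graph: compute each node's degree, then flag nodes that are orphaned or below min_count. The `planPrune` helper and graph shape are illustrative assumptions:

```javascript
// Sketch of the prune plan: drop orphaned nodes (zero edges) and nodes
// under min_count, without writing anything (the documented dry_run default).
function planPrune(graph, { minCount = 1 } = {}) {
  const degree = new Map();
  for (const e of graph.edges) {
    degree.set(e.from, (degree.get(e.from) ?? 0) + 1);
    degree.set(e.to, (degree.get(e.to) ?? 0) + 1);
  }
  const remove = graph.nodes
    .filter((n) => (degree.get(n.label) ?? 0) === 0 || (n.count ?? 0) < minCount)
    .map((n) => n.label);
  return { remove, kept: graph.nodes.length - remove.length };
}
```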
Boosts CO_OCCURS edge weights for a set of co-appearing tags, implementing Hebbian co-occurrence learning (“neurons that fire together, wire together”). Creates new CO_OCCURS edges if none exist between the provided tags.

Parameters:
tags — array of tag labels that appeared together in a piece of content
delta — weight increment (default: 1)
persist — if true, mutates _graph in-place and writes to disk
This tool is called by the nightly Hebbian cron (scripts/cron/taxonomy-hebbian.mjs) with a global decay of 0.995 applied to all CO_OCCURS edges after boosting.
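The boost-then-decay cycle can be sketched as two passes over the edge list. The 0.995 decay factor and CO_OCCURS edge type follow the text; the `strengthen`/`decay` helper names and edge shape are illustrative:

```javascript
// Sketch of Hebbian reinforcement: bump the weight of every co-occurring
// tag pair, creating CO_OCCURS edges as needed, then apply global decay.
function strengthen(edges, tags, delta = 1) {
  for (let i = 0; i < tags.length; i++) {
    for (let j = i + 1; j < tags.length; j++) {
      const [a, b] = [tags[i], tags[j]].sort(); // canonical pair order
      let edge = edges.find(
        (e) => e.type === "CO_OCCURS" && e.from === a && e.to === b
      );
      if (!edge) {
        edge = { type: "CO_OCCURS", from: a, to: b, weight: 0 };
        edges.push(edge);
      }
      edge.weight += delta; // "fire together, wire together"
    }
  }
  return edges;
}

function decay(edges, factor = 0.995) {
  for (const e of edges) if (e.type === "CO_OCCURS") e.weight *= factor;
  return edges;
}
```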
A separate cron job runs at 3 AM daily to propagate engagement signals into the graph:
1. Fetch user_nodes updated in last 24h from Directus
2. Group tags by creator
3. taxonomy.strengthen for each co-occurring tag pair
4. Apply global decay (0.995) to all CO_OCCURS edges
5. Prune edges below min_weight
6. Persist atomically
This keeps the CO_OCCURS edge weights aligned with actual creator content patterns rather than generic platform co-occurrence data.
The taxonomy graph feeds directly into the memory layer’s synaptic propagation retrieval. When memory.recall’s activate_skills pipeline propagates a stimulus query, CO_OCCURS edges from the taxonomy graph act as associative pathways — a tag activated by a content query pulls in its strongly co-occurring neighbors, widening the skill activation net.