GenieHelper is not a SaaS product. It runs on a dedicated server you subscribe to, powered by locally-hosted AI models — uncensored, unfiltered, and governed by your rules, not a corporate content policy. Every piece of data your business generates stays on that server: fan relationships, platform credentials, earnings history, media assets.
GenieHelper runs on IONOS dedicated hardware — Ubuntu 24, 16 GB RAM, 1 TB NVMe. No cloud uploads, no vendor data harvesting, no OpenAI dependency.

What GenieHelper does

| Capability | What actually happens |
| --- | --- |
| Multi-platform management | Connect OnlyFans, Fansly, Instagram, TikTok from one interface — scrape stats, track engagement, manage sessions |
| AI content drafting | Uncensored Ollama models write captions, fan messages, and post concepts in your voice |
| Content taxonomy | A 3,205-node proprietary adult taxonomy classifies every piece of content automatically |
| Post scheduling | AI-suggested calendar with tier-enforced queue limits and BullMQ reliability |
| Fan intelligence | Per-fan CRM with 71 engagement fields, purchase history, message sentiment, and follow-up tasks |
| Media pipeline | Watermarking, teaser clipping, thumbnail generation, and metadata stripping — all local FFmpeg |
| HITL scraping | Human-in-the-loop browser automation for platforms that block headless Chrome |
| Private AI chat | Dolphin 3 8B (Llama 3.1) + Qwen 2.5 running locally — zero OpenAI dependency |
| Custom content requests | Track fan requests from initial ask through negotiation, fulfillment, and payout |
| Leak tracing | Fan-specific steganographic watermarks embedded at delivery — identify the source of any leak |
| Fan memory injection | Durable per-fan facts surface automatically when drafting replies |
| Per-fan voice personas | Each fan gets a customized AI writing style — vocabulary, tone, and formatting specific to that relationship |
| Creator persona system | Define your own voice with lexicon rules, banned phrases, and tone preferences |
| Automation triggers | Rule-based chains fire on fan behavior: win-back, upsell, birthday, re-engagement — all auditable and pausable |

Capability areas

Stage UI

The theatrical UI model in which the AI drives the interface: Center Stage, Left/Right Wings, The Pit, and the NavRail are assembled by the agent, not the user.

Fan CRM

71-field fan profiles, cross-platform identity linking, fan memory injection, per-fan AI personas, scoring, and subscription lifecycle tracking.

Content & Publishing

Idea inbox, post drafts, editorial series, campaign management, scheduling queue, and the creator persona system.

Media Pipeline

BullMQ-powered local FFmpeg processing — convert, resize, crop, watermark, clip, compress, and steganographically mark every asset.
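As a sketch of how a local FFmpeg step in this pipeline can stay testable, the helper below builds the argument list for a watermark job; the function name, job shape, and overlay position are illustrative assumptions, not the actual pipeline code, and a BullMQ worker would spawn `ffmpeg` locally with these args.

```javascript
// Build the ffmpeg argument list for a watermark job. Job fields
// (input, watermark, output, position) are a hypothetical shape.
function buildWatermarkArgs(job) {
  const { input, watermark, output, position = "10:10" } = job;
  return [
    "-i", input,                              // source asset
    "-i", watermark,                          // watermark image
    "-filter_complex", `overlay=${position}`, // composite watermark at x:y
    "-codec:a", "copy",                       // pass audio through untouched
    "-y", output,                             // overwrite output if it exists
  ];
}

// A worker with concurrency 1 would then run jobs sequentially, e.g.:
// const { Worker } = require("bullmq");
// new Worker("media", async (job) => {
//   await spawnFfmpeg(buildWatermarkArgs(job.data)); // spawnFfmpeg is hypothetical
// }, { connection, concurrency: 1 });
```

Keeping argument construction pure makes each job type unit-testable without Redis or FFmpeg installed.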

Analytics

Earnings history, creator goals, platform growth curves, campaign ROI, and live post performance snapshots.

Automation & Messaging

Message templates, segment broadcasts, rule-based automation chains, and the HITL approval queue.

Subscription tiers

| Tier | Price | Audience |
| --- | --- | --- |
| Starter | $0/mo | Try the platform |
| Creator | $49/mo | Solo creators |
| Pro | $149/mo | Power users with advanced fan CRM access |
| Studio | $499/mo | Agencies and multi-account creators |
Feature availability is enforced at the tier level. The subscriptionValidator.js middleware checks quotas against tier_rate_limits.json on every API call. The Pro and Studio tiers unlock extended fan CRM access via the Directus Pro Policy.
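A minimal sketch of that enforcement pattern, assuming a hypothetical quota shape for tier_rate_limits.json and an Express-style request pipeline (the names and limit values here are illustrative, not the actual contents of subscriptionValidator.js):

```javascript
// Hypothetical shape of tier_rate_limits.json.
const tierRateLimits = {
  starter: { scheduledPosts: 5,        aiDraftsPerDay: 10 },
  creator: { scheduledPosts: 50,       aiDraftsPerDay: 200 },
  pro:     { scheduledPosts: 200,      aiDraftsPerDay: 1000 },
  studio:  { scheduledPosts: Infinity, aiDraftsPerDay: Infinity },
};

// Pure quota check: given a tier, a quota key, and current usage,
// decide whether the request may proceed.
function checkQuota(tier, quotaKey, currentUsage) {
  const limits = tierRateLimits[tier];
  if (!limits || !(quotaKey in limits)) {
    return { allowed: false, reason: "unknown tier or quota" };
  }
  if (currentUsage >= limits[quotaKey]) {
    return { allowed: false, reason: "quota exceeded" };
  }
  return { allowed: true };
}

// Express-style middleware wrapping the check, as the validator might.
function subscriptionValidator(quotaKey) {
  return (req, res, next) => {
    const { tier, usage } = req.user; // assumed populated by auth middleware
    const result = checkQuota(tier, quotaKey, usage[quotaKey] ?? 0);
    if (!result.allowed) return res.status(429).json({ error: result.reason });
    next();
  };
}
```

Separating the pure check from the middleware means tier rules can be tested without spinning up the API.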

Technical foundation

| Layer | Technology |
| --- | --- |
| LLM inference | Ollama — Dolphin 3 8B, dolphin-mistral 7B, Qwen 2.5 (CPU-bound, ~4.8 GB pinned) |
| Agent orchestration | AnythingLLM fork — multi-user, per-workspace isolation |
| CMS / data layer | Directus 11 — all collections, flows, and extensions |
| Primary database | PostgreSQL 16 — pgvector (RAG), AGE (graph), supabase_vault (credential encryption) |
| Job queue | BullMQ + Redis — concurrency 1, sequential execution, 9 core job types |
| Browser automation | Stagehand (Playwright) — HITL escalation for blocked platforms |
| Frontend | React 18 + Vite + Tailwind — 20 pages, 17 panels |
| Agent tools | Unified MCP server — 83 tools across 6 namespaced plugins |
| Memory | Python + DuckDB — 191 skills, Hebbian decay, RRF retrieval, LIF neurons |

Data sovereignty

All inference runs locally via Ollama. Nothing leaves the VPS. Platform credentials are encrypted at rest using AES-256-GCM and stored in the PostgreSQL vault extension. Fan data, earnings, and media assets never feed an external training pipeline.
GenieHelper is designed for adult content creators and handles sensitive data. All media processing is local. No asset is transmitted to a third-party service during processing.
