Introduction

SENTi-radar’s edge functions are serverless Deno-based endpoints deployed on Supabase that power the entire sentiment analysis workflow. These functions handle data collection from multiple social platforms, AI-powered sentiment analysis, insight generation, and automated monitoring.

Architecture

The edge functions follow an orchestrator pattern:
  1. analyze-topic - Main orchestrator that coordinates all other functions
  2. Data Collection - Platform-specific scrapers (Twitter/X, Reddit, YouTube)
  3. Analysis - Gemini-powered sentiment and emotion detection
  4. Insights - AI-generated reports and recommendations
  5. Monitoring - Scheduled scans and crisis alerts
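The orchestrator pattern above can be sketched as follows. This is an illustrative sketch only, not the actual analyze-topic source: the `orchestrate` helper, the `Fetcher` signature, and the result shape are assumptions, chosen to show how per-platform failures can be tolerated without aborting the pipeline.

```typescript
// Hypothetical sketch of the analyze-topic orchestration flow; function and
// type names are illustrative, not the real implementation.
type StageResult = { success: boolean; error?: string };
type Fetcher = (topicId: string) => Promise<StageResult>;

// Run the platform fetchers, then analysis, tolerating per-stage failure so
// one dead data source does not abort the whole pipeline.
async function orchestrate(
  topicId: string,
  fetchers: Record<string, Fetcher>,
  analyze: Fetcher,
): Promise<{ fetched: string[]; failed: string[]; analyzed: boolean }> {
  const fetched: string[] = [];
  const failed: string[] = [];
  for (const [name, fetcher] of Object.entries(fetchers)) {
    const result = await fetcher(topicId);
    (result.success ? fetched : failed).push(name);
  }
  // Analysis still runs as long as at least one source succeeded.
  const analyzed = fetched.length > 0 && (await analyze(topicId)).success;
  return { fetched, failed, analyzed };
}
```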

Function Catalog

analyze-topic

Orchestrator function that creates topics and coordinates data fetching and analysis

fetch-twitter

Scrapes X/Twitter posts via Scrape.do with YouTube and Parallel.ai fallbacks

fetch-reddit

Extracts Reddit discussions and comments using web scraping

fetch-youtube

Fetches YouTube video comments using official Google API

analyze-sentiment

AI-powered sentiment and emotion classification using Gemini 1.5 Flash

generate-insights

Generates strategic recommendations and analysis reports with Gemini 2.0 Flash

scheduled-monitor

Automated background monitoring and crisis alerting system

Authentication

All edge functions require Supabase authentication:
curl -X POST https://your-project.supabase.co/functions/v1/function-name \
  -H "Authorization: Bearer YOUR_SUPABASE_ANON_KEY" \
  -H "Content-Type: application/json" \
  -d '{"topic_id": "uuid-here"}'
Most functions are designed to be called internally by the orchestrator using the service role key. Only analyze-topic and generate-insights are typically called directly from client applications.
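For client applications, the curl invocation above can be assembled programmatically. The helper below is a minimal sketch, not part of any SDK: the function name, option names, and return shape are assumptions; only the URL pattern, headers, and payload follow the example above.

```typescript
// Illustrative helper that builds the request shown in the curl example.
interface InvokeOptions {
  projectUrl: string;   // e.g. "https://your-project.supabase.co"
  anonKey: string;      // Supabase anon key for client-side calls
  functionName: string; // e.g. "analyze-topic" or "generate-insights"
  body: Record<string, unknown>;
}

function buildInvokeRequest(opts: InvokeOptions): {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
} {
  return {
    url: `${opts.projectUrl}/functions/v1/${opts.functionName}`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${opts.anonKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(opts.body),
    },
  };
}

// Usage: const { url, init } = buildInvokeRequest({ ... }); await fetch(url, init);
```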

Environment Variables

  • SUPABASE_URL (string, required) - Your Supabase project URL (auto-injected)
  • SUPABASE_SERVICE_ROLE_KEY (string, required) - Service role key for database access (auto-injected)
  • SCRAPE_DO_TOKEN (string) - API token for Scrape.do web scraping service (required for X/Twitter and Reddit)
  • YOUTUBE_API_KEY (string) - Google YouTube Data API v3 key (required for YouTube comments)
  • GEMINI_API_KEY (string) - Google Gemini API key for AI analysis and insights (required for sentiment analysis)
  • PARALLEL_API_KEY (string) - Parallel AI API key for web search fallback (optional)

CORS Configuration

All functions include CORS headers allowing cross-origin requests:
const corsHeaders = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type"
};
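In practice, a Deno edge function typically answers the browser's OPTIONS preflight with these headers before handling the real request. The helper below is a hedged sketch of that convention, not the actual handler code; its name and return shape are illustrative.

```typescript
// The shared CORS headers, as shown above.
const corsHeaders: Record<string, string> = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
};

// Illustrative sketch: short-circuit OPTIONS preflights, and merge the CORS
// headers into every other response. A real handler would run its business
// logic for non-OPTIONS requests and attach these headers to its result.
function corsResponse(method: string): { status: number; headers: Record<string, string> } {
  if (method === "OPTIONS") {
    // Preflight: reply immediately with the CORS headers and no body.
    return { status: 200, headers: corsHeaders };
  }
  return { status: 200, headers: { ...corsHeaders, "Content-Type": "application/json" } };
}
```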

Error Handling

All functions return consistent error formats:
Error Response
{
  "success": false,
  "error": "Descriptive error message"
}
Functions designed for internal orchestrator use (like fetch-twitter, analyze-sentiment) return success: false with HTTP 200 instead of 5xx errors to allow graceful degradation in the pipeline.
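The "soft failure" convention described above can be sketched as a small helper. This is illustrative only (the function name and `internal` flag are assumptions): internal pipeline functions report failure in the JSON body while still answering HTTP 200, so the orchestrator can degrade gracefully instead of treating the call as a transport error.

```typescript
// Illustrative sketch of the error-response convention: internal functions
// return success: false with HTTP 200; client-facing functions may surface
// a real 5xx instead.
function errorResponse(
  message: string,
  internal: boolean,
): { status: number; body: { success: false; error: string } } {
  return {
    status: internal ? 200 : 500,
    body: { success: false, error: message },
  };
}
```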

Rate Limits

External API Limits:
  • Scrape.do: Varies by plan (quota errors return scrape_status: "quota")
  • YouTube API: 10,000 units/day (each search = 100 units, comments = 1 unit)
  • Gemini API: 15 RPM free tier, 360 RPM paid tier
  • Parallel AI: Varies by plan
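The YouTube quota arithmetic above can be made concrete. The helper name and the example workload below are illustrative; the unit costs (100 per search, 1 per comment fetch) and the 10,000 units/day budget come from the list above.

```typescript
// Worked example of the YouTube Data API quota math from the list above.
const DAILY_QUOTA = 10_000; // units per day
const SEARCH_COST = 100;    // units per search request
const COMMENT_COST = 1;     // units per comment fetch

function youtubeUnitsUsed(searches: number, commentFetches: number): number {
  return searches * SEARCH_COST + commentFetches * COMMENT_COST;
}

// e.g. 20 topic scans, each doing 1 search plus 100 comment fetches:
// 20 * 100 + 2000 * 1 = 4,000 units, well under the daily budget.
```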

Next Steps

Quick Start

Get started with your first sentiment analysis

Analyze Topic

Main orchestration endpoint

Fetch Twitter

X/Twitter data collection

Analyze Sentiment

AI sentiment analysis
