Hayon is a monorepo with a clear, layered architecture. The Next.js frontend and the Express backend are fully independent applications that communicate exclusively over HTTP REST APIs and WebSockets. Apart from a small workspace package of shared Zod validation schemas, there is no shared code between them. This page covers the system topology, component responsibilities, data flow, background job processing, real-time communication, and the branch strategy used for deployments.

System overview

In production, Nginx sits in front of the Express server as a reverse proxy, handling TLS termination and request forwarding. In development, the Express server runs directly with a local HTTPS certificate.

Component breakdown

Package: frontend/
Built with Next.js 16 using the App Router, React 19, Tailwind CSS v4, and shadcn/ui (Radix UI primitives). The frontend is a pure client-consumer: it never touches MongoDB or any infrastructure service directly. All data operations go through the Express REST API.
Key responsibilities:
  • Renders the post composer, dashboard, analytics views, and settings pages
  • Uploads media to AWS S3 via a signed URL received from the backend
  • Maintains a Socket.IO connection to receive real-time post status updates
  • Manages authentication state using JWT access and refresh tokens stored in HTTP-only cookies
  • Renders analytics charts using Recharts
The frontend is deployed to Vercel in production.
Package: backend/
Entry point: backend/src/app.ts
Built with Express v5 and TypeScript. The server bootstraps in sequence: MongoDB → Redis → RabbitMQ → analytics cron → HTTP/HTTPS server with Socket.IO.
API routes are all mounted under /api:
Prefix               Purpose
/api/auth            Registration, login, Google OAuth, OTP verification
/api/posts           Create, list, update, delete, retry posts; media upload
/api/platform        Connect and manage social platform credentials
/api/generate        AI caption generation via Google Gemini
/api/analytics       Post and account analytics
/api/payments        Stripe checkout, subscription management, billing portal
/api/notifications   User notification list and read status
/api/admin           User management and platform-level admin controls
/api/profile         User profile reads and updates
/api/firebase        Firebase Cloud Messaging token registration
The /api/payments/webhook route receives the raw request body (not JSON-parsed) so that Stripe webhook signature verification works correctly.
Security middleware: Helmet (HTTP headers), CORS (origin allowlist), cookie-parser, and Morgan (structured JSON request logs forwarded to Better Stack via Winston).
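A minimal sketch of that mounting order, assuming the middleware is wired together in app.ts (the FRONTEND_URL variable is an assumption):

    // app.ts (sketch): the Stripe webhook route is mounted before express.json()
    import express from "express";
    import helmet from "helmet";
    import cors from "cors";
    import cookieParser from "cookie-parser";

    const app = express();

    app.use(helmet()); // secure HTTP headers
    app.use(cors({ origin: process.env.FRONTEND_URL, credentials: true })); // origin allowlist
    app.use(cookieParser());

    // Raw body so Stripe signature verification can see the exact payload
    app.use("/api/payments/webhook", express.raw({ type: "application/json" }));

    // JSON parsing for every other /api route
    app.use(express.json());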
Client library: Mongoose v8
Config: backend/src/config/database.ts
MongoDB is the primary datastore. It holds users, posts (with per-platform status sub-documents), platform credentials, notifications, analytics snapshots, and subscription records.
The connection uses autoIndex: false in all environments to prevent index builds from blocking startup. Indexes should be created manually or via migration scripts.
The database module implements automatic reconnection: if the connection drops, it retries every 5 seconds.
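A minimal sketch of that connection policy (the MONGODB_URI variable name is an assumption):

    // backend/src/config/database.ts (sketch)
    import mongoose from "mongoose";

    const RETRY_MS = 5_000;

    export async function connectDatabase(): Promise<void> {
      try {
        // autoIndex: false in every environment; indexes come from migration scripts
        await mongoose.connect(process.env.MONGODB_URI as string, { autoIndex: false });
      } catch (err) {
        console.error("MongoDB connection failed, retrying in 5s", err);
        setTimeout(connectDatabase, RETRY_MS);
      }
    }

    // Reconnect automatically if an established connection drops
    mongoose.connection.on("disconnected", () => setTimeout(connectDatabase, RETRY_MS));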
Client library: redis v4
Config: backend/src/config/redis.ts
Redis connects using REDIS_HOST and REDIS_PORT environment variables. It is used for short-lived caching, rate-limit counters, and any session-adjacent data that benefits from fast in-memory reads.
The client logs connection errors and successful connections through Winston.
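A minimal sketch of the client setup (the real module logs through Winston; console is used here for brevity):

    // backend/src/config/redis.ts (sketch)
    import { createClient } from "redis";

    export const redisClient = createClient({
      socket: {
        host: process.env.REDIS_HOST,
        port: Number(process.env.REDIS_PORT),
      },
    });

    redisClient.on("error", (err) => console.error("Redis connection error", err));
    redisClient.on("connect", () => console.log("Redis connected"));

    export async function connectRedis(): Promise<void> {
      await redisClient.connect(); // redis v4 requires an explicit connect()
    }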
Client library: amqplib
Config: backend/src/config/rabbitmq.ts
Worker entry: backend/src/workers/index.ts
RabbitMQ is the message broker for all asynchronous jobs. It decouples HTTP request handling from social platform API calls and analytics fetching. See the background job processing section for the full queue topology.
Scripts: pnpm run worker (inside backend/)
Files: backend/src/workers/posting.worker.ts, backend/src/workers/analytics.worker.ts
Workers run as a separate Node.js process from the web server. They connect to MongoDB and RabbitMQ independently. Two consumers run concurrently in the same worker process:
  • PostWorker — processes social media publishing jobs from the social_posts queue
  • AnalyticsWorker — processes analytics fetch jobs from the analytics_fetch queue
Prefetch is set to 1 per worker, so the broker delivers only one unacknowledged message to each consumer at a time: the worker finishes and acknowledges the current job before it receives the next.
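A sketch of that consumer setup, assuming a RABBITMQ_URL connection string and a hypothetical handlePostJob handler:

    // backend/src/workers/index.ts (sketch): both consumers share one process
    import amqp from "amqplib";

    async function handlePostJob(job: unknown): Promise<void> {
      // platform-specific posting logic lives in posting.worker.ts
    }

    async function startWorkers(): Promise<void> {
      const connection = await amqp.connect(process.env.RABBITMQ_URL as string);
      const channel = await connection.createChannel();

      // Deliver at most one unacknowledged message to this consumer at a time
      await channel.prefetch(1);

      await channel.consume("social_posts", async (msg) => {
        if (!msg) return;
        try {
          await handlePostJob(JSON.parse(msg.content.toString()));
          channel.ack(msg);
        } catch {
          channel.nack(msg, false, false); // no requeue: let the DLX handle it
        }
      });

      // The AnalyticsWorker consumes analytics_fetch on the same channel in the same way
    }

    startWorkers().catch((err) => console.error("worker startup failed", err));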

Data flow: from post creation to publishing

The sequence below shows the full lifecycle of a post, from user action to platform publication:

Step-by-step

  1. Media upload — The user attaches an image. The frontend sends it to POST /api/posts/media-upload. The backend uploads it to AWS S3 and returns the object URL.
  2. Caption generation (optional) — The user clicks Generate Caption. The backend calls the Google Gemini API and streams back platform-appropriate suggestions.
  3. Post submission — The user submits the post form. For each selected platform, the backend creates a job message and publishes it to RabbitMQ (a publish sketch follows this list).
    • Immediate posts → POST_EXCHANGE (topic, routing key post.create.<platform>)
    • Scheduled posts → POST_DELAYED_EXCHANGE (x-delayed-message, same routing key with an x-delay header in milliseconds)
  4. Queue routing — Both exchanges route matching messages to the social_posts queue via the post.create.* binding.
  5. Worker processing — The PostWorker dequeues a message and:
    a. Checks if the post was cancelled (skip and ACK if so)
    b. Checks for duplicate delivery (idempotency: if the platform already has status=completed, skip)
    c. Validates stored platform credentials
    d. Calls the platform-specific posting service
    e. Updates the per-platform status in MongoDB
  6. Notification — On success or permanent failure, the worker calls NotificationService.createNotification(), which emits a Socket.IO event to the user’s room and creates a Firebase push notification.
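A sketch of the publish call from step 3, using the exchange names above directly and an illustrative job payload:

    import type { Channel } from "amqplib";

    interface PostJob {
      postId: string;
      userId: string;
      platform: string;   // e.g. "bluesky", "mastodon"
      scheduledAt?: Date;  // present only for scheduled posts
    }

    function publishPostJob(channel: Channel, job: PostJob): void {
      const routingKey = `post.create.${job.platform}`;
      const body = Buffer.from(JSON.stringify(job));

      if (job.scheduledAt) {
        // Scheduled: the x-delayed-message exchange holds the message for x-delay milliseconds
        const delayMs = Math.max(0, job.scheduledAt.getTime() - Date.now());
        channel.publish("POST_DELAYED_EXCHANGE", routingKey, body, {
          persistent: true,
          headers: { "x-delay": delayMs },
        });
      } else {
        // Immediate: plain topic exchange, routed straight to social_posts
        channel.publish("POST_EXCHANGE", routingKey, body, { persistent: true });
      }
    }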

Background job processing

Hayon uses a carefully structured RabbitMQ topology to handle retries, dead letters, and scheduled delivery.

Queue topology

Exchanges

Exchange                Type                Purpose
POST_EXCHANGE           topic               Immediate post delivery
POST_DELAYED_EXCHANGE   x-delayed-message   Scheduled post delivery (requires plugin)
DLX_EXCHANGE            direct              Routes failed messages to dead letter or retry queues
ANALYTICS_EXCHANGE      topic               Analytics fetch job delivery

Queues

Queue             Purpose
social_posts      Main processing queue for all post jobs
analytics_fetch   Analytics data fetch jobs
retry_queue       Holds retryable messages with a TTL; routes back to POST_EXCHANGE on expiry
dead_letters      Permanent failures and unroutable messages for inspection
parking_lot       Messages that have exhausted all retry attempts
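A sketch of how this topology could be declared with amqplib; the queue arguments shown are illustrative, and the actual setup lives under backend/src/lib/:

    import type { Channel } from "amqplib";

    async function assertTopology(channel: Channel): Promise<void> {
      // Exchanges
      await channel.assertExchange("POST_EXCHANGE", "topic", { durable: true });
      await channel.assertExchange("POST_DELAYED_EXCHANGE", "x-delayed-message", {
        durable: true,
        arguments: { "x-delayed-type": "topic" }, // the plugin needs the underlying exchange type
      });
      await channel.assertExchange("DLX_EXCHANGE", "direct", { durable: true });
      await channel.assertExchange("ANALYTICS_EXCHANGE", "topic", { durable: true });

      // Queues
      await channel.assertQueue("social_posts", {
        durable: true,
        arguments: { "x-dead-letter-exchange": "DLX_EXCHANGE" }, // rejected messages go to the DLX
      });
      await channel.assertQueue("analytics_fetch", { durable: true });
      await channel.assertQueue("retry_queue", {
        durable: true,
        arguments: { "x-dead-letter-exchange": "POST_EXCHANGE" }, // expired messages re-enter POST_EXCHANGE
      });
      await channel.assertQueue("dead_letters", { durable: true });
      await channel.assertQueue("parking_lot", { durable: true });

      // Bindings
      await channel.bindQueue("social_posts", "POST_EXCHANGE", "post.create.*");
      await channel.bindQueue("social_posts", "POST_DELAYED_EXCHANGE", "post.create.*");
      await channel.bindQueue("analytics_fetch", "ANALYTICS_EXCHANGE", "analytics.fetch.*");
    }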

Retry logic

When a job fails, the worker checks two conditions before deciding whether to retry:
  1. Attempt count — fewer than 3 attempts recorded in the post’s platformStatuses sub-document
  2. Error type — the error is classified as retryable (rate-limit responses, network timeouts, ECONNRESET, ENOTFOUND)
If both conditions are met, the message is sent to retry_queue with a TTL (the delay grows with each attempt). When the TTL expires, RabbitMQ routes it back to POST_EXCHANGE for another processing attempt. After three failed attempts, the message goes to parking_lot and the post status is set to failed.
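A sketch of that retry decision, with hypothetical helpers for the attempt counter and error classification (the TTL growth is illustrative):

    import type { Channel, ConsumeMessage } from "amqplib";

    const MAX_ATTEMPTS = 3;
    const RETRYABLE_CODES = ["ECONNRESET", "ENOTFOUND", "ETIMEDOUT"];

    function isRetryable(err: { code?: string; status?: number }): boolean {
      return err.status === 429 || RETRYABLE_CODES.includes(err.code ?? "");
    }

    function handleFailure(
      channel: Channel,
      msg: ConsumeMessage,
      attempt: number, // read from the post's platformStatuses sub-document
      err: { code?: string; status?: number }
    ): void {
      if (attempt < MAX_ATTEMPTS && isRetryable(err)) {
        // Park in retry_queue with a growing TTL; on expiry it dead-letters back to POST_EXCHANGE
        const ttlMs = 30_000 * (attempt + 1);
        channel.sendToQueue("retry_queue", msg.content, {
          persistent: true,
          expiration: String(ttlMs),
          headers: { ...(msg.properties.headers ?? {}), "x-attempt": attempt + 1 },
        });
      } else {
        // Retries exhausted or the error is permanent: park it and mark the post as failed
        channel.sendToQueue("parking_lot", msg.content, { persistent: true });
      }
      channel.ack(msg); // the original delivery is finished either way
    }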
You can inspect parking_lot messages using the RabbitMQ Management UI (default: http://localhost:15672) to diagnose persistent failures without losing the original message payload.

Analytics jobs

A separate AnalyticsCronService runs on the backend server and periodically publishes messages to ANALYTICS_EXCHANGE with the routing key analytics.fetch.*. The AnalyticsWorker consumes these from the analytics_fetch queue and writes results back to MongoDB.

Real-time updates via WebSockets

Hayon uses Socket.IO (v4) for real-time post status updates and notifications.
Configuration: backend/src/config/socket.ts
Socket.IO is initialised on the same HTTP/HTTPS server as Express. Authentication is enforced in a middleware layer: every connecting client must provide a valid JWT access token in socket.handshake.auth.token. The token is verified against ACCESS_TOKEN_SECRET, and the decoded userId is stored in socket.data.user. Upon successful authentication, the socket is added to a private room named after the user’s MongoDB _id. This means the worker can target notifications to exactly one user by emitting to that room — no broadcast, no leakage between accounts.
Worker emits to room "<userId>"  →  only that user's connected clients receive the event
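A sketch of that server-side middleware and room join (error messages and payload shape are illustrative):

    // backend/src/config/socket.ts (sketch)
    import { Server } from "socket.io";
    import jwt from "jsonwebtoken";
    import type { Server as HttpServer } from "http";

    export function initSocket(httpServer: HttpServer): Server {
      const io = new Server(httpServer, {
        cors: { origin: process.env.FRONTEND_URL, credentials: true },
      });

      // Reject the connection unless a valid access token is presented
      io.use((socket, next) => {
        try {
          const decoded = jwt.verify(
            socket.handshake.auth.token,
            process.env.ACCESS_TOKEN_SECRET as string
          );
          if (typeof decoded === "string") throw new Error("unexpected token payload");
          socket.data.user = { userId: String(decoded.userId) };
          next();
        } catch {
          next(new Error("unauthorized"));
        }
      });

      io.on("connection", (socket) => {
        // Private room keyed by the user's MongoDB _id
        socket.join(socket.data.user.userId);
      });

      return io;
    }

A notification can then be delivered to a single user with io.to(userId).emit(...), which reaches only that user's connected clients, as in the diagram above.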
The frontend connects using socket.io-client (v4) with the access token attached. When a post status changes, the user sees the update in the Posts list without refreshing.
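On the frontend, the connection might look like this (the server URL variable, token accessor, and event name are assumptions):

    import { io } from "socket.io-client";

    declare const accessToken: string; // obtained from the auth flow (assumption)

    const socket = io(process.env.NEXT_PUBLIC_API_URL as string, {
      auth: { token: accessToken }, // checked by the server-side middleware above
      withCredentials: true,
    });

    // Post status updates pushed by the backend land here; the event name is illustrative
    socket.on("post:status", (update: unknown) => {
      console.log("post status changed", update);
    });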

External integrations

SDK: @aws-sdk/client-s3, @aws-sdk/s3-request-presigner
All user-uploaded media (images, etc.) is stored in a dedicated S3 bucket. The backend generates presigned URLs for uploads and returns the final object URL for inclusion in post payloads sent to social platforms.
Required variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, AWS_S3_BUCKET_NAME.
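A sketch of the presigned upload flow (object key naming and expiry are illustrative):

    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
    import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

    const s3 = new S3Client({ region: process.env.AWS_REGION });

    export async function createUploadUrl(key: string, contentType: string) {
      const command = new PutObjectCommand({
        Bucket: process.env.AWS_S3_BUCKET_NAME,
        Key: key,
        ContentType: contentType,
      });

      // Short-lived URL the client can PUT the file to directly
      const uploadUrl = await getSignedUrl(s3, command, { expiresIn: 300 });

      // Final object URL stored on the post and sent to the social platforms
      const objectUrl = `https://${process.env.AWS_S3_BUCKET_NAME}.s3.${process.env.AWS_REGION}.amazonaws.com/${key}`;

      return { uploadUrl, objectUrl };
    }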
SDK: stripe v19
Stripe handles all payment operations. The integration covers:
  • Checkout sessions — initiated by POST /api/payments/checkout to upgrade to Pro
  • Billing portal — lets users manage or cancel subscriptions via Stripe’s hosted UI
  • Webhooks — POST /api/payments/webhook receives Stripe lifecycle events (subscription created, updated, deleted) and updates the user’s plan in MongoDB
The webhook route receives the raw body before JSON parsing so that the stripe.webhooks.constructEvent() signature check passes.
Required variables: STRIPE_SECRET_KEY, STRIPE_PUBLISHABLE_KEY, STRIPE_WEBHOOK_SECRET, STRIPE_PRO_PRICE_ID.
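A sketch of the webhook handler and its signature check (plan-update logic omitted):

    import express from "express";
    import Stripe from "stripe";

    const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string);
    const router = express.Router();

    router.post("/webhook", express.raw({ type: "application/json" }), (req, res) => {
      let event: Stripe.Event;
      try {
        // Throws if the body was JSON-parsed or the signature header does not match
        event = stripe.webhooks.constructEvent(
          req.body,
          req.headers["stripe-signature"] as string,
          process.env.STRIPE_WEBHOOK_SECRET as string
        );
      } catch {
        res.status(400).send("invalid signature");
        return;
      }

      if (event.type === "customer.subscription.updated") {
        // update the user's plan in MongoDB (omitted)
      }
      res.json({ received: true });
    });

    export default router;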
SDK: @google/genai
The POST /api/generate endpoint accepts post content and calls the Gemini API to produce platform-specific caption suggestions. Usage is metered per user: Free plan users get 15 generations/month, Pro users get 30.
Required variable: GEMINI_API_KEY.
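A sketch of the underlying Gemini call (the model id and prompt are assumptions):

    import { GoogleGenAI } from "@google/genai";

    const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

    export async function generateCaption(postText: string, platform: string): Promise<string> {
      const response = await ai.models.generateContent({
        model: "gemini-2.0-flash", // assumed model id; the real one is configured server-side
        contents: `Write a short ${platform} caption for this post: ${postText}`,
      });
      return response.text ?? "";
    }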
Library: passport-google-oauth20
Passport.js orchestrates the Google OAuth 2.0 flow. On first login, a new user document is created in MongoDB with the Google ID and display name. On subsequent logins, only lastLogin is updated. If an email already exists under a different provider, the login is rejected with an explicit email_exists_different_provider error.
Required variables: GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, GOOGLE_CALLBACK_URL.
SDKs: firebase-admin (backend), firebase (frontend)
Firebase Cloud Messaging delivers push notifications to users’ browsers or mobile devices. The backend initialises the Admin SDK using a service account key file (serviceAccountKey.json). The frontend registers FCM tokens via POST /api/firebase; the tokens are stored against the user record and used when the worker dispatches a notification after publishing.
Each supported platform has a dedicated posting service and OAuth routes:
Platform   Protocol            Library / Approach
Bluesky    AT Protocol         @atproto/api
Facebook   Meta Graph API      axios + OAuth 2.0
Threads    Threads API         axios + Meta OAuth
Tumblr     Tumblr API v2       oauth-1.0a + axios
Mastodon   Mastodon REST API   axios + OAuth 2.0
Credentials (access tokens, refresh tokens) are stored per-user in MongoDB and retrieved by the worker before each posting attempt. If credentials are invalid, the job is ACKed immediately without retry — invalid credentials are a permanent failure condition.
Library: nodemailer
Config: backend/src/config/mailer.ts
Transactional emails (OTP verification, password reset) are sent through Gmail’s SMTP service. The transport is configured with EMAIL_USER and EMAIL_PASS (a Gmail app password, not your account password).
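A sketch of the Gmail transport (the subject line and body are illustrative):

    // backend/src/config/mailer.ts (sketch)
    import nodemailer from "nodemailer";

    export const mailer = nodemailer.createTransport({
      service: "gmail",
      auth: {
        user: process.env.EMAIL_USER,
        pass: process.env.EMAIL_PASS, // Gmail app password, not the account password
      },
    });

    export async function sendOtpEmail(to: string, otp: string): Promise<void> {
      await mailer.sendMail({
        from: process.env.EMAIL_USER,
        to,
        subject: "Your Hayon verification code",
        text: `Your one-time code is ${otp}. It expires shortly.`,
      });
    }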

Repository structure

hayon/
├── frontend/
│   ├── src/
│   │   ├── app/            # Next.js App Router (pages, layouts)
│   │   ├── components/     # Reusable UI components
│   │   ├── context/        # React context providers
│   │   ├── hooks/          # Custom React hooks
│   │   ├── lib/            # Utilities and helpers
│   │   ├── services/       # API client functions
│   │   └── types/          # TypeScript type definitions
│   ├── public/
│   ├── package.json
│   └── .env.local

├── backend/
│   ├── src/
│   │   ├── config/         # Database, Redis, RabbitMQ, Socket, env, plans
│   │   ├── controllers/    # Route handler functions
│   │   ├── routes/         # Express router definitions (+ platforms/)
│   │   ├── models/         # Mongoose schemas
│   │   ├── repositories/   # Data access layer
│   │   ├── interfaces/     # TypeScript interfaces
│   │   ├── middleware/     # Auth, error, rate-limit middleware
│   │   ├── services/       # Business logic (posting, notifications, cron)
│   │   ├── workers/        # posting.worker.ts, analytics.worker.ts
│   │   ├── ai/             # Gemini integration helpers
│   │   ├── lib/            # Queue types, DLX setup
│   │   ├── utils/          # Logger, response helpers
│   │   └── app.ts          # Express bootstrap
│   ├── package.json
│   └── .env

└── README.md
Frontend and backend share a @hayon/schemas workspace package (workspace:*) for Zod validation schemas. This is the only code shared between the two packages.
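A sketch of what a shared schema in that package might look like (the schema fields are illustrative):

    // @hayon/schemas (sketch): one definition, imported by both frontend and backend
    import { z } from "zod";

    export const createPostSchema = z.object({
      content: z.string().min(1).max(5000),
      platforms: z.array(z.enum(["bluesky", "facebook", "threads", "tumblr", "mastodon"])),
      mediaUrl: z.string().url().optional(),
      scheduledAt: z.coerce.date().optional(),
    });

    export type CreatePostInput = z.infer<typeof createPostSchema>;

    // Backend: createPostSchema.safeParse(req.body); Frontend: validate the form before submit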

Branch strategy

Hayon uses a three-branch promotion strategy to keep production stable:
feature/* → dev → staging → main
Branch    Purpose              Rules
main      Production           Always stable. No direct commits. Deployed to live.
staging   Pre-production       QA and testing environment. Mirrors production config.
dev       Active development   All feature branches merge here first.
Feature branches follow the naming convention:
feature/ai-captions
feature/post-scheduler
feature/stripe-integration
fix/auth-token-refresh
hotfix/critical-bug
Skipping the staging branch — merging directly from dev to main — is how bugs reach production. Auth and payment logic in particular must pass through staging before going live.

Logging and observability

The backend uses Winston for structured logging with two transports in production:
  • DailyRotateFile — writes JSON logs to disk with daily rotation
  • Logtail / Better Stack (@logtail/winston) — streams logs to Better Stack for real-time search and alerting
HTTP request logs are captured by Morgan and forwarded to Winston as structured JSON objects (method, endpoint, status code, response time, timestamp). The BETTER_STACK_TOKEN environment variable is required at startup.
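A sketch of the two production transports (the logger filename, rotation settings, and log level are assumptions):

    // backend/src/utils/logger.ts (sketch)
    import winston from "winston";
    import DailyRotateFile from "winston-daily-rotate-file";
    import { Logtail } from "@logtail/node";
    import { LogtailTransport } from "@logtail/winston";

    const logtail = new Logtail(process.env.BETTER_STACK_TOKEN as string);

    export const logger = winston.createLogger({
      level: "info",
      format: winston.format.json(),
      transports: [
        new DailyRotateFile({
          filename: "logs/app-%DATE%.log",
          datePattern: "YYYY-MM-DD",
          maxFiles: "14d", // keep two weeks of rotated files
        }),
        new LogtailTransport(logtail), // stream to Better Stack for search and alerting
      ],
    });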

Next steps

Quickstart

Follow the step-by-step setup guide to run Hayon locally.

Platform integrations

Connect Bluesky, Facebook, Threads, Tumblr, and Mastodon accounts.

Self-hosting

Deploy Hayon to your own infrastructure with the full self-hosting guide.

API reference

Explore the full REST API surface.
