Welcome to Open Higgsfield AI

Open Higgsfield AI is an ambitious open-source project dedicated to replicating the full functionality of the Higgsfield platform. It’s a free, self-hosted AI image, video, and cinema studio that brings Higgsfield-style creative workflows to everyone.

What is Open Higgsfield AI?

Open Higgsfield AI is a comprehensive creative toolkit powered by Muapi.ai that provides access to over 200 state-of-the-art AI models for:
  • Text-to-Image Generation — 50+ models including Flux Dev, Nano Banana 2, Seedream 5.0, Ideogram v3, Midjourney v7
  • Image-to-Image Transformation — 55+ models with support for up to 14 reference images
  • Text-to-Video Generation — 40+ models including Kling v3, Sora 2, Veo 3, Wan 2.6, Seedance 2.0
  • Image-to-Video Animation — 60+ models for animating still images into video
  • Cinema Studio — Professional camera controls with lens, focal length, and aperture settings

Key Features

  • Multi-Studio Interface — Separate studios for images, videos, and cinematic generation with intelligent mode switching
  • Multi-Image Input — Upload up to 14 reference images for compatible models, with batch upload support
  • Smart Controls — Dynamic aspect-ratio, resolution, quality, and duration pickers that adapt to each model
  • Generation History — Browse, revisit, and download all past generations, persisted in browser storage
  • Upload History — Reuse previously uploaded reference images across sessions without re-uploading
  • Responsive Design — Dark glassmorphism UI that works seamlessly on desktop and mobile devices

Why Open Higgsfield AI?

Choosing Open Higgsfield AI over proprietary alternatives gives you:
  • Free & Open Source — No subscription fees, no vendor lock-in. MIT-licensed code you can modify and extend.
  • Self-Hosted — Your data stays on your machine; API keys are stored locally in browser localStorage.
  • 200+ Models — Access to the latest text-to-image, image-to-image, text-to-video, and image-to-video models.
  • Extensible Architecture — Built with vanilla JavaScript and Vite. Easy to add your own models and customize the UI.

Comparison with Higgsfield AI

| Feature           | Higgsfield AI      | Open Higgsfield AI           |
| ----------------- | ------------------ | ---------------------------- |
| Cost              | Subscription-based | Free (open-source)           |
| Models            | Proprietary        | 200+ open & commercial       |
| Multi-image input | Limited            | Up to 14 images per request  |
| Self-hosting      | No                 | Yes                          |
| Customizable      | No                 | Fully hackable               |
| Data privacy      | Cloud-based        | Your data stays local        |
| Source code       | Closed             | MIT licensed                 |

Architecture Overview

Open Higgsfield AI follows a component-based architecture using vanilla JavaScript, where each component is a function that returns a DOM element.
src/
├── components/
│   ├── ImageStudio.js     # Dual-mode t2i/i2i studio
│   ├── VideoStudio.js     # Dual-mode t2v/i2v studio
│   ├── CinemaStudio.js    # Pro cinema controls
│   ├── UploadPicker.js    # Multi-image upload
│   ├── CameraControls.js  # Camera settings picker
│   ├── Header.js          # Navigation & settings
│   ├── AuthModal.js       # API key input
│   └── SettingsModal.js   # Settings management
├── lib/
│   ├── muapi.js          # API client for Muapi.ai
│   ├── models.js         # 200+ model definitions
│   └── uploadHistory.js  # Upload history storage
└── styles/
    ├── global.css        # Global styles & animations
    ├── studio.css        # Studio-specific styles
    └── variables.css     # CSS custom properties
The project uses Vite for the build system, Tailwind CSS v4 for styling, and communicates with the Muapi.ai API for model inference.
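To make the component pattern concrete, here is a minimal sketch of a component in the style described above: a plain function that builds and returns a DOM element. The `ResultCard` name, class name, and fields are illustrative assumptions, not the project's actual code.

```javascript
// Hypothetical component sketch: a function that returns a DOM element.
// Names (ResultCard, result-card) are illustrative, not from the repo.
function ResultCard({ url, prompt }) {
  const card = document.createElement('figure');
  card.className = 'result-card';

  const img = document.createElement('img');
  img.src = url;
  img.alt = prompt;

  const caption = document.createElement('figcaption');
  caption.textContent = prompt;

  card.append(img, caption);
  return card;
}

// Usage: parent.append(ResultCard({ url, prompt }));
```

Because components are just functions over DOM nodes, composing a studio view is ordinary function calls and `append` — no framework runtime or virtual DOM involved.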

Tech Stack

  • Vite — Lightning-fast build tool and development server
  • Vanilla JavaScript — No framework overhead, pure DOM manipulation
  • Tailwind CSS v4 — Utility-first CSS with custom design tokens
  • Muapi.ai — Unified API gateway for 200+ AI models

How It Works

Open Higgsfield AI uses a simple but powerful architecture:
  1. Component-Based UI — Each studio (Image, Video, Cinema) is a standalone component that manages its own state
  2. API Client Layer — The muapi.js client handles authentication, request submission, and polling for results
  3. Model Definitions — The models.js file contains metadata for all 200+ models, including endpoints, supported parameters, and input schemas
  4. Local Storage — API keys, generation history, and upload history are stored locally in the browser
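The local-storage step can be sketched as a pair of helpers around `localStorage`. The key name and entry shape below are assumptions for illustration; the real app may use different ones.

```javascript
// Sketch of browser-local persistence for generation history.
// The storage key and entry fields are hypothetical.
const HISTORY_KEY = 'ohf:generationHistory';

function loadHistory() {
  try {
    return JSON.parse(localStorage.getItem(HISTORY_KEY)) ?? [];
  } catch {
    return []; // corrupted or missing entry: start fresh
  }
}

function saveGeneration(entry) {
  const history = loadHistory();
  history.unshift({ ...entry, createdAt: Date.now() }); // newest first
  localStorage.setItem(HISTORY_KEY, JSON.stringify(history));
  return history;
}
```

Keeping everything in `localStorage` means history survives page reloads but never leaves the browser, which is what makes the self-hosted privacy claim work.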

API Integration Pattern

The app communicates with Muapi.ai using a two-step pattern:
  1. Submit — Send a POST request to /api/v1/{model-endpoint} with the prompt and parameters. The response contains a request_id.
  2. Poll — Make repeated GET requests to /api/v1/predictions/{request_id}/result until the status is completed, succeeded, or failed.
  3. Display — Extract the image or video URL from the response and display it in the studio interface.
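The submit-then-poll pattern can be sketched as a single async function. The URL paths follow the steps above; the auth header name, the injectable `fetchFn` option, and the response fields beyond `request_id` and `status` are assumptions for illustration.

```javascript
// Minimal sketch of the two-step submit/poll pattern.
// Header name and option names are hypothetical.
async function generate(endpoint, payload, apiKey, {
  baseUrl = '/api/v1',
  pollMs = 2000,
  fetchFn = fetch, // injectable so the flow can be tested without a network
} = {}) {
  // Step 1: submit the job and receive a request_id.
  const submitRes = await fetchFn(`${baseUrl}/${endpoint}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'x-api-key': apiKey },
    body: JSON.stringify(payload),
  });
  const { request_id } = await submitRes.json();

  // Step 2: poll the result endpoint until a terminal status.
  while (true) {
    const res = await fetchFn(`${baseUrl}/predictions/${request_id}/result`);
    const data = await res.json();
    if (data.status === 'completed' || data.status === 'succeeded') return data;
    if (data.status === 'failed') throw new Error('Generation failed');
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```

Step 3 (Display) is then just reading the media URL off the returned object and appending an `<img>` or `<video>` element to the studio.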
During development, Vite proxies /api requests to https://api.muapi.ai to avoid CORS issues. In production, requests go directly to the Muapi API.
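The dev-time proxy described above corresponds to a `server.proxy` entry in the Vite config; a minimal sketch (the project's actual `vite.config.js` may differ):

```javascript
// vite.config.js — proxy /api to Muapi.ai in development to avoid CORS.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    proxy: {
      '/api': {
        target: 'https://api.muapi.ai',
        changeOrigin: true, // rewrite the Host header to the target
      },
    },
  },
});
```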

Current State & Future Direction

The Image Studio and Video Studio are fully operational, featuring:
  • Premium dark-mode glassmorphism UI
  • Generation and upload history management
  • Multi-model support with dynamic control adaptation
  • Multi-image input for compatible models
The architecture is designed to scale for:
  • Model training interfaces
  • Advanced editing tools (in-painting, out-painting)
  • User accounts and cloud sync
  • Custom model integration

Next Steps

Ready to get started? Follow our Quickstart guide to clone the repository, install dependencies, and generate your first AI image or video. For detailed installation instructions and troubleshooting, see the Installation guide.
