AI Platforms & Chat UIs

Comprehensive AI platforms for building LLM applications, managing multiple models, and providing ChatGPT-like interfaces.

Available Services

AnythingLLM

Port: 3115 | Memory: 512 MB | Maturity: Stable

All-in-one Desktop & Docker AI application with built-in RAG, AI agents, document chat, and multi-user support.

Features:
  • Built-in RAG pipeline
  • AI agents support
  • Document chat
  • Multi-user workspace
  • Model agnostic
Recommends: Ollama

Documentation

Dify

Port: 3110 | Memory: 1024 MB | Maturity: Stable

Open-source LLM app development platform with a visual AI workflow builder, RAG pipeline, agent capabilities, and model management.

Features:
  • Visual workflow builder
  • RAG pipeline
  • Agent capabilities
  • Model management
  • API and SDK
Requires: PostgreSQL, Redis

Documentation

Flowise

Port: 3120 | Memory: 256 MB | Maturity: Stable

Drag & drop UI to build customized LLM flows, chatbots, and AI agents visually.

Features:
  • Drag-and-drop flow builder
  • Pre-built components
  • Custom integrations
  • LangChain support
  • API endpoints
Documentation

LibreChat

Port: 3090 | Memory: 512 MB | Maturity: Stable

Enhanced ChatGPT clone supporting multiple AI providers (Claude, GPT, Gemini, local models) with agents and a code interpreter.

Features:
  • Multi-provider support
  • ChatGPT-like interface
  • Agents and plugins
  • Code interpreter
  • Conversation management
Documentation

LiteLLM Proxy

Port: 4000 | Memory: 256 MB | Maturity: Stable

Unified gateway for 100+ LLM providers with load balancing, fallbacks, spend tracking, and caching.

Features:
  • 100+ provider support
  • Load balancing
  • Automatic fallbacks
  • Spend tracking
  • Response caching
  • OpenAI-compatible API
Requires: LITELLM_MASTER_KEY

Documentation
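The gateway is driven by a config file that maps client-facing model aliases to upstream providers. A minimal sketch, assuming a hosted OpenAI model alongside a local Ollama model (the alias names, the `llama3` model tag, and the Ollama address are illustrative assumptions, not defaults of this stack):

```yaml
# Minimal LiteLLM proxy config sketch (model names and addresses are assumptions)
model_list:
  - model_name: gpt-4o                  # alias that clients request
    litellm_params:
      model: openai/gpt-4o              # provider/model routed to
      api_key: os.environ/OPENAI_API_KEY
  - model_name: local-llama             # second alias served by local Ollama
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
```

Listing several entries under the same `model_name` is how LiteLLM load-balances across deployments; clients then call the proxy's OpenAI-compatible API on port 4000 with the `LITELLM_MASTER_KEY` as the bearer token.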

Open WebUI

Port: 3080 | Memory: 256 MB | Maturity: Stable

ChatGPT-like web interface for Ollama and other LLM providers, with RAG, web search, and multi-user support.

Features:
  • ChatGPT-like UI
  • Ollama integration
  • RAG support
  • Web search
  • Multi-user
  • Model management
Recommends: Ollama

Documentation

Usage Examples

Add AI platforms to your stack

npx create-better-openclaw --services dify,postgresql,redis --yes

Use the AI Playground preset

npx create-better-openclaw --preset ai-playground --yes

Combine multiple platforms

npx create-better-openclaw --services open-webui,litellm,ollama,flowise --yes

Platform Comparison

Platform     | Primary Use Case     | RAG | Agents | Visual Builder | Memory
-------------|----------------------|-----|--------|----------------|--------
AnythingLLM  | Document chat & RAG  | ✓   | ✓      | –              | 512 MB
Dify         | Full platform        | ✓   | ✓      | ✓              | 1024 MB
Flowise      | Visual flow building | –   | ✓      | ✓              | 256 MB
LibreChat    | Chat interface       | –   | ✓      | –              | 512 MB
LiteLLM      | API gateway          | –   | –      | –              | 256 MB
Open WebUI   | Ollama interface     | ✓   | –      | –              | 256 MB

✓/– reflect the feature lists above.

Architecture Patterns

Full RAG Stack

npx create-better-openclaw \
  --services dify,postgresql,redis,qdrant,meilisearch \
  --yes

Multi-Model Gateway

npx create-better-openclaw \
  --services litellm,ollama,open-webui \
  --yes

Visual Workflow Platform

npx create-better-openclaw \
  --services flowise,qdrant,redis \
  --yes

Integration Tips

  1. LiteLLM as Gateway: Use LiteLLM to unify access to multiple providers
  2. Ollama for Local Models: Pair platforms with Ollama for local inference
  3. RAG Requirements: Most platforms work best with vector databases (Qdrant, ChromaDB)
  4. Caching: Add Redis for improved performance
  5. Multi-User: Configure authentication and user management for production
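Tips 1 and 2 combine naturally: a chat UI can treat LiteLLM as its only "provider" and reach every upstream model through it. A sketch of that wiring as a Compose override, assuming the service names `open-webui` and `litellm` from this stack's defaults (the exact service names in your generated compose file may differ):

```yaml
# Hypothetical compose override: route Open WebUI through the LiteLLM gateway
services:
  open-webui:
    environment:
      # Open WebUI's OpenAI-compatible backend settings
      - OPENAI_API_BASE_URL=http://litellm:4000/v1
      - OPENAI_API_KEY=${LITELLM_MASTER_KEY}
```

With this in place, models added to the LiteLLM config appear in Open WebUI without touching the UI container, and spend tracking and fallbacks apply to every chat.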