Skill packs are collections of OpenClaw skills that are automatically wired to the services in your stack. When you add a service, its associated skills are generated with the correct connection details.

## What is a skill pack?

A skill pack is a bundle of agent skills designed for a specific use case. Each skill pack:
  • Groups related skills (e.g., video processing, vector search, caching)
  • Requires specific services to function
  • Generates configuration with connection details from your stack
  • Provides ready-to-use templates for AI agents

## Available skill packs

The better-openclaw repository includes 10+ skill packs:

### Video Creator

**Services:** FFmpeg, Remotion, MinIO
**Skills:** Video processing, rendering, storage

Create and process videos programmatically.

### Research Agent

**Services:** Qdrant, SearXNG, Browserless
**Skills:** Vector search, web scraping, meta search

Web research with semantic memory.

### Social Media

**Services:** FFmpeg, Redis, MinIO
**Skills:** Media processing, caching, storage

Content processing and publishing.

### DevOps

**Services:** n8n, Redis, Uptime Kuma, Grafana, Prometheus
**Skills:** Workflow automation, monitoring, alerting

Infrastructure automation and observability.

### Knowledge Base

**Services:** Qdrant, PostgreSQL, Meilisearch
**Skills:** Vector search, full-text search, storage

Document indexing and retrieval.

### Local AI

**Services:** Ollama, Whisper
**Skills:** LLM inference, speech-to-text

Run AI models locally without APIs.

### Content Creator

**Services:** FFmpeg, Remotion, MinIO, Stable Diffusion
**Skills:** Video processing, image generation, storage

AI-powered content creation.

### AI Playground

**Services:** Ollama, Open WebUI, Qdrant, LiteLLM
**Skills:** Multi-model chat, vector memory, gateway

Experiment with multiple AI models.

### Coding Team

**Services:** Claude Code, Codex, Redis, PostgreSQL
**Skills:** Code generation, shared state, storage

AI coding agents with collaboration.

### Knowledge Hub

**Services:** Outline, Qdrant, Meilisearch, PostgreSQL
**Skills:** Wiki management, vector search, full-text search

Team knowledge base with AI search.

## How skills work

Each skill is a Markdown file with Handlebars templates that get compiled with your stack's configuration:

````markdown
---
name: redis-cache
description: "Cache data via Redis at {{REDIS_HOST}}:{{REDIS_PORT}}"
metadata:
  openclaw:
    emoji: "🔴"
---

# Redis Cache Skill

Redis is available at `{{REDIS_HOST}}:{{REDIS_PORT}}`.

## Caching a value

```bash
redis-cli -h {{REDIS_HOST}} -p {{REDIS_PORT}} -a $REDIS_PASSWORD SET mykey "myvalue" EX 3600
```
````
When you generate a stack with Redis, the `{{REDIS_HOST}}` and `{{REDIS_PORT}}` placeholders are replaced with actual values:

````markdown
Redis is available at `redis:6379`.

## Caching a value

```bash
redis-cli -h redis -p 6379 -a $REDIS_PASSWORD SET mykey "myvalue" EX 3600
```
````
## Skill structure

Skills are stored in the `skills/` directory:

```
skills/
├── redis-cache/
│   └── SKILL.md        # Redis caching operations
├── qdrant-memory/
│   └── SKILL.md        # Vector storage and search
├── n8n-trigger/
│   └── SKILL.md        # Workflow automation
├── ffmpeg-process/
│   └── SKILL.md        # Video processing
└── manifest.json       # Skill registry
```

## Skill manifest

The `manifest.json` file maps skills to services:

```json
{
  "version": "1.0.0",
  "description": "OpenClaw skill templates",
  "skills": [
    {
      "id": "redis-cache",
      "path": "redis-cache/SKILL.md",
      "emoji": "🔴",
      "services": ["redis"]
    },
    {
      "id": "qdrant-memory",
      "path": "qdrant-memory/SKILL.md",
      "emoji": "🧠",
      "services": ["qdrant"]
    },
    {
      "id": "n8n-trigger",
      "path": "n8n-trigger/SKILL.md",
      "emoji": "⚡",
      "services": ["n8n"]
    }
  ]
}
```

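A minimal sketch of how a manifest like the one above can drive installation: only skills whose required services are all present in the selected stack get installed. The `resolve_skills` helper is hypothetical, not the repository's actual API.

```python
import json

# Hypothetical helper: given a manifest and the services selected for a
# stack, return the skill entries whose required services are all present.
def resolve_skills(manifest: dict, selected_services: set[str]) -> list[dict]:
    return [
        skill
        for skill in manifest["skills"]
        if set(skill["services"]) <= selected_services
    ]

manifest = json.loads("""
{
  "version": "1.0.0",
  "skills": [
    {"id": "redis-cache", "path": "redis-cache/SKILL.md", "services": ["redis"]},
    {"id": "qdrant-memory", "path": "qdrant-memory/SKILL.md", "services": ["qdrant"]}
  ]
}
""")

installed = resolve_skills(manifest, {"redis"})
print([s["id"] for s in installed])  # ['redis-cache']
```

Because `services` is a list, a skill that depends on two services (say, FFmpeg and MinIO) is only installed when both are in the stack.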
## Automatic skill installation

When you add a service, its skills are automatically included. The service definition declares which skills to install:

```typescript
export const redisDefinition: ServiceDefinition = {
  id: "redis",
  name: "Redis",
  // ...
  skills: [{ skillId: "redis-cache", autoInstall: true }],
  // ...
};
```
During stack generation:

1. **Service added**: you select Redis for your stack.
2. **Skill detected**: the generator sees `skills: [{ skillId: "redis-cache" }]`.
3. **Template compiled**: the `redis-cache/SKILL.md` template is compiled with your Redis configuration.
4. **Skill installed**: the compiled skill is written to `skills/redis-cache.md` in your output directory.
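The compile-and-write steps can be sketched as follows. This is a simplified stand-in for the real generator: plain `{{VAR}}` substitution is all it handles (actual Handlebars supports helpers, conditionals, and more), and the function names are illustrative.

```python
import re
import tempfile
from pathlib import Path

# Simplified stand-in for the generator: replace {{VAR}} placeholders with
# values from the stack configuration, leaving unknown placeholders intact.
def compile_skill(template: str, env: dict[str, str]) -> str:
    return re.sub(r"\{\{(\w+)\}\}", lambda m: env.get(m.group(1), m.group(0)), template)

template = "Redis is available at `{{REDIS_HOST}}:{{REDIS_PORT}}`."
compiled = compile_skill(template, {"REDIS_HOST": "redis", "REDIS_PORT": "6379"})
print(compiled)  # Redis is available at `redis:6379`.

# Final step: write the compiled skill into the output directory
out = Path(tempfile.mkdtemp())
(out / "redis-cache.md").write_text(compiled)
```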

## Skill template variables

Skills can reference any environment variable from your stack:

| Variable | Example | Source |
| --- | --- | --- |
| `{{REDIS_HOST}}` | `redis` | Service hostname |
| `{{REDIS_PORT}}` | `6379` | Service port |
| `{{REDIS_PASSWORD}}` | `$REDIS_PASSWORD` | Environment variable |
| `{{QDRANT_HOST}}` | `qdrant` | Service hostname |
| `{{N8N_WEBHOOK_URL}}` | `http://n8n:5678/` | Service configuration |

## Example: Qdrant memory skill

Here's a real skill from the codebase:

````markdown
---
name: qdrant-memory
description: "Store and search vector embeddings via Qdrant at {{QDRANT_HOST}}:{{QDRANT_PORT}}"
metadata:
  openclaw:
    emoji: "🧠"
---

# Qdrant Memory Skill

Qdrant vector database is available at `http://{{QDRANT_HOST}}:{{QDRANT_PORT}}`.

## Creating a collection

```bash
curl -X PUT "http://{{QDRANT_HOST}}:{{QDRANT_PORT}}/collections/openclaw_memory" \
  -H "Content-Type: application/json" \
  -d '{
    "vectors": {
      "size": 1536,
      "distance": "Cosine"
    }
  }'
```

## Searching for similar vectors

```bash
curl -X POST "http://{{QDRANT_HOST}}:{{QDRANT_PORT}}/collections/openclaw_memory/points/search" \
  -H "Content-Type: application/json" \
  -d '{
    "vector": [0.2, 0.1, 0.9, ...],
    "limit": 5,
    "with_payload": true
  }'
```

## Tips for AI agents

- Match the vector size to your embedding model (1536 for OpenAI, 384 for MiniLM)
- Always include descriptive payload fields for actionable search results
- Use `with_vector: false` when scrolling to reduce response size
- Check health with `curl http://{{QDRANT_HOST}}:{{QDRANT_PORT}}/healthz`
```` 
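The first tip matters because Qdrant rejects points whose dimension differs from the collection's configured `size`. A minimal client-side guard, purely illustrative, might look like this:

```python
# Fail fast when an embedding's dimension does not match the collection's
# configured vector size (1536 in the collection-creation example above).
COLLECTION_SIZE = 1536

def check_embedding(vector: list[float], expected_size: int = COLLECTION_SIZE) -> list[float]:
    if len(vector) != expected_size:
        raise ValueError(
            f"embedding has {len(vector)} dimensions, collection expects {expected_size}"
        )
    return vector

check_embedding([0.0] * 1536)      # ok
# check_embedding([0.0] * 384)     # would raise ValueError (MiniLM-sized vector)
```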

## Using skills with OpenClaw agents

Once skills are generated, OpenClaw agents can reference them:

**Python agent:**

```python
import os
from openclaw import load_skill

# Load the compiled skill
redis_skill = load_skill("redis-cache")

# The skill contains connection details
print(redis_skill.host)  # "redis"
print(redis_skill.port)  # 6379
```
**TypeScript agent:**

```typescript
import { loadSkill } from '@openclaw/sdk';

const qdrantSkill = await loadSkill('qdrant-memory');

// Access compiled skill content
console.log(qdrantSkill.description);
console.log(qdrantSkill.examples);
```
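Under the hood, loading a compiled skill amounts to reading a Markdown file and splitting its frontmatter from the body. The sketch below is a naive, line-based stand-in for what the SDKs do; the real `loadSkill` implementations expose richer objects and proper YAML parsing.

```python
import tempfile
from pathlib import Path

# Naive stand-in for a skill loader: split the "---"-delimited frontmatter
# from the Markdown body and parse flat "key: value" lines only.
def load_skill_file(path: Path) -> tuple[dict[str, str], str]:
    _, frontmatter, body = path.read_text().split("---", 2)
    meta: dict[str, str] = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        if value.strip():
            meta[key.strip()] = value.strip().strip('"')
    return meta, body.strip()

sample = '''---
name: redis-cache
description: "Cache data via Redis at redis:6379"
---

# Redis Cache Skill
'''
path = Path(tempfile.mkdtemp()) / "redis-cache.md"
path.write_text(sample)

meta, body = load_skill_file(path)
print(meta["name"])  # redis-cache
```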

## Skill best practices

- Show agents exactly how to connect to services with curl, CLI commands, or SDK snippets.
- Document common errors and how to debug them (e.g., authentication failures, network issues).
- Prefix keys and collections with `openclaw` to avoid conflicts (e.g., `openclaw:cache:*`, `openclaw_memory`).
- Warn about memory usage, rate limits, or quota constraints.
- Show how to verify service availability before running complex operations.
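The namespacing practice above can be captured in a tiny helper; `namespaced` is an illustrative name, not part of any OpenClaw API:

```python
# Prefix cache keys with "openclaw:" so they never collide with other
# users of the same Redis instance; already-prefixed keys pass through.
def namespaced(key: str, prefix: str = "openclaw") -> str:
    return key if key.startswith(prefix + ":") else f"{prefix}:{key}"

print(namespaced("cache:user:42"))           # openclaw:cache:user:42
print(namespaced("openclaw:cache:user:42"))  # openclaw:cache:user:42 (unchanged)
```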

## Creating custom skills

You can add your own skills to the repository:

1. **Create skill directory**

   ```bash
   mkdir skills/my-custom-skill
   ```

2. **Write skill template**

   Create `skills/my-custom-skill/SKILL.md` with Handlebars placeholders:

   ```markdown
   ---
   name: my-custom-skill
   description: "Custom skill for {{SERVICE_NAME}}"
   ---

   # My Custom Skill

   Connect to {{SERVICE_HOST}}:{{SERVICE_PORT}}
   ```

3. **Register in manifest**

   Add to `skills/manifest.json`:

   ```json
   {
     "id": "my-custom-skill",
     "path": "my-custom-skill/SKILL.md",
     "emoji": "🎯",
     "services": ["my-service"]
   }
   ```

4. **Link to service**

   Update your service definition:

   ```typescript
   skills: [{ skillId: "my-custom-skill", autoInstall: true }]
   ```

## Related concepts

Learn how services and skills are connected
