The AT Protocol moderation system helps you handle content safety across various contexts. It combines moderator labels, user preferences, muting, blocking, and custom rules into a unified API.

What is Moderation?

The moderation system handles:
  • Labels: Applied by moderators or self-labeled by authors
  • Muting: User-initiated muting of accounts or keywords
  • Blocking: User-initiated blocking of accounts
  • Mutelists: Shared lists of accounts to mute
  • Blocklists: Shared lists of accounts to block
  • Hidden posts: User-specific hidden content
  • Mute words: Keyword filtering

Core Concepts

Moderation Context

Every moderation decision requires context:
import { ModerationOpts } from '@atproto/api'

const moderationOpts: ModerationOpts = {
  // The logged-in user's DID
  userDid: agent.did,
  
  // User's moderation preferences
  moderationPrefs: prefs.moderationPrefs,
  
  // Label definitions from labelers
  labelDefs: labelDefs
}

UI Contexts

Moderation decisions vary by context:
  • profileList: Profiles in search results or follower lists
  • profileView: A profile being viewed directly
  • avatar: User avatars
  • banner: User banners
  • displayName: User display names
  • contentList: Content in feeds (posts, lists, generators)
  • contentView: Content being viewed directly
  • contentMedia: Media embedded in content
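The context strings above are passed to `mod.ui(...)` to ask for a decision tailored to where content appears. As a sketch of how an app might route surfaces to contexts (the surface names here are our own invention; only the context strings come from the SDK):

```typescript
// Hypothetical helper: pick the moderation context for a UI surface.
// The Surface names are illustrative; the returned strings are SDK contexts.
type Surface = 'feed' | 'search' | 'thread' | 'embed'

function contextFor(surface: Surface): 'contentList' | 'contentView' | 'contentMedia' {
  switch (surface) {
    case 'feed':
    case 'search':
      // Content appearing in a list (timeline, search results)
      return 'contentList'
    case 'thread':
      // Content the user opened directly
      return 'contentView'
    case 'embed':
      // Media embedded inside other content
      return 'contentMedia'
  }
}
```

The same post can yield different decisions in different contexts, e.g. blurred in a feed but viewable when opened directly.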

Getting Started

Fetch User Preferences

Get the user’s moderation preferences:
import { Agent } from '@atproto/api'

const agent = new Agent(session)
const prefs = await agent.getPreferences()

// Access moderation preferences
const moderationPrefs = prefs.moderationPrefs
console.log('Adult content enabled:', moderationPrefs.adultContentEnabled)
console.log('Label preferences:', moderationPrefs.labels)
console.log('Subscribed labelers:', moderationPrefs.labelers)

Get Label Definitions

Fetch custom label definitions from labelers:
const labelDefs = await agent.getLabelDefinitions(prefs)

// labelDefs is a map: labelerDid => definitions[]
console.log(labelDefs)
Label definitions should be cached (TTL of 6 hours is recommended) to avoid excessive API calls.

Configure App Labelers

Set the default labelers for your application:
import { Agent } from '@atproto/api'

Agent.configure({
  appLabelers: ['did:plc:your-labeler-did']
})

// Now all Agent instances will use this labeler
const agent = new Agent(session)

Moderating Content

Moderating Posts

import { moderatePost } from '@atproto/api'

const prefs = await agent.getPreferences()
const labelDefs = await agent.getLabelDefinitions(prefs)

const postMod = moderatePost(postView, {
  userDid: agent.did,
  moderationPrefs: prefs.moderationPrefs,
  labelDefs
})

// Check if post should be filtered from feed
if (postMod.ui('contentList').filter) {
  continue // Skip this post
}

// Check if post should be blurred
if (postMod.ui('contentList').blur) {
  const reason = postMod.ui('contentList').blurs[0]
  console.log('Blur reason:', reason)
  
  // Check if user can override the blur
  if (postMod.ui('contentList').noOverride) {
    // Must stay blurred
  }
}

// Check for alerts (warnings)
if (postMod.ui('contentList').alert) {
  for (const alert of postMod.ui('contentList').alerts) {
    console.log('Alert:', alert)
  }
}

// Check for informational notices
if (postMod.ui('contentList').inform) {
  for (const inform of postMod.ui('contentList').informs) {
    console.log('Info:', inform)
  }
}

Moderating Profiles

import { moderateProfile } from '@atproto/api'

const profileMod = moderateProfile(profileView, {
  userDid: agent.did,
  moderationPrefs: prefs.moderationPrefs,
  labelDefs
})

// In a profile list (search results, followers)
if (profileMod.ui('profileList').filter) {
  continue // Skip this profile
}

if (profileMod.ui('profileList').blur) {
  // Blur the profile card
}

// On a profile page
if (profileMod.ui('profileView').alert) {
  // Show warning banner
}

// For avatars specifically
if (profileMod.ui('avatar').blur) {
  // Blur or hide avatar
}

// For display names
if (profileMod.ui('displayName').blur) {
  // Show as "[Blocked]" or similar
}

Other Content Types

import {
  moderateNotification,
  moderateFeedGen,
  moderateUserList
} from '@atproto/api'

// Moderate notifications
const notifMod = moderateNotification(notification, moderationOpts)

// Moderate feed generators
const feedMod = moderateFeedGen(generator, moderationOpts)

// Moderate user lists
const listMod = moderateUserList(list, moderationOpts)

Understanding Moderation Results

ModerationUI Interface

Each moderation check returns a ModerationUI object:
interface ModerationUI {
  filter: boolean      // Should content be removed?
  blur: boolean        // Should content be behind a cover?
  alert: boolean       // Should a warning be shown? (negative)
  inform: boolean      // Should an info notice be shown? (neutral)
  noOverride: boolean  // If blur=true, can user override?
  
  // Details for each flag:
  filters: ModerationCause[]
  blurs: ModerationCause[]
  alerts: ModerationCause[]
  informs: ModerationCause[]
}
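These flags have an implied precedence: filter removes content outright, blur covers it (with noOverride forbidding reveal), alert warns, and inform annotates. One way to make that precedence explicit in application code (this helper is our own, not part of @atproto/api):

```typescript
// Hypothetical helper: collapse ModerationUI flags into one render mode.
// Mirrors the precedence described above; not part of @atproto/api.
interface ModerationUILike {
  filter: boolean
  blur: boolean
  alert: boolean
  inform: boolean
  noOverride: boolean
}

type RenderMode = 'hide' | 'blur-no-override' | 'blur' | 'warn' | 'inform' | 'show'

function renderMode(ui: ModerationUILike): RenderMode {
  if (ui.filter) return 'hide' // strongest: drop from the view entirely
  if (ui.blur) return ui.noOverride ? 'blur-no-override' : 'blur'
  if (ui.alert) return 'warn' // negative warning, content still visible
  if (ui.inform) return 'inform' // neutral notice
  return 'show'
}
```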

Moderation Causes

Each cause explains why a moderation action was taken:
const postMod = moderatePost(post, opts)
const ui = postMod.ui('contentList')

if (ui.blur) {
  for (const cause of ui.blurs) {
    console.log('Type:', cause.type)
    console.log('Priority:', cause.priority)
    
    // Different cause types:
    if (cause.type === 'label') {
      console.log('Label:', cause.labelDef)
    } else if (cause.type === 'blocking') {
      console.log('You blocked this user')
    } else if (cause.type === 'muted') {
      console.log('You muted this user')
    } else if (cause.type === 'mute-word') {
      console.log('Matched mute word:', cause.word)
    }
  }
}

Implementing Moderation UI

Feed Item Example

import { moderatePost } from '@atproto/api'

function FeedItem({ post, moderationOpts }) {
  const mod = moderatePost(post, moderationOpts)
  const ui = mod.ui('contentList')
  
  // Don't show filtered content
  if (ui.filter) {
    return null
  }
  
  // Content with blur cover
  if (ui.blur) {
    return (
      <BlurCover
        reason={ui.blurs[0]}
        canOverride={!ui.noOverride}
      >
        <PostContent post={post} />
      </BlurCover>
    )
  }
  
  // Content with warnings
  return (
    <div>
      {ui.alert && (
        <div className="warning">
          {ui.alerts.map((alert, i) => (
            <Alert key={i} cause={alert} severity="warning" />
          ))}
        </div>
      )}
      
      {ui.inform && (
        <div className="info">
          {ui.informs.map((inform, i) => (
            <Alert key={i} cause={inform} severity="info" />
          ))}
        </div>
      )}
      
      <PostContent post={post} />
    </div>
  )
}

Profile Page Example

import { moderateProfile } from '@atproto/api'

function ProfilePage({ profile, moderationOpts }) {
  const mod = moderateProfile(profile, moderationOpts)
  const view = mod.ui('profileView')
  const avatar = mod.ui('avatar')
  const banner = mod.ui('banner')
  const displayName = mod.ui('displayName')
  
  return (
    <div>
      {banner.blur ? (
        <BlurredBanner />
      ) : (
        <img src={profile.banner} />
      )}
      
      <div className="profile-header">
        {avatar.blur ? (
          <BlurredAvatar />
        ) : (
          <img src={profile.avatar} />
        )}
        
        <h1>
          {displayName.blur ? (
            '[Hidden]'
          ) : (
            profile.displayName
          )}
        </h1>
      </div>
      
      {view.alert && (
        <div className="warning-banner">
          {view.alerts.map((alert, i) => (
            <Alert key={i} cause={alert} />
          ))}
        </div>
      )}
      
      <div className="profile-content">
        {view.filter ? (
          <p>This profile is not available.</p>
        ) : (
          <ProfileDetails profile={profile} />
        )}
      </div>
    </div>
  )
}

User Actions

Muting Users

// Mute a user
await agent.mute('did:plc:user123')

// Unmute a user
await agent.unmute('did:plc:user123')

// The moderation system automatically filters muted users

Blocking Users

import { AtUri } from '@atproto/api'

// Block a user
const { uri } = await agent.app.bsky.graph.block.create(
  { repo: agent.accountDid },
  {
    subject: 'did:plc:user123',
    createdAt: new Date().toISOString()
  }
)

// Unblock (delete the block record)
await agent.app.bsky.graph.block.delete({
  repo: agent.accountDid,
  rkey: new AtUri(uri).rkey
})

Mute Words

Mute words filter content containing specific keywords:
import { AppBskyActorDefs } from '@atproto/api'

// Get current preferences
const prefs = await agent.getPreferences()

// Add a mute word
const muteWords = [
  ...prefs.moderationPrefs.mutedWords,
  {
    value: 'spoiler',
    targets: ['content'] // or ['tag'] for hashtags only
  } satisfies AppBskyActorDefs.MutedWord
]

// Update preferences
await agent.app.bsky.actor.putPreferences({
  preferences: [
    // ... other preferences
    {
      $type: 'app.bsky.actor.defs#mutedWordsPref',
      items: muteWords
    }
  ]
})

Hide Posts

// Get current preferences
const prefs = await agent.getPreferences()

// Add post to hidden list
const hiddenPosts = [
  ...prefs.moderationPrefs.hiddenPosts,
  postUri
]

// Update preferences
await agent.app.bsky.actor.putPreferences({
  preferences: [
    // ... other preferences
    {
      $type: 'app.bsky.actor.defs#hiddenPostsPref',
      items: hiddenPosts
    }
  ]
})

Working with Labelers

What are Labelers?

Labelers are services that apply moderation labels to content. They publish label value definitions that describe custom labels.

Labeler Service Record

interface LabelerService {
  $type: 'app.bsky.labeler.service'
  policies: {
    labelValues: string[]  // List of label values
    labelValueDefinitions?: {
      identifier: string    // e.g., 'rude'
      blurs: 'content' | 'media' | 'none'
      severity: 'inform' | 'alert' | 'none'
      defaultSetting: 'ignore' | 'warn' | 'hide'
      adultOnly: boolean
      locales: Array<{
        lang: string
        name: string
        description: string
      }>
    }[]
  }
  createdAt: string
}

Fetching Labeler Services

// Get single labeler
const { data } = await agent.app.bsky.labeler.getService({
  did: 'did:plc:labeler123'
})

console.log(data.view.policies.labelValues)
console.log(data.view.policies.labelValueDefinitions)

// Get multiple labelers
const { data: services } = await agent.app.bsky.labeler.getServices({
  dids: ['did:plc:labeler1', 'did:plc:labeler2']
})

Interpreting Label Definitions

import { interpretLabelValueDefinitions } from '@atproto/api'

const { data } = await agent.app.bsky.labeler.getService({
  did: 'did:plc:labeler123'
})

const defs = interpretLabelValueDefinitions(
  data.view.policies.labelValueDefinitions
)

// Use in moderation
const labelDefs = {
  'did:plc:labeler123': defs
}

const mod = moderatePost(post, {
  userDid: agent.did,
  moderationPrefs: prefs.moderationPrefs,
  labelDefs
})

Sending Reports

Report content or accounts to a labeler:
// Report an account
await agent
  .withProxy('atproto_labeler', 'did:plc:labeler123')
  .com.atproto.moderation.createReport({
    reasonType: 'com.atproto.moderation.defs#reasonSpam',
    subject: {
      $type: 'com.atproto.admin.defs#repoRef',
      did: 'did:plc:badactor'
    },
    reason: 'This account is spamming'
  })

// Report a post
await agent
  .withProxy('atproto_labeler', 'did:plc:labeler123')
  .com.atproto.moderation.createReport({
    reasonType: 'com.atproto.moderation.defs#reasonViolation',
    subject: {
      $type: 'com.atproto.repo.strongRef',
      uri: 'at://did:plc:user/app.bsky.feed.post/123',
      cid: 'bafyreiabc...'
    },
    reason: 'This post violates community guidelines'
  })
Report reason types:
  • com.atproto.moderation.defs#reasonSpam
  • com.atproto.moderation.defs#reasonViolation
  • com.atproto.moderation.defs#reasonMisleading
  • com.atproto.moderation.defs#reasonSexual
  • com.atproto.moderation.defs#reasonRude
  • com.atproto.moderation.defs#reasonOther
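A report dialog typically needs human-readable labels for these NSIDs. A hypothetical mapping (the NSIDs are real; the display strings below are our own wording, not defined by the protocol):

```typescript
// Hypothetical mapping from report reasonType NSIDs to UI labels.
const REASON_LABELS: Record<string, string> = {
  'com.atproto.moderation.defs#reasonSpam': 'Spam',
  'com.atproto.moderation.defs#reasonViolation': 'Rule violation',
  'com.atproto.moderation.defs#reasonMisleading': 'Misleading content',
  'com.atproto.moderation.defs#reasonSexual': 'Unwanted sexual content',
  'com.atproto.moderation.defs#reasonRude': 'Rude or harassing behavior',
  'com.atproto.moderation.defs#reasonOther': 'Other',
}

function reasonLabel(reasonType: string): string {
  // Fall back to 'Other' for reason types we don't recognize
  return REASON_LABELS[reasonType] ?? 'Other'
}
```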

Advanced Usage

Custom Moderation Logic

Extend the moderation system with custom rules:
import { moderatePost, ModerationDecision } from '@atproto/api'

function customModerate(post, opts) {
  // Get base moderation decision
  const baseMod = moderatePost(post, opts)
  
  // Apply custom logic
  if (post.record.text.includes('URGENT')) {
    // Flag as potential spam
    baseMod.addCause({
      type: 'label',
      source: { type: 'user' },
      labelDef: {
        identifier: 'spam',
        blurs: 'content',
        severity: 'alert',
        defaultSetting: 'warn'
      }
    })
  }
  
  return baseMod
}

Merging Moderation Decisions

import { ModerationDecision } from '@atproto/api'

const postMod = moderatePost(post, opts)
const authorMod = moderateProfile(post.author, opts)

// Merge decisions (e.g., for a post with its author)
const combined = ModerationDecision.merge(postMod, authorMod)

// Now check combined moderation
if (combined.ui('contentList').filter) {
  // Either the post or author is filtered
}

Checking Mute Words

import { hasMutedWord } from '@atproto/api'

const mutedWords = prefs.moderationPrefs.mutedWords

// Check if text contains muted words
const hasMuted = hasMutedWord({
  mutedWords,
  text: post.record.text,
  facets: post.record.facets,
  outlineTags: post.record.tags || []
})

if (hasMuted) {
  // Post contains muted words
}

Best Practices

1. Cache label definitions

Label definitions don’t change often. Cache them for ~6 hours.
const CACHE_TTL = 6 * 60 * 60 * 1000 // 6 hours

let labelDefsCache = null
let cacheTime = 0

async function getLabelDefs(agent, prefs) {
  const now = Date.now()
  if (labelDefsCache && now - cacheTime < CACHE_TTL) {
    return labelDefsCache
  }
  
  labelDefsCache = await agent.getLabelDefinitions(prefs)
  cacheTime = now
  return labelDefsCache
}
2. Respect noOverride flag

When noOverride is true, don’t allow users to view the content.
if (ui.blur && ui.noOverride) {
  // Don't show "Show anyway" button
  return <PermanentlyBlurredContent />
}
3. Handle different UI contexts

Use the appropriate context for each situation.
// In a feed
const feedUI = mod.ui('contentList')

// On the post page
const viewUI = mod.ui('contentView')

// For embedded media
const mediaUI = mod.ui('contentMedia')
4. Show informative messages

Explain why content is moderated.
if (ui.blur) {
  const cause = ui.blurs[0]
  return <BlurCover message={`Content hidden: ${cause.label}`} />
}

Common Pitfalls

Forgetting to fetch label definitions: Without label definitions, custom labels won’t work.
// Bad - missing label definitions
const mod = moderatePost(post, {
  userDid: agent.did,
  moderationPrefs: prefs.moderationPrefs
  // labelDefs missing!
})

// Good
const labelDefs = await agent.getLabelDefinitions(prefs)
const mod = moderatePost(post, {
  userDid: agent.did,
  moderationPrefs: prefs.moderationPrefs,
  labelDefs
})
Using the wrong UI context: Different contexts have different rules.
// Bad - using contentView in a feed
const ui = mod.ui('contentView')

// Good - use contentList for feeds
const ui = mod.ui('contentList')
Not handling undefined userDid: The userDid can be undefined if not logged in.
// Good - handle undefined
const mod = moderatePost(post, {
  userDid: agent.did || undefined,
  moderationPrefs: prefs.moderationPrefs,
  labelDefs
})

Next Steps

Using the API

Learn more about the Agent API

Rich Text

Work with mentions and links

OAuth Authentication

Implement OAuth authentication

API Reference

Explore the complete API
