Implementing the AT Protocol moderation system in your application
The AT Protocol moderation system helps you handle content safety across various contexts. It combines moderator labels, user preferences, muting, blocking, and custom rules into a unified API.
First, register the labelers your app subscribes to:

```typescript
import { Agent } from '@atproto/api'

Agent.configure({
  appLabelers: ['did:plc:your-labeler-did'],
})

// Now all Agent instances will use this labeler
const agent = new Agent(session)
```
To moderate a post, pass it through moderatePost along with the user's preferences and label definitions, then consult the UI flags for the context you are rendering:

```typescript
import { moderatePost } from '@atproto/api'

const prefs = await agent.getPreferences()
const labelDefs = await agent.getLabelDefinitions(prefs)

const postMod = moderatePost(postView, {
  userDid: agent.did,
  moderationPrefs: prefs.moderationPrefs,
  labelDefs,
})

// Inside your feed-rendering loop:

// Check if the post should be filtered out of the feed
if (postMod.ui('contentList').filter) {
  continue // Skip this post
}

// Check if the post should be blurred
if (postMod.ui('contentList').blur) {
  const reason = postMod.ui('contentList').blurs[0]
  console.log('Blur reason:', reason)

  // Check if the user can override the blur
  if (postMod.ui('contentList').noOverride) {
    // Must stay blurred
  }
}

// Check for alerts (warnings)
if (postMod.ui('contentList').alert) {
  for (const alert of postMod.ui('contentList').alerts) {
    console.log('Alert:', alert)
  }
}

// Check for informational notices
if (postMod.ui('contentList').inform) {
  for (const inform of postMod.ui('contentList').informs) {
    console.log('Info:', inform)
  }
}
```
Profiles are moderated the same way with moderateProfile, using contexts specific to profile rendering:

```typescript
import { moderateProfile } from '@atproto/api'

const profileMod = moderateProfile(profileView, {
  userDid: agent.did,
  moderationPrefs: prefs.moderationPrefs,
  labelDefs,
})

// In a profile list (search results, followers)
if (profileMod.ui('profileList').filter) {
  continue // Skip this profile
}
if (profileMod.ui('profileList').blur) {
  // Blur the profile card
}

// On a profile page
if (profileMod.ui('profileView').alert) {
  // Show a warning banner
}

// For avatars specifically
if (profileMod.ui('avatar').blur) {
  // Blur or hide the avatar
}

// For display names
if (profileMod.ui('displayName').blur) {
  // Show as "[Blocked]" or similar
}
```
Each moderation check returns a ModerationUI object:
```typescript
interface ModerationUI {
  filter: boolean     // Should content be removed?
  blur: boolean       // Should content be behind a cover?
  alert: boolean      // Should a warning be shown? (negative)
  inform: boolean     // Should an info notice be shown? (neutral)
  noOverride: boolean // If blur=true, can the user override?

  // Details for each flag:
  filters: ModerationCause[]
  blurs: ModerationCause[]
  alerts: ModerationCause[]
  informs: ModerationCause[]
}
```
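One way to consume these flags in a renderer is to collapse them into a single action, checked in priority order. A minimal sketch, assuming a simplified stand-in for the ModerationUI shape; RenderAction and toRenderAction are hypothetical names, not part of @atproto/api:

```typescript
// Simplified stand-in for ModerationUI; only the boolean flags are modeled.
type SimpleUI = {
  filter: boolean
  blur: boolean
  alert: boolean
  inform: boolean
  noOverride: boolean
}

type RenderAction = 'hide' | 'blur-locked' | 'blur' | 'warn' | 'inform' | 'show'

// Collapse the flags into one decision, most restrictive first.
function toRenderAction(ui: SimpleUI): RenderAction {
  if (ui.filter) return 'hide'
  if (ui.blur) return ui.noOverride ? 'blur-locked' : 'blur'
  if (ui.alert) return 'warn'
  if (ui.inform) return 'inform'
  return 'show'
}
```

A feed renderer can then switch on the single action instead of re-checking each flag at every call site.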
```typescript
// Mute a user
await agent.mute('did:plc:user123')

// Unmute a user
await agent.unmute('did:plc:user123')

// The moderation system automatically filters muted users
```
When a post and its author both carry moderation state, merge the two decisions:

```typescript
import { ModerationDecision } from '@atproto/api'

const postMod = moderatePost(post, opts)
const authorMod = moderateProfile(post.author, opts)

// Merge decisions (e.g., for a post with its author)
const combined = ModerationDecision.merge(postMod, authorMod)

// Now check the combined moderation
if (combined.ui('contentList').filter) {
  // Either the post or its author is filtered
}
```
1. Cache label definitions

Label definitions don't change often. Cache them for about 6 hours.
```typescript
const CACHE_TTL = 6 * 60 * 60 * 1000 // 6 hours

let labelDefsCache = null
let cacheTime = 0

async function getLabelDefs(agent, prefs) {
  const now = Date.now()
  if (labelDefsCache && now - cacheTime < CACHE_TTL) {
    return labelDefsCache
  }
  labelDefsCache = await agent.getLabelDefinitions(prefs)
  cacheTime = now
  return labelDefsCache
}
```
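The same pattern can be factored into a reusable helper. This is a generic sketch; cachedFetch is a hypothetical name, not part of @atproto/api:

```typescript
// Wrap an async fetcher so its result is reused until the TTL expires.
function cachedFetch<T>(fetcher: () => Promise<T>, ttlMs: number): () => Promise<T> {
  let value: T | undefined
  let fetchedAt = 0
  return async () => {
    const now = Date.now()
    if (value !== undefined && now - fetchedAt < ttlMs) {
      return value // Still fresh: return the cached value
    }
    value = await fetcher()
    fetchedAt = now
    return value
  }
}

// Example (assumes an agent and prefs from earlier in this guide):
// const getLabelDefs = cachedFetch(
//   () => agent.getLabelDefinitions(prefs),
//   6 * 60 * 60 * 1000,
// )
```

Factoring the TTL logic out keeps the cache state private to the closure instead of in module-level variables.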
2. Respect the noOverride flag

When noOverride is true, don't allow users to reveal the content.
```typescript
if (ui.blur && ui.noOverride) {
  // Don't show a "Show anyway" button
  return <PermanentlyBlurredContent />
}
```
3. Handle different UI contexts

Use the context that matches where the content is being rendered.
```typescript
// In a feed
const feedUI = mod.ui('contentList')

// On the post page
const viewUI = mod.ui('contentView')

// For embedded media
const mediaUI = mod.ui('contentMedia')
```
4. Show informative messages

Explain to users why content was moderated.
```typescript
if (ui.blur) {
  const cause = ui.blurs[0]
  return <BlurCover message={`Content hidden: ${cause.label}`} />
}
```
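A blur can stem from several different causes (a label, a mute, a block), so the message may need to vary by cause type. A sketch using simplified cause shapes; describeCause is a hypothetical helper, and the real ModerationCause union in @atproto/api carries more fields than modeled here:

```typescript
// Simplified stand-in for the library's ModerationCause union.
type SimpleCause =
  | { type: 'label'; label: string }
  | { type: 'muted' }
  | { type: 'blocking' }
  | { type: 'blocked-by' }

// Produce a user-facing explanation for why content was hidden.
function describeCause(cause: SimpleCause): string {
  switch (cause.type) {
    case 'label':
      return `Content hidden: ${cause.label}`
    case 'muted':
      return 'Post from an account you muted'
    case 'blocking':
      return 'Post from an account you blocked'
    case 'blocked-by':
      return 'Post from an account that blocked you'
  }
}
```

The exhaustive switch means the compiler flags this helper if a new cause type is added to the union.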