Overview
Content filtering endpoints provide granular control over what content is visible on the platform and how it’s classified. These tools allow administrators to mark content as NSFW, hide content from feeds, delete inappropriate images, and perform bulk moderation operations.

Mark as NSFW
Flag content as Not Safe For Work. NSFW content is typically hidden by default and requires user opt-in to view.

Parameter: the mint address of the token to mark as NSFW.
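As a minimal sketch, a mark-as-NSFW call could be prepared like this. The `mint` field name and the validation are assumptions, since the exact request schema isn’t specified here:

```python
def build_mark_nsfw_request(mint: str) -> dict:
    """Build the POST body for flagging a single token as NSFW.

    The "mint" field name is a hypothetical choice, not confirmed
    by this API's documentation.
    """
    if not mint:
        raise ValueError("mint address is required")
    return {"mint": mint}

body = build_mark_nsfw_request("TokenMint1111111111111111111111111111111111")
```

The actual HTTP call (method, path, authentication) would wrap this body according to the platform’s real endpoint definition.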
Mark as Hidden
Completely hide content from public feeds and search results.

Parameter: the ID of the content to hide.

Use cases:
- Severe policy violations
- Content pending investigation
- Temporary removal of content while reports are under review
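A hide request might carry an optional reason alongside the content ID. The field names (`id`, `reason`) are assumptions; attaching a reason is suggested because it feeds the audit trail described under Moderation Logs:

```python
def build_hide_request(content_id: str, reason: str = "") -> dict:
    """POST body for hiding content; field names are hypothetical."""
    if not content_id:
        raise ValueError("content_id is required")
    body = {"id": content_id}
    if reason:
        body["reason"] = reason  # worth supplying for the audit log
    return body

body = build_hide_request("content_abc123", reason="pending investigation")
```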
Mark as Ignored
Mark a report as reviewed but requiring no action. This is useful for false reports or for content that doesn’t violate policies.

Parameter: the ID of the report to mark as ignored.
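Report triage can be sketched as a simple routing decision: ignore reports found to be false, escalate the rest. The action names here are illustrative, not the API’s own:

```python
def triage_report(report_id: str, violates_policy: bool) -> dict:
    """Route a reviewed report: ignore false reports, escalate real ones.

    "ignore"/"escalate" are illustrative labels; map them onto the
    platform's real moderation endpoints.
    """
    action = "escalate" if violates_policy else "ignore"
    return {"report_id": report_id, "action": action}

decision = triage_report("report_42", violates_policy=False)
```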
Delete Photo
Permanently remove an inappropriate image from a token.

Parameter: the mint address of the token whose photo should be deleted.
Bulk NSFW
Mark multiple tokens as NSFW in a single operation.

Parameter: an array of mint addresses to mark as NSFW.

Use cases:
- Processing multiple related reports
- Content from the same problematic creator
- Batch moderation during platform cleanup
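During a platform cleanup, the mint list may be large, so it’s worth splitting it into request-sized chunks. The batch size of 50 is an assumed limit, not one documented here:

```python
from typing import Iterator

def batches(mints: list[str], size: int = 50) -> Iterator[list[str]]:
    """Split a long address list into bulk-request-sized chunks.

    The size of 50 per request is an assumption; check the real
    API's limit before relying on it.
    """
    for i in range(0, len(mints), size):
        yield mints[i:i + size]

# 120 addresses become three payloads of 50, 50, and 20 mints.
payloads = [{"mints": chunk} for chunk in batches([f"mint{i}" for i in range(120)])]
```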
Bulk Hidden
Hide multiple pieces of content in a single operation.

Parameter: an array of content IDs to hide.
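When bulk-hiding content gathered from overlapping reports, deduplicating the ID list first avoids submitting the same item twice in one call. A small order-preserving sketch:

```python
def dedupe_ids(content_ids: list[str]) -> list[str]:
    """Drop duplicate IDs while preserving first-seen order,
    so overlapping reports don't repeat items in a bulk-hide call."""
    return list(dict.fromkeys(content_ids))

ids = dedupe_ids(["c1", "c2", "c1", "c3", "c2"])
```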
Bulk Ban
Ban multiple users or pieces of content simultaneously.

Parameters:
- An array of wallet addresses to ban
- A reason for the bulk ban (recommended for audit purposes)

Use cases:
- Shutting down coordinated spam rings
- Banning multiple accounts from the same bad actor
- Emergency response to platform attacks
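A bulk-ban body could be assembled like this; the `wallets` and `reason` field names are assumptions, and a non-empty reason is enforced locally to keep the audit trail useful:

```python
def build_bulk_ban_request(wallets: list[str], reason: str) -> dict:
    """Body for a bulk ban (hypothetical field names).

    The reason is required here even though the API may treat it as
    optional, since every ban should be documented for audit.
    """
    if not reason.strip():
        raise ValueError("provide a reason for audit purposes")
    return {"wallets": sorted(set(wallets)), "reason": reason}

body = build_bulk_ban_request(["wallet2", "wallet1", "wallet1"], "coordinated spam ring")
```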
Moderation Logs
Retrieve a history of all moderation actions for audit and review.

Query parameters:
- Number of records to skip for pagination
- Maximum number of log entries to return
- Filter logs by moderator address (empty string for all moderators)

Each log entry includes:
- Unique identifier for the log entry
- Type of moderation action taken (e.g., "mark_nsfw", "hide", "ban")
- ID or address of the affected content or user
- Address of the moderator who performed the action
- When the action was performed (ISO 8601 format)
- Reason provided for the action (if any)
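The skip/limit parameters suggest offset-based pagination, which can be walked like this. The `fetch_page` callable stands in for the real HTTP request, which isn’t specified here:

```python
from typing import Callable, Iterator

def iter_logs(fetch_page: Callable[[int, int], list[dict]],
              limit: int = 100) -> Iterator[dict]:
    """Walk offset/limit pages until a short page signals the end.

    fetch_page(offset, limit) is a stand-in for the real logs call.
    """
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        yield from page
        if len(page) < limit:
            return
        offset += limit

# A fake in-memory fetcher in place of the HTTP request:
logs = [{"id": i, "action_type": "hide"} for i in range(250)]
fetch = lambda offset, limit: logs[offset:offset + limit]
all_entries = list(iter_logs(fetch))
```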
Content Filtering Strategy
When to Use NSFW
- Adult content that’s not explicitly prohibited
- Provocative but not offensive imagery
- Content that may be inappropriate in some contexts
- Artistic nudity or mature themes
When to Hide
- Clear policy violations
- Scams and fraudulent content
- Severe harassment or hate speech
- Content pending legal review
When to Ban
- Repeat offenders
- Coordinated malicious activity
- Severe terms of service violations
- Criminal activity
Best Practices
- Document actions: Always provide reasons for moderation actions
- Use appropriate severity: Match the action to the severity of violation
- Leverage bulk operations: More efficient for related content
- Review logs regularly: Monitor patterns and moderator activity
- Consistent standards: Apply policies uniformly across all content
- Escalation paths: Have clear procedures for severe violations
- Preserve evidence: Screenshot or save data before deletion
- Response time: Prioritize reports by severity for quick action
Moderation Decision Matrix
| Violation Type | First Offense | Second Offense | Severe/Repeat |
|---|---|---|---|
| NSFW content | Mark NSFW | Mark NSFW | Hide + Warn |
| Spam | Hide | Hide + Warn | Ban |
| Scam | Hide | Ban | Ban + Report |
| Hate speech | Hide | Ban | Ban |
| Copyright | Delete Photo | Hide | Ban |
| Minor issues | Ignore | Mark NSFW | Hide |
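The matrix above can be encoded as a lookup table so moderation tooling applies it consistently. The keys and action strings mirror the table; offense counts of three or more map to the Severe/Repeat column:

```python
# Decision matrix from the table above: columns are first offense,
# second offense, and severe/repeat.
MATRIX = {
    "nsfw_content": ("Mark NSFW", "Mark NSFW", "Hide + Warn"),
    "spam":         ("Hide", "Hide + Warn", "Ban"),
    "scam":         ("Hide", "Ban", "Ban + Report"),
    "hate_speech":  ("Hide", "Ban", "Ban"),
    "copyright":    ("Delete Photo", "Hide", "Ban"),
    "minor":        ("Ignore", "Mark NSFW", "Hide"),
}

def recommended_action(violation: str, offense_count: int) -> str:
    """Look up the suggested action for the nth offense of a violation type."""
    return MATRIX[violation][min(offense_count, 3) - 1]

action = recommended_action("scam", 2)
```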