Overview

Content filtering endpoints provide granular control over what content is visible on the platform and how it’s classified. These tools allow administrators to mark content as NSFW, hide content from feeds, delete inappropriate images, and perform bulk moderation operations.
All content filtering operations require admin authentication and take effect immediately across the platform.

Mark as NSFW

Flag content as Not Safe For Work. NSFW content is typically hidden by default and requires user opt-in to view.
curl -X POST "https://frontend-api-v3.pump.fun/moderation/mark-as-nsfw/{mint}" \
  -H "Authorization: Bearer <your_token>"
Path Parameters
  • mint (string, required): The mint address of the token to mark as NSFW.
Response
Returns a 201 status code on success. The content is immediately flagged as NSFW and hidden from default feeds.
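Under the hood this is a bare POST with a bearer token and no request body. A minimal Python sketch using only the standard library (the helper names here are illustrative, not part of any official SDK):

```python
import urllib.request

BASE_URL = "https://frontend-api-v3.pump.fun"

def build_moderation_url(action: str, target: str, base: str = BASE_URL) -> str:
    # Single-target moderation endpoints share the /moderation/{action}/{target} shape.
    return f"{base}/moderation/{action}/{target}"

def mark_as_nsfw(mint: str, token: str) -> int:
    # POST with no body; only the bearer token header is required.
    req = urllib.request.Request(
        build_moderation_url("mark-as-nsfw", mint),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 201 on success, per the docs
```

The same URL builder works for mark-as-hidden, mark-as-ignored, and delete-photo, since they all follow the same path pattern.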

Mark as Hidden

Completely hide content from public feeds and search results.
curl -X POST "https://frontend-api-v3.pump.fun/moderation/mark-as-hidden/{id}" \
  -H "Authorization: Bearer <your_token>"
Path Parameters
  • id (number, required): The ID of the content to hide.
Use Cases
  • Severe policy violations
  • Content pending investigation
  • Temporary removal of content while reviewing reports

Mark as Ignored

Mark a report as reviewed but requiring no action. This is useful for false reports or content that doesn’t violate policies.
curl -X POST "https://frontend-api-v3.pump.fun/moderation/mark-as-ignored/{id}" \
  -H "Authorization: Bearer <your_token>"
Path Parameters
  • id (number, required): The ID of the report to mark as ignored.
Ignored reports are still tracked in the system for audit purposes but won’t appear in active report queues.

Delete Photo

Permanently remove an inappropriate image from a token.
curl -X POST "https://frontend-api-v3.pump.fun/moderation/delete-photo/{mint}" \
  -H "Authorization: Bearer <your_token>"
Path Parameters
  • mint (string, required): The mint address of the token whose photo should be deleted.
This action is irreversible. The image will be permanently removed from the platform. Consider backing up evidence before deletion if needed for future reference.

Bulk NSFW

Mark multiple tokens as NSFW in a single operation.
curl -X POST "https://frontend-api-v3.pump.fun/moderation/bulk-nsfw" \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "mints": [
      "mint1address",
      "mint2address",
      "mint3address"
    ]
  }'
Request Body
  • mints (array, required): Array of mint addresses to mark as NSFW.
Use Cases
  • Processing multiple related reports
  • Content from same problematic creator
  • Batch moderation during platform cleanup
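When a cleanup touches many tokens, it can be safer to split the list into smaller batches rather than sending one very large request. A hedged sketch of the body construction (the batch size of 100 is an assumption, not a documented limit):

```python
import json
from typing import Iterator, List

def chunked(items: List[str], size: int) -> Iterator[List[str]]:
    # Yield consecutive slices of at most `size` items.
    for i in range(0, len(items), size):
        yield items[i:i + size]

def bulk_nsfw_bodies(mints: List[str], batch_size: int = 100) -> List[str]:
    # One JSON body per batch, each matching the bulk-nsfw request shape.
    return [json.dumps({"mints": batch}) for batch in chunked(mints, batch_size)]
```

Each returned string can be sent as the body of a separate POST to /moderation/bulk-nsfw.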

Bulk Hidden

Hide multiple pieces of content in a single operation.
curl -X POST "https://frontend-api-v3.pump.fun/moderation/bulk-hidden" \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "ids": [123, 456, 789]
  }'
Request Body
  • ids (array, required): Array of content IDs to hide.

Bulk Ban

Ban multiple users or pieces of content simultaneously.
curl -X POST "https://frontend-api-v3.pump.fun/moderation/bulk-ban" \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "addresses": [
      "address1",
      "address2",
      "address3"
    ],
    "reason": "Coordinated spam campaign"
  }'
Request Body
  • addresses (array, required): Array of wallet addresses to ban.
  • reason (string, optional): Reason for the bulk ban (recommended for audit purposes).
Use Cases
  • Shutting down coordinated spam rings
  • Banning multiple accounts from same bad actor
  • Emergency response to platform attacks
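Because the reason field is optional, a request builder should omit it rather than send an empty string when none is provided. A small sketch of the body construction (function name is illustrative):

```python
import json
from typing import List, Optional

def bulk_ban_body(addresses: List[str], reason: Optional[str] = None) -> str:
    # `addresses` is required; `reason` is optional but recommended for audits.
    body = {"addresses": addresses}
    if reason is not None:
        body["reason"] = reason
    return json.dumps(body)
```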

Moderation Logs

Retrieve a history of all moderation actions for audit and review.
curl -X GET "https://frontend-api-v3.pump.fun/moderation/logs?offset=0&limit=50&moderator=" \
  -H "Authorization: Bearer <your_token>"
Query Parameters
  • offset (number, required): Number of records to skip for pagination.
  • limit (number, required): Maximum number of log entries to return.
  • moderator (string, required): Filter logs by moderator address (pass an empty string to include all moderators).
Response Fields
Log entries typically include:
  • id (string): Unique identifier for the log entry.
  • action (string): Type of moderation action taken (e.g., "mark_nsfw", "hide", "ban").
  • targetId (string): ID or address of the affected content or user.
  • moderator (string): Address of the moderator who performed the action.
  • timestamp (string): When the action was performed (ISO 8601 format).
  • reason (string): Reason provided for the action, if any.

Content Filtering Strategy

When to Use NSFW

  • Adult content that’s not explicitly prohibited
  • Provocative but not offensive imagery
  • Content that may be inappropriate in some contexts
  • Artistic nudity or mature themes

When to Hide

  • Clear policy violations
  • Scams and fraudulent content
  • Severe harassment or hate speech
  • Content pending legal review

When to Ban

  • Repeat offenders
  • Coordinated malicious activity
  • Severe terms of service violations
  • Criminal activity

Best Practices

  1. Document actions: Always provide reasons for moderation actions
  2. Use appropriate severity: Match the action to the severity of violation
  3. Leverage bulk operations: More efficient for related content
  4. Review logs regularly: Monitor patterns and moderator activity
  5. Consistent standards: Apply policies uniformly across all content
  6. Escalation paths: Have clear procedures for severe violations
  7. Preserve evidence: Screenshot or save data before deletion
  8. Response time: Prioritize reports by severity for quick action

Moderation Decision Matrix

| Violation Type | First Offense | Second Offense | Severe/Repeat |
|----------------|---------------|----------------|---------------|
| NSFW content   | Mark NSFW     | Mark NSFW      | Hide + Warn   |
| Spam           | Hide          | Hide + Warn    | Ban           |
| Scam           | Hide          | Ban            | Ban + Report  |
| Hate speech    | Hide          | Ban            | Ban           |
| Copyright      | Delete Photo  | Hide           | Ban           |
| Minor issues   | Ignore        | Mark NSFW      | Hide          |
This matrix is a guideline. Always use judgment based on context, severity, and platform policies.
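For tooling that triages reports automatically, the matrix maps naturally onto a lookup table. A sketch (the violation keys and action names are illustrative labels, not API values):

```python
# violation: (first offense, second offense, severe/repeat)
DECISION_MATRIX = {
    "nsfw_content": ("mark_nsfw", "mark_nsfw", "hide_and_warn"),
    "spam": ("hide", "hide_and_warn", "ban"),
    "scam": ("hide", "ban", "ban_and_report"),
    "hate_speech": ("hide", "ban", "ban"),
    "copyright": ("delete_photo", "hide", "ban"),
    "minor_issue": ("ignore", "mark_nsfw", "hide"),
}

def recommended_action(violation: str, offense_count: int) -> str:
    # Anything past the second offense falls into the severe/repeat column.
    first, second, severe = DECISION_MATRIX[violation]
    if offense_count <= 1:
        return first
    if offense_count == 2:
        return second
    return severe
```

Such a table should only produce a recommendation for a human moderator to confirm, consistent with the guideline above.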