Skip to main content

Frequently Asked Questions

General Questions

This is a curated repository of system prompts (also called system messages or developer messages) from major AI models including Claude, GPT, Gemini, and others.These are the foundational instructions that shape how AI assistants behave, what tools they can use, and how they respond to users.Repository: github.com/asgeirtj/system_prompts_leaks
This collection serves multiple purposes:
  • Education: Learn professional prompt engineering techniques from production systems
  • Research: Study how AI behavior is shaped by system instructions
  • Transparency: Understand what instructions AI models are actually following
  • Comparison: See how different companies approach similar challenges
  • Security: Analyze prompt injection vulnerabilities and defenses
The collection is updated frequently as:
  • New AI models are released
  • Existing models receive prompt updates
  • Community members contribute new extractions
  • Different variants or modes are discovered
Major updates typically happen:
  • When new model versions are announced (e.g., GPT-5.1, Claude Opus 4.6)
  • When users report significant prompt changes
  • Monthly for minor updates and corrections
Check the GitHub repository for the latest additions.

Using the Prompts

Yes, with considerations:✅ You can:
  • Study them to improve your prompt engineering
  • Adapt techniques and patterns for your own prompts
  • Use them for educational purposes
  • Reference them in research papers
⚠️ Be careful:
  • Don’t copy verbatim without attribution
  • Some content may be copyrighted by the companies
  • Consider ethical implications of replicating behavior
  • Your use case may have different requirements
Best practice: Learn from the patterns and techniques, then create your own prompts tailored to your specific needs.
Most AI APIs accept system prompts as a parameter:Anthropic Claude API:
import anthropic

client = anthropic.Anthropic(api_key="your-api-key")
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system="Your custom system prompt here",  # ← Add here
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)
OpenAI GPT API:
from openai import OpenAI

client = OpenAI(api_key="your-api-key")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Your custom system prompt here"},
        {"role": "user", "content": "Hello, GPT"}
    ]
)
Google Gemini API:
import google.generativeai as genai

genai.configure(api_key="your-api-key")
model = genai.GenerativeModel(
    model_name="gemini-pro",
    system_instruction="Your custom system prompt here"  # ← Add here
)
response = model.generate_content("Hello, Gemini")
Note: You cannot see or modify the base system prompts used by these APIs. The prompts in this collection are from web/app interfaces, not API responses.
Companies often use different system prompts for different interfaces:
AspectWeb Interface (claude.ai, chatgpt.com)API
Prompt visibilityCan be extractedHidden from users
FeaturesMore tools, memory, conversationsMinimal, customizable
Safety layersMore restrictions, content filtersFewer restrictions
CustomizationFixed personality/behaviorFully customizable
UpdatesFrequent updatesMore stable
This collection primarily contains web interface prompts because they:
  • Are more feature-rich and complex
  • Show production-level prompt engineering
  • Can be extracted by users
  • Reveal how companies shape AI behavior at scale
Depends on your goals:For Beginners:For Tool Integration:For Safety/Ethics:For Personality Design:
  • GPT-5.1 Personality Variants - see how different tones are achieved
  • Compare all GPT-5.1 variants (default, friendly, professional, cynical, etc.)

Technical Questions

Various techniques are used (see our Prompt Injection guide):
  1. Direct requests with clever phrasing
  2. Encoding tricks (base64, JSON, etc.)
  3. Continuation attacks pretending the prompt already started
  4. Roleplay injection attempting to override identity
  5. Social engineering framing extraction as legitimate
All major AI models have had prompts extracted despite defensive measures. No perfect defense exists yet.
Mostly yes, but with caveats:Likely Complete:
  • Most extractions appear to capture the full system prompt
  • Multiple independent extractions often match
  • Companies rarely comment on accuracy, suggesting they’re real
⚠️ Possible Gaps:
  • Some companies use multiple layers (prompt + safety filters)
  • Server-side processing may not be visible
  • Context window limits may truncate very long prompts
  • Companies may detect extraction and return modified versions
We mark prompts with extraction dates and note when multiple versions exist. If you find inaccuracies, please contribute corrections.
Directory organization:
  • Main directory: Current, primary prompts (e.g., claude-sonnet-4.6.md)
  • /old/: Previous versions that have been superseded (e.g., claude-3.7-sonnet.md)
  • /raw/: Unformatted or alternate extractions (e.g., claude-opus-4.6-no-tools-raw.md)
  • /API/: API-specific prompts when different from web interface
This helps track evolution over time while keeping the main directory clean.
Yes! Some interesting comparisons:Claude Evolution:
  • Compare /Anthropic/old/claude-3.7-sonnet.md with current /Anthropic/claude-sonnet-4.6.md
  • Notice how tool instructions became more sophisticated
  • Safety reminders expanded significantly
GPT Personality Variants:
  • Compare all /OpenAI/gpt-5.1-*.md files
  • See how subtle changes create different tones
  • Learn techniques for personality engineering
Cross-Company Patterns:
  • Compare Claude, GPT, and Gemini approaches
  • Notice common safety patterns
  • Identify unique innovations
We’re working on diff tools and visualization to make version comparison easier. Want to help? See Contributing.

Contributing & Community

Multiple ways to help:
  1. Submit new prompts: Extract and share new or updated system prompts
  2. Improve documentation: Fix typos, add analysis, write guides
  3. Report issues: Flag inaccuracies or outdated information
  4. Build tools: Create extraction scripts, diff tools, analyzers
  5. Spread the word: Share the collection with researchers and developers
See our detailed Contributing Guide for instructions.
Primary maintainer: asgeirtjCommunity: This is a community-driven project. Major contributors are recognized in the repository.
Want to become a maintainer? Regular, high-quality contributions may lead to maintainer access. Reach out on Discord!
It depends:Generally OK:
  • Learning techniques to improve your own prompts
  • Training your team on prompt engineering
  • Building tools that analyze or compare prompts
  • Research and development
Not OK:
  • Directly copying prompts into commercial products
  • Claiming proprietary content as your own
  • Violating any applicable copyrights
  • Using in ways that violate provider terms of service
Consult with a lawyer if you’re unsure about your specific use case. This FAQ is not legal advice.
Multiple channels:
  1. GitHub Issues: Open an issue for bugs, questions, or feature requests
  2. Discord: Message asgeirtj directly for prompt extraction help or contribution questions
  3. Pull Requests: Submit improvements and discuss in PR comments
  4. Documentation: Check our other resource pages:

Ethical & Security Questions

It’s complicated:Arguments for transparency:
  • Users should know what instructions AI follows
  • Researchers need access to study AI behavior
  • Prompt engineering knowledge benefits everyone
  • Security through obscurity doesn’t work long-term
Arguments for privacy:
  • Companies invest heavily in prompt engineering
  • Public prompts make jailbreaking easier
  • May reveal competitive advantages
  • Could enable malicious replication
This collection takes the stance that education and transparency outweigh the risks. However, we encourage responsible use and disclosure practices.
Yes, but:
  1. Knowledge is already public: These prompts are widely shared in AI communities
  2. Defense through obscurity fails: Attackers will find prompts regardless
  3. Education helps defenders: More people understanding prompt injection improves defenses
  4. Responsible disclosure: We follow ethical guidelines (see Prompt Injection)
The security community consensus is that responsible disclosure of vulnerabilities (including prompt extraction) ultimately makes systems safer.
Companies typically don’t comment officially, but:
  • Anthropic: Has not requested takedowns, but builds prompt extraction defenses
  • OpenAI: Regularly updates prompts (suggesting they know about extraction)
  • Google: Similarly quiet but continues improving Gemini prompts
No major company has:
  • Issued DMCA takedowns for this collection
  • Publicly condemned prompt extraction research
  • Successfully prevented all prompt extraction
This suggests companies accept that system prompts will be extracted and focus on building robust systems rather than relying on secrecy.
As AI systems become more sophisticated:Likely trends:
  • Prompts will become longer and more complex
  • Multi-layered safety systems (prompts + separate filters)
  • Dynamic prompts that adapt to user behavior
  • Better extraction defenses (but never perfect)
  • More open-source models with public prompts
Questions for the future:
  • Should system prompts be public by default?
  • Will regulations require prompt transparency?
  • How do we balance security and openness?
  • Can we build AI systems that don’t rely on secret prompts?
This collection will continue to document these developments as AI systems evolve.

Still Have Questions?

Prompt Engineering

Learn techniques from professional prompts

Prompt Injection

Understand extraction methods and security

Contributing

Add to the collection or improve docs

GitHub

Visit the source repository
Didn’t find your answer?Open an issue on GitHub or reach out to asgeirtj on Discord.

Build docs developers (and LLMs) love