# Frequently Asked Questions

## General Questions
### What is this collection?

A curated archive of system prompts extracted from major AI assistants such as Claude, ChatGPT, and Gemini, collected for study and reference.
### Why does this collection exist?

It serves several purposes:
- Education: Learn professional prompt engineering techniques from production systems
- Research: Study how AI behavior is shaped by system instructions
- Transparency: Understand what instructions AI models are actually following
- Comparison: See how different companies approach similar challenges
- Security: Analyze prompt injection vulnerabilities and defenses
### Is this legal?

Generally, yes. Points in favor:
- System prompts are provided to you when you use the service
- Sharing educational/research information about AI systems
- Analyzing publicly accessible AI behavior

Potential gray areas:

- May violate terms of service (usually civil, not criminal)
- Could be considered “reverse engineering” (check local laws)
- Companies may take down content via DMCA or other mechanisms
### How often is this updated?

The collection is updated whenever:
- New AI models are released
- Existing models receive prompt updates
- Community members contribute new extractions
- Different variants or modes are discovered

Typical cadence:

- When new model versions are announced (e.g., GPT-5.1, Claude Opus 4.6)
- When users report significant prompt changes
- Monthly for minor updates and corrections
## Using the Prompts
### Can I use these prompts in my own projects?

Yes, within limits. You can:
- Study them to improve your prompt engineering
- Adapt techniques and patterns for your own prompts
- Use them for educational purposes
- Reference them in research papers

Keep in mind:

- Don’t copy verbatim without attribution
- Some content may be copyrighted by the companies
- Consider ethical implications of replicating behavior
- Your use case may have different requirements
### How do I use these prompts with API?
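A minimal sketch, assuming an OpenAI-style chat-completions client; the file path, model name, and `build_messages` helper below are illustrative, not part of this repository:

```python
def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Pair an extracted system prompt with a user turn,
    in the OpenAI-style chat message format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# Load a prompt from the collection (path is illustrative):
# system_prompt = open("OpenAI/gpt-5.1-default.md").read()
system_prompt = "You are a helpful assistant."  # placeholder
messages = build_messages(system_prompt, "Hello!")

# Then pass `messages` to your provider's endpoint, e.g.:
# client.chat.completions.create(model="gpt-5.1", messages=messages)
```

Note that some providers take the system prompt outside the message list; Anthropic's Messages API, for example, accepts it as a separate top-level `system` parameter.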
### What's the difference between web and API prompts?
| Aspect | Web Interface (claude.ai, chatgpt.com) | API |
|---|---|---|
| Prompt visibility | Can be extracted | Hidden from users |
| Features | More tools, memory, conversations | Minimal, customizable |
| Safety layers | More restrictions, content filters | Fewer restrictions |
| Customization | Fixed personality/behavior | Fully customizable |
| Updates | Frequent updates | More stable |

The web-interface prompts in this collection:

- Are more feature-rich and complex
- Show production-level prompt engineering
- Can be extracted by users
- Reveal how companies shape AI behavior at scale
### Which prompt should I learn from first?
- Start with Claude Sonnet 4.6 - well-structured and clearly documented
- Review GPT-5.1 Default - shows clean personality definition
- Claude’s tool instructions - sophisticated decision frameworks
- GPT’s Canvas tool - collaborative editing patterns
- Anthropic’s Reminders - multi-layered safety system
- OpenAI Image Safety Policies - content moderation
- GPT-5.1 Personality Variants - see how different tones are achieved
- Compare all GPT-5.1 variants (default, friendly, professional, cynical, etc.)
## Technical Questions
### How were these prompts extracted?

Through a variety of prompt injection techniques, including:
- Direct requests with clever phrasing
- Encoding tricks (base64, JSON, etc.)
- Continuation attacks pretending the prompt already started
- Roleplay injection attempting to override identity
- Social engineering framing extraction as legitimate
### Are these prompts complete and accurate?

Largely, yes:
- Most extractions appear to capture the full system prompt
- Multiple independent extractions often match
- Companies rarely comment on accuracy, suggesting they’re real

Caveats:

- Some companies use multiple layers (prompt + safety filters)
- Server-side processing may not be visible
- Context window limits may truncate very long prompts
- Companies may detect extraction and return modified versions
### Why do some files have 'old' or 'raw' in the path?
- Main directory: Current, primary prompts (e.g., `claude-sonnet-4.6.md`)
- `/old/`: Previous versions that have been superseded (e.g., `claude-3.7-sonnet.md`)
- `/raw/`: Unformatted or alternate extractions (e.g., `claude-opus-4.6-no-tools-raw.md`)
- `/API/`: API-specific prompts when different from the web interface
### Can I compare different versions over time?
Yes. For example:

- Compare `/Anthropic/old/claude-3.7-sonnet.md` with the current `/Anthropic/claude-sonnet-4.6.md`
- Notice how tool instructions became more sophisticated
- Safety reminders expanded significantly
- Compare all `/OpenAI/gpt-5.1-*.md` files
- See how subtle changes create different tones
- Learn techniques for personality engineering
- Compare Claude, GPT, and Gemini approaches
- Notice common safety patterns
- Identify unique innovations
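Version comparisons like the ones above can be sketched with Python's standard `difflib`; the `prompt_diff` helper and file paths are illustrative:

```python
import difflib

def prompt_diff(old_text: str, new_text: str) -> str:
    """Return a unified diff between two prompt versions."""
    return "\n".join(difflib.unified_diff(
        old_text.splitlines(),
        new_text.splitlines(),
        fromfile="old-version",
        tofile="new-version",
        lineterm="",
    ))

# Usage (paths illustrative):
# old = open("Anthropic/old/claude-3.7-sonnet.md").read()
# new = open("Anthropic/claude-sonnet-4.6.md").read()
# print(prompt_diff(old, new))
```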
## Contributing & Community
### How can I contribute?
- Submit new prompts: Extract and share new or updated system prompts
- Improve documentation: Fix typos, add analysis, write guides
- Report issues: Flag inaccuracies or outdated information
- Build tools: Create extraction scripts, diff tools, analyzers
- Spread the word: Share the collection with researchers and developers
### Who maintains this collection?

The maintainer can be reached at:
- GitHub: asgeirtj
- Discord: asgeirtj
### Can I use this for commercial purposes?

In limited ways. Acceptable:
- Learning techniques to improve your own prompts
- Training your team on prompt engineering
- Building tools that analyze or compare prompts
- Research and development

Not acceptable:

- Directly copying prompts into commercial products
- Claiming proprietary content as your own
- Violating any applicable copyrights
- Using in ways that violate provider terms of service
### How can I get help or ask more questions?
- GitHub Issues: Open an issue for bugs, questions, or feature requests
- Discord: Message asgeirtj directly for prompt extraction help or contribution questions
- Pull Requests: Submit improvements and discuss in PR comments
- Documentation: Check the other resource pages in this repository
## Ethical & Security Questions
### Is extracting system prompts harmful?

There are arguments on both sides. In favor of transparency:
- Users should know what instructions AI follows
- Researchers need access to study AI behavior
- Prompt engineering knowledge benefits everyone
- Security through obscurity doesn’t work long-term

Against:

- Companies invest heavily in prompt engineering
- Public prompts make jailbreaking easier
- May reveal competitive advantages
- Could enable malicious replication
### Could this help attackers?

Marginally, but several factors limit the risk:
- Knowledge is already public: These prompts are widely shared in AI communities
- Defense through obscurity fails: Attackers will find prompts regardless
- Education helps defenders: More people understanding prompt injection improves defenses
- Responsible disclosure: We follow ethical guidelines (see Prompt Injection)
### What do AI companies think about this?

Officially, they say very little:
- Anthropic: Has not requested takedowns, but builds prompt extraction defenses
- OpenAI: Regularly updates prompts (suggesting they know about extraction)
- Google: Similarly quiet but continues improving Gemini prompts

Notably, no major provider has:

- Issued DMCA takedowns for this collection
- Publicly condemned prompt extraction research
- Successfully prevented all prompt extraction
### What are the future implications?

Likely developments:
- Prompts will become longer and more complex
- Multi-layered safety systems (prompts + separate filters)
- Dynamic prompts that adapt to user behavior
- Better extraction defenses (but never perfect)
- More open-source models with public prompts

Open questions:

- Should system prompts be public by default?
- Will regulations require prompt transparency?
- How do we balance security and openness?
- Can we build AI systems that don’t rely on secret prompts?