
Overview

DecipherIt automatically generates 10 frequently asked questions with detailed, well-cited answers from your research. FAQs are created by the same CrewAI research agents that analyze your content, ensuring accuracy and relevance.
FAQs are generated automatically during the research processing workflow - no additional action required.

How It Works

1

Research Context

The FAQ generation task uses the research analysis as context.

Context Dependency:
  • Executes after research analysis completes
  • Has access to all research findings
  • Understands key themes and patterns
  • Knows which topics are most important
Implementation: backend/agents/topic_research_agent.py:154-161
2

Question Generation

The Researcher agent identifies 10 relevant questions readers might ask.

Question Criteria:
  • Covers different aspects/themes from research
  • Progresses from basic to advanced topics
  • Focuses on likely reader questions
  • Clear and specific wording
  • Unique and non-repetitive
Implementation: backend/config/topic_research/tasks.py:170-206
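The uniqueness criterion above can be checked with a simple normalization pass. A minimal sketch (the helper name is illustrative, not part of the codebase):

```python
def questions_are_unique(questions: list[str]) -> bool:
    """Return True if no two questions are duplicates after
    case- and whitespace-insensitive normalization."""
    normalized = [" ".join(q.lower().split()) for q in questions]
    return len(set(normalized)) == len(normalized)
```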
3

Answer Creation

Detailed answers are written based on the research findings.

Answer Requirements:
  • Accurate information from research
  • Citations from sources
  • Markdown formatting
  • Concise but comprehensive
  • Professional and accessible tone
  • Relevant quotes when helpful
Implementation: backend/config/topic_research/tasks.py:188-205
4

Quality Validation

Pydantic models ensure proper structure and formatting.

Validation:
  • Exactly 10 FAQ items
  • Each has question and answer fields
  • Answers in markdown format
  • Proper citation syntax
  • Clean JSON output
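The checks listed above can be expressed as a plain-Python guard over the raw JSON payload. A minimal sketch mirroring the Pydantic constraints (the function name is hypothetical):

```python
def validate_faq_payload(payload: dict) -> list[dict]:
    """Enforce the FAQ output contract: exactly 10 items,
    each with non-empty question and answer strings."""
    faqs = payload.get("faq")
    if not isinstance(faqs, list) or len(faqs) != 10:
        raise ValueError("expected exactly 10 FAQ items")
    for i, item in enumerate(faqs):
        for field in ("question", "answer"):
            value = item.get(field)
            if not isinstance(value, str) or not value.strip():
                raise ValueError(f"FAQ item {i} missing '{field}'")
    return faqs
```

In the actual pipeline this role is played by the `FaqTaskResult` Pydantic model shown under Technical Implementation below.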

FAQ Structure

Questions are crafted to be:

Clear and Specific:
  • “What are the main benefits of microservices architecture?”
  • “How does climate change affect marine ecosystems?”
  • “What evidence supports the effectiveness of remote work?”
Not:
  • Vague: “What is microservices?”
  • Too broad: “Tell me about climate?”
  • Yes/no: “Is remote work good?”
Progressive Complexity:
  • Questions 1-3: Basic understanding
  • Questions 4-7: Intermediate detail
  • Questions 8-10: Advanced or nuanced topics
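The tiering maps directly onto question position. A minimal sketch (the helper name is hypothetical):

```python
def question_tier(position: int) -> str:
    """Map a 1-based FAQ position to its complexity tier."""
    if 1 <= position <= 3:
        return "basic"
    if 4 <= position <= 7:
        return "intermediate"
    if 8 <= position <= 10:
        return "advanced"
    raise ValueError(f"FAQ position out of range: {position}")
```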

Accessing FAQs

1

Navigate to Notebook

Go to your processed notebook from the dashboard.
2

Open FAQ Tab

Click the FAQs tab in the notebook view.
3

Browse Questions

Scroll through the 10 auto-generated questions and answers.
4

Click Citations

Click source links to verify information or read more.
UI Implementation: client/components/notebook/notebook-polling.tsx:179-209

Technical Implementation

FAQ Task Configuration

faq_task = Task(
    description="""Generate 10 frequently asked questions and detailed 
                   answers about "{topic}" based on the research findings.
                   
                   Requirements:
                   - 10 unique and relevant questions
                   - Questions cover different aspects from research
                   - Focus on likely reader questions
                   - Logical progression from basic to advanced
                   - Clear and specific questions
                   
                   Answer Requirements:
                   - Detailed, accurate answers from research
                   - Citations from sources
                   - Markdown formatting
                   - Concise but comprehensive
                   - Professional tone
                   - Include relevant quotes
                   
                   Citation Format:
                   > "Quote text" - [Source Title](url)""",
    expected_output="""JSON object containing array of 10 FAQ items, 
                       each with question and detailed answer supported 
                       by citations from research.""",
    agent=researcher,
    context=[research_task],  # Uses research analysis as context
    max_retries=5,
    output_pydantic=FaqTaskResult
)
Source: backend/config/topic_research/tasks.py:170-206
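The citation format prescribed in the task prompt can be verified mechanically. A hedged sketch using a regular expression (the pattern and helper are illustrative, not part of the codebase):

```python
import re

# Matches the prescribed block-quote citation format:
#   > "Quote text" - [Source Title](url)
CITATION_RE = re.compile(
    r'>\s*"(?P<quote>[^"]+)"\s*-\s*'
    r'\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)'
)

def extract_citations(answer_markdown: str) -> list[dict]:
    """Return all citations found in a markdown FAQ answer."""
    return [m.groupdict() for m in CITATION_RE.finditer(answer_markdown)]
```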

Output Data Model

class FaqItem(BaseModel):
    question: str
    answer: str  # Markdown-formatted with citations

class FaqTaskResult(BaseModel):
    faq: List[FaqItem]  # Exactly 10 items
Source: backend/models/topic_research_models.py

Crew Integration

research_content_crew = Crew(
    agents=[researcher, content_writer],
    tasks=[research_task, faq_task, content_task],
    verbose=True,
    process=Process.sequential,
    max_rpm=20
)

research_content_crew_result = await research_content_crew.kickoff_async(inputs={
    "topic": topic,
    "scraped_data": scraped_data,
    "current_time": current_time,
})

# Extract FAQ results
faq_result = faq_task.output.pydantic.faq

return {
    "blog_post": research_content_crew_result["blog_post"],
    "title": research_content_crew_result["title"],
    "faq": [faq.model_dump() for faq in faq_result]
}
Source: backend/agents/topic_research_agent.py:164-259

FAQ Display Component

const FaqList = memo(
  ({ faqs }: { faqs: { question: string; answer: string }[] }) => {
    if (!faqs || faqs.length === 0) {
      return (
        <div className="text-muted-foreground">
          No FAQs available for this notebook
        </div>
      );
    }

    return (
      <div className="space-y-6">
        {faqs.map((faq, index) => (
          <div key={index} className="space-y-2">
            <h3 className="text-lg font-semibold">{faq.question}</h3>
            <div className="markdown-container prose prose-sm">
              <ReactMarkdown
                remarkPlugins={[remarkGfm]}
                components={MarkdownComponents}
              >
                {faq.answer}
              </ReactMarkdown>
            </div>
          </div>
        ))}
      </div>
    );
  }
);
Source: client/components/notebook/notebook-polling.tsx:179-209

Quality Features

Research-Grounded

All answers strictly based on your research sources with verifiable citations.

Progressive Depth

Questions organized from fundamental to advanced for natural learning flow.

Rich Formatting

Markdown rendering supports bold, italic, lists, quotes, and links for readability.

Source Transparency

Every claim cited with clickable links to original sources.

Use Cases

Use FAQs for:
  • Quick answers without reading full summary
  • Pre-meeting preparation
  • Overview of key points
  • Refresher on research findings
FAQs help with:
  • Understanding complex topics progressively
  • Identifying knowledge gaps
  • Structured learning path
  • Self-assessment questions
Use FAQs to:
  • Identify common questions for blog posts
  • Structure presentations
  • Create talking points
  • Develop educational materials
FAQs can:
  • Highlight areas needing deeper research
  • Reveal perspective gaps
  • Identify contradictions
  • Suggest follow-up questions

Example FAQ Output

Topic: “Impact of AI on software development”

Sample Questions:
  1. What are the primary ways AI is transforming software development?
  2. How do AI code assistants improve developer productivity?
  3. What are the limitations of current AI coding tools?
  4. How is AI affecting software testing practices?
  5. What skills should developers focus on in the AI era?
  6. Are there security concerns with AI-generated code?
  7. How do different AI models compare for coding tasks?
  8. What does research say about AI replacing developers?
  9. How are companies integrating AI into development workflows?
  10. What future developments in AI for coding are predicted?

Markdown Rendering

FAQ answers support full markdown:

Text Formatting

Bold, italic, code, and regular text mixing

Lists

Bullet points and numbered lists for organized information

Quotes

Block quotes for citations and emphasis

Links

Clickable source citations and references

Headings

Subheadings for structured answers

Code Blocks

Syntax highlighting for technical content

Performance Optimizations

Memoized Components

FAQ list component is memoized to prevent unnecessary re-renders.

Single Crew Run

FAQ generation runs in the same sequential crew as the research and content tasks, so no separate pipeline invocation is needed.

Structured Output

Pydantic models ensure type-safe, validated output without manual parsing.

Context Reuse

Shares research analysis context, avoiding redundant LLM calls.

Best Practices

Get the Most from FAQs:
  • Read FAQs before the full summary for quick overview
  • Click citation links to verify important information
  • Use FAQ questions to guide follow-up research
  • Compare FAQ answers with Chat responses for consistency
  • Reference FAQs when sharing research insights

Limitations

  • Fixed at 10 questions per notebook
  • Cannot customize question topics
  • Answers limited to research content
  • No follow-up or conversation (use Chat for that)
  • Generated once during processing (not updated)

Comparison with Interactive Q&A

Feature    | FAQs               | Interactive Q&A
Generation | Automatic          | On-demand
Questions  | Pre-generated (10) | Unlimited custom
Context    | Full research      | Last 10 messages
Speed      | Instant (pre-made) | ~5-10 seconds
Citations  | Built-in           | Included
Use Case   | Quick reference    | Specific queries

Interactive Q&A

Ask custom questions about your research

AI Summaries

Read comprehensive research summary

Audio Overviews

Listen to podcast-style summary
