
Overview

CrewAI is a framework for orchestrating role-playing, autonomous AI agents. It enables multiple agents with specialized roles to collaborate on complex tasks through structured workflows and task delegation.

When to Use CrewAI

  • Multi-Agent Collaboration: Multiple specialized agents working on different aspects
  • Role-Based Systems: Agents with distinct roles (researcher, analyst, writer)
  • Sequential Workflows: Tasks that depend on previous task outputs
  • Research & Analysis: Complex research requiring multiple perspectives
  • Content Generation: Multi-stage content creation with review processes

Installation

pip install crewai
pip install "crewai[tools]"  # Include built-in tools (quotes needed in zsh)

Core Concepts

Agent

Agents are specialized team members with specific roles, goals, and backstories:
from crewai import Agent, LLM
import os

# Create a researcher agent
researcher = Agent(
    role='Senior Researcher',
    goal='Discover groundbreaking technologies',
    backstory=(
        'A curious mind fascinated by cutting-edge innovation '
        'and the potential to change the world, you know '
        'everything about tech.'
    ),
    verbose=True,
    llm=LLM(
        model="nebius/Qwen/Qwen3-235B-A22B",
        api_key=os.getenv("NEBIUS_API_KEY")
    ),
)
Source: starter_ai_agents/crewai_starter/main.py:8-18

Task

Tasks define specific work to be done, with expected outputs and assigned agents:
from crewai import Task

research_task = Task(
    description='Identify the next big trend in AI',
    expected_output='5 paragraphs on the next big AI trend',
    agent=researcher,  # Assign to specific agent
)
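Task descriptions can contain {placeholder} fields that are filled from kickoff(inputs=...). The substitution behaves like Python's str.format, which you can sanity-check directly:

```python
# Illustrative only: CrewAI fills {placeholders} in task descriptions from
# the inputs dict passed to crew.kickoff(), analogous to str.format.
description = 'Identify the next big trend in {domain}'
inputs = {"domain": "AI"}
filled = description.format(**inputs)
print(filled)  # Identify the next big trend in AI
```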

Crew

Crews orchestrate agents and tasks, managing the workflow:
from crewai import Crew, Process

tech_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential,  # Tasks execute one after another
)

# Execute the workflow
result = tech_crew.kickoff()
print(result.raw)
Source: starter_ai_agents/crewai_starter/main.py:28-35

LLM Configuration

CrewAI supports multiple model providers via LiteLLM:
from crewai import LLM
import os

llm = LLM(
    model="nebius/Qwen/Qwen3-235B-A22B",
    api_key=os.getenv("NEBIUS_API_KEY")
)

agent = Agent(
    role="Researcher",
    llm=llm,
    # ... other params
)
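Because routing goes through LiteLLM, switching providers is usually just a different model string plus the matching credential. The model names below are illustrative; check your provider's catalog for exact identifiers:

```python
from crewai import LLM
import os

# Illustrative provider strings -- LiteLLM routes on the "provider/" prefix.
openai_llm = LLM(model="gpt-4o", api_key=os.getenv("OPENAI_API_KEY"))
anthropic_llm = LLM(
    model="anthropic/claude-3-5-sonnet-20240620",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
)
# Local models via Ollama take a base_url instead of an API key
ollama_llm = LLM(model="ollama/llama3", base_url="http://localhost:11434")
```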

Common Patterns

Pattern 1: Basic Research Crew

Simple multi-agent research workflow:
from crewai import Agent, Task, Crew, Process, LLM
import os
from dotenv import load_dotenv

load_dotenv()

# Define LLM
llm = LLM(
    model="nebius/Qwen/Qwen3-235B-A22B",
    api_key=os.getenv("NEBIUS_API_KEY")
)

# Create researcher agent
researcher = Agent(
    role='Senior Researcher',
    goal='Discover groundbreaking technologies',
    backstory='Expert in cutting-edge tech trends',
    verbose=True,
    llm=llm,
)

# Create analyst agent
analyst = Agent(
    role='Technology Analyst',
    goal='Analyze and synthesize research findings',
    backstory='Skilled at identifying patterns and insights',
    verbose=True,
    llm=llm,
)

# Define tasks
research_task = Task(
    description='Research the latest developments in AI agents',
    expected_output='Comprehensive research report with key findings',
    agent=researcher,
)

analysis_task = Task(
    description='Analyze the research and identify key trends',
    expected_output='Analysis report with actionable insights',
    agent=analyst,
)

# Create and run crew
crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    process=Process.sequential,
)

result = crew.kickoff()
print(result.raw)
Based on: starter_ai_agents/crewai_starter/main.py

Pattern 2: RAG-Enhanced Crew

Combine CrewAI with RAG for knowledge-grounded agents:
Step 1: Create Vector Store Tool

from langchain_qdrant import QdrantVectorStore
from langchain_community.embeddings import HuggingFaceEmbeddings
from crewai.tools import tool
import os

# Setup Qdrant
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Connect to an existing collection by URL
vector_store = QdrantVectorStore.from_existing_collection(
    url=os.getenv("QDRANT_URL"),
    api_key=os.getenv("QDRANT_API_KEY"),
    collection_name="documents",
    embedding=embeddings,
)

# Create a search tool with the @tool decorator; the docstring
# becomes the tool description the agent sees
@tool("Knowledge Base Search")
def search_knowledge_base(query: str) -> str:
    """Search internal documents for relevant information."""
    docs = vector_store.similarity_search(query, k=5)
    return "\n\n".join(doc.page_content for doc in docs)

qdrant_tool = search_knowledge_base  # tool object passed to agents below
Step 2: Create Web Search Tool

from exa_py import Exa
from crewai.tools import tool
import os

exa_client = Exa(api_key=os.getenv("EXA_API_KEY"))

@tool("Web Search")
def web_search(query: str) -> str:
    """Search the internet for current information."""
    results = exa_client.search_and_contents(
        query,
        num_results=3,
        text=True
    )
    return "\n\n".join(r.text for r in results.results)

web_tool = web_search  # tool object passed to agents below
Step 3: Create Agents with Tools

researcher = Agent(
    role='Research Agent',
    goal='Find relevant information from multiple sources',
    tools=[qdrant_tool, web_tool],  # Multiple tools
    llm=llm,
    verbose=True,
)

analyst = Agent(
    role='Analysis Agent',
    goal='Synthesize findings into insights',
    tools=[qdrant_tool],  # Can access knowledge base
    llm=llm,
    verbose=True,
)
Step 4: Define Task Flow

research_task = Task(
    description='Research {query} using knowledge base and web search',
    expected_output='Comprehensive research findings',
    agent=researcher,
)

analysis_task = Task(
    description='Analyze research and provide insights on {query}',
    expected_output='Executive summary with key insights',
    agent=analyst,
)

crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    process=Process.sequential,
)

# Run with input variables
result = crew.kickoff(inputs={"query": "AI agent frameworks"})
Based on: rag_apps/agentic_rag_with_web_search/crews.py

Pattern 3: Multi-Stage Content Workflow

Specialized agents for research → writing → editing:
from crewai import Agent, Task, Crew, Process, LLM
import os

llm = LLM(model="nebius/Qwen/Qwen3-235B-A22B", api_key=os.getenv("NEBIUS_API_KEY"))

# Stage 1: Research
researcher = Agent(
    role='Content Researcher',
    goal='Gather comprehensive information on the topic',
    backstory='Expert at finding authoritative sources and data',
    llm=llm,
)

research_task = Task(
    description='Research {topic} thoroughly',
    expected_output='Research report with sources and key facts',
    agent=researcher,
)

# Stage 2: Writing
writer = Agent(
    role='Content Writer',
    goal='Create engaging, well-structured content',
    backstory='Professional writer with expertise in technical content',
    llm=llm,
)

writing_task = Task(
    description='Write a comprehensive article on {topic} based on research',
    expected_output='Draft article with clear structure and engaging narrative',
    agent=writer,
)

# Stage 3: Editing
editor = Agent(
    role='Senior Editor',
    goal='Polish content to publication quality',
    backstory='Detail-oriented editor ensuring clarity and accuracy',
    llm=llm,
)

editing_task = Task(
    description='Review and refine the article for publication',
    expected_output='Publication-ready article',
    agent=editor,
)

# Execute workflow
content_crew = Crew(
    agents=[researcher, writer, editor],
    tasks=[research_task, writing_task, editing_task],
    process=Process.sequential,
)

result = content_crew.kickoff(inputs={"topic": "Future of AI Agents"})
print(result.raw)

Pattern 4: Parallel Task Execution

Run independent tasks concurrently, then synthesize their outputs. In CrewAI this is done by marking the independent tasks with async_execution=True and passing their outputs to the synthesis task via context:
from crewai import Agent, Task, Crew, Process

# Create multiple independent agents
tech_researcher = Agent(role='Tech Researcher', ...)
market_analyst = Agent(role='Market Analyst', ...)
competitor_analyst = Agent(role='Competitor Analyst', ...)

# Independent tasks, executed concurrently
tech_task = Task(
    description='Research technical capabilities',
    expected_output='Technical capability report',
    agent=tech_researcher,
    async_execution=True,
)
market_task = Task(
    description='Analyze market size and trends',
    expected_output='Market analysis report',
    agent=market_analyst,
    async_execution=True,
)
competitor_task = Task(
    description='Identify competitors',
    expected_output='Competitor landscape overview',
    agent=competitor_analyst,
    async_execution=True,
)

# Synthesis task waits for the concurrent tasks and receives their outputs
synthesizer = Agent(role='Strategist', ...)
synthesis_task = Task(
    description='Synthesize all findings into strategy',
    expected_output='Strategy document combining all findings',
    agent=synthesizer,
    context=[tech_task, market_task, competitor_task],
)

analysis_crew = Crew(
    agents=[tech_researcher, market_analyst, competitor_analyst, synthesizer],
    tasks=[tech_task, market_task, competitor_task, synthesis_task],
    process=Process.sequential,
)

result = analysis_crew.kickoff(inputs={"company": "ExampleCorp"})

Real Examples from Repository

Basic CrewAI Starter

Simple multi-agent research team with sequential workflow

Agentic RAG

RAG system with CrewAI agents, Qdrant vector store, and Exa web search

Configuration

Environment Variables

# .env file
NEBIUS_API_KEY=your_nebius_api_key
OPENAI_API_KEY=your_openai_api_key  # If using OpenAI

# For RAG applications
QDRANT_URL=your_qdrant_url
QDRANT_API_KEY=your_qdrant_api_key
EXA_API_KEY=your_exa_api_key  # For web search

Agent Parameters

  • role (string, required): The agent's role in the crew (e.g., "Senior Researcher")
  • goal (string, required): What the agent aims to achieve
  • backstory (string, required): Agent's background and expertise
  • llm (LLM): Language model configuration
  • tools (list[Tool], default []): Tools available to the agent
  • verbose (bool, default False): Enable detailed logging
  • allow_delegation (bool, default True): Allow the agent to delegate tasks to other agents

Task Parameters

  • description (string, required): What the task involves (supports templating)
  • expected_output (string, required): What output format is expected
  • agent (Agent, required): Which agent is responsible for this task
  • context (list[Task]): Tasks whose outputs should be available to this task

Crew Parameters

  • agents (list[Agent], required): List of agents in the crew
  • tasks (list[Task], required): List of tasks to execute
  • process (Process, default Process.sequential): Execution strategy: Process.sequential or Process.hierarchical
  • verbose (bool, default False): Enable crew-level logging
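For Process.hierarchical, CrewAI also expects a manager model (or manager agent) to plan and delegate work. A minimal sketch, assuming researcher, analyst, their tasks, and llm are defined as in the patterns above:

```python
from crewai import Crew, Process

# Hierarchical crews add a manager that assigns work to the other agents;
# pass manager_llm (or manager_agent) alongside Process.hierarchical.
managed_crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    process=Process.hierarchical,
    manager_llm=llm,
)
```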

Best Practices

Give agents specific, well-defined roles:
# ✓ Good: Specific role and expertise
researcher = Agent(
    role='Senior AI Research Analyst',
    goal='Analyze latest AI research papers and identify trends',
    backstory='PhD in AI with 10 years analyzing ML research'
)

# ✗ Bad: Vague role
agent = Agent(
    role='Helper',
    goal='Help with stuff',
    backstory='Knows things'
)
Be specific about what each task should produce:
task = Task(
    description='Research AI agent frameworks',
    expected_output=(
        '1. Executive summary (2-3 paragraphs)\n'
        '2. Comparison table of top 5 frameworks\n'
        '3. Pros and cons for each\n'
        '4. Recommendation with justification'
    ),
    agent=researcher,
)
When tasks depend on previous outputs:
research_task = Task(description='Research topic', ...)
analysis_task = Task(description='Analyze findings', ...)

writing_task = Task(
    description='Write article based on research and analysis',
    context=[research_task, analysis_task],  # Access previous outputs
    agent=writer,
)
Only give agents the tools they need:
# Researcher needs search tools
researcher = Agent(
    role='Researcher',
    tools=[web_search_tool, qdrant_tool],
    ...
)

# Writer doesn't need search tools
writer = Agent(
    role='Writer',
    tools=[],  # Uses previous task outputs
    ...
)

Troubleshooting

If agents seem to work in isolation:
  • Use context=[previous_task] to pass outputs between tasks
  • Ensure process=Process.sequential for dependent tasks
  • Check that expected_output is clear so the next agent knows what to use

If execution is slow or agents wander:
  • Set verbose=True to see what's happening
  • Check if agents are using tools inefficiently
  • Consider breaking large tasks into smaller ones
  • Review backstory/goal to focus agent behavior

If output quality is poor:
  • Make expected_output more specific
  • Improve agent backstory with relevant expertise
  • Add examples in the task description
  • Use stronger LLM models for complex tasks

If tools are not being used:
  • Verify the tool description is clear
  • Check the tool is assigned to the correct agent
  • Ensure the task description suggests when to use the tool
  • Test the tool function works independently first
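For that last point, call the tool's underlying function directly before wiring it into an agent; a minimal sketch with a hypothetical stand-in function:

```python
def search_knowledge_base(query: str) -> str:
    """Hypothetical stand-in for a retrieval tool, used only for this smoke test."""
    return f"results for: {query}"

# If the bare function fails here, no agent will be able to use it either.
output = search_knowledge_base("AI agent frameworks")
assert isinstance(output, str) and output  # tools should return non-empty strings
print(output)  # results for: AI agent frameworks
```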

Next Steps

Build Research Crews

Create multi-agent teams for comprehensive research and analysis

Add RAG Capabilities

Integrate vector stores for knowledge-grounded responses

Content Workflows

Build research → writing → editing pipelines

Custom Tools

Create specialized tools for your agents’ needs
