Agents are the core competitors in Dream Foundry. Each agent is a Python script that implements a specific approach to solving the objective.

Agent Structure

Every agent follows a simple contract:
1. Accept two command-line arguments
   • objective: The task to accomplish (string)
   • output_file: Where to write the result (path)
2. Perform the task: implement your unique approach to solving the objective
3. Write output to file: create the output file at the specified path
4. Exit with code 0: signal success by exiting cleanly

Minimal Agent Example

Here’s the simplest possible agent:
#!/usr/bin/env python3
import sys
from pathlib import Path

def main():
    if len(sys.argv) < 3:
        print("Usage: agent.py <objective> <output_file>", file=sys.stderr)
        sys.exit(1)

    objective = sys.argv[1]
    output_file = Path(sys.argv[2])

    # Your implementation here
    result = f"Completed: {objective}"

    # Write output
    output_file.parent.mkdir(parents=True, exist_ok=True)
    output_file.write_text(result)
    print(f"Output written to {output_file}")

if __name__ == "__main__":
    main()

Real Agent: Agent Alpha

Let’s examine Agent Alpha (“The Speed Demon”) to understand a complete implementation:

Core Strategy

Agent Alpha prioritizes speed over completeness:
# From candidates/agent_alpha.py:24-37
def fetch_quick_events() -> list[dict]:
    """Fast fetch - only check top sources."""
    events = []
    headers = {'User-Agent': 'Mozilla/5.0'}

    print("[Alpha] Quick scan of top AI meetups...")

    # Only check the most popular/reliable meetups for speed
    quick_sources = [
        ("https://www.meetup.com/san-francisco-ai-engineers/", 
         "AI Engineers SF Meetup", "Tuesday, January 27, 2026", 
         "6:00 PM - 8:00 PM", "San Francisco, CA"),
        # ... more sources
    ]

Fast URL Verification

Uses HEAD requests instead of full GET for speed:
# From candidates/agent_alpha.py:40-52
for url, title, date, time_str, location in quick_sources:
    try:
        # Quick HEAD request to verify URL exists
        resp = requests.head(url, timeout=3, allow_redirects=True, headers=headers)
        if resp.status_code in [200, 301, 302]:
            events.append({
                "title": title,
                "date": date,
                "time": time_str,
                "location": location,
                "url": url,
                "event_type": "meetup",
            })
Agent Alpha uses a 3-second timeout for speed, while Agent Beta uses 10 seconds for thoroughness.

Tradeoff: Missing Weekend Events

The speed optimization comes with a cost:
# From candidates/agent_alpha.py:58-59
# NOTE: Alpha is FAST but misses hackathons (doesn't check weekend events)
print("[Alpha] Skipping weekend events for speed")
This is intentional: the tradeoff illustrates how different agents exchange completeness for speed.

Output Formatting

# From candidates/agent_alpha.py:64-90
def format_discord_post(events: list[dict], objective: str) -> str:
    """Format events as Discord markdown."""
    lines = [
        "# AI Events in the Bay Area",
        "## Week of January 24-31, 2026",
        "",
        f"*Objective: {objective}*",
        "",
    ]

    for event in events:
        lines.extend([
            f"**{event['title']}**",
            f"- Date: {event['date']}",
            f"- Time: {event['time']}",
            f"- Location: {event['location']}",
            f"- Type: {event['event_type']}",
            f"- [RSVP]({event['url']})",
            "",
        ])

    lines.extend([
        "---",
        f"*Found {len(events)} events | Generated by Agent Alpha (Speed Demon)*",
    ])

    return "\n".join(lines)

Real Agent: Agent Beta

Agent Beta (“The Perfectionist”) takes a different approach:

Thorough Verification

Full GET requests with longer timeouts:
# From candidates/agent_beta.py:47-65
for url, title, date, time_str, location, event_type in all_events:
    print(f"[Beta] Checking {title}...")
    time.sleep(0.2)  # Rate limiting

    try:
        resp = requests.get(url, timeout=10, allow_redirects=True, headers=headers)
        if resp.status_code == 200:
            events.append({
                "title": title,
                "date": date,
                "time": time_str,
                "location": location,
                "url": url,
                "event_type": event_type,
            })
            print(f"[Beta] Verified: {title}")

Complete Coverage

Includes weekend events:
# From candidates/agent_beta.py:25-26
# SATURDAY, January 24, 2026 - DAYTONA HACKSPRINT
("https://lu.ma/kga3qtfc", "Daytona HackSprint SF", 
 "Saturday, January 24, 2026", "5:00 PM - 11:00 PM", 
 "San Francisco, CA", "hackathon"),
This is why Beta scores higher on quality despite being slower.

Real Agent: Agent Gamma

Agent Gamma (“The Insider”) uses verified data sources:

Pre-Verified Events

# From candidates/agent_gamma.py:24-26
# VERIFIED EVENTS - All lu.ma links confirmed working
# Source: Cerebral Valley + direct lu.ma verification
verified_events = [
    # SATURDAY, January 24, 2026 - DAYTONA HACKSPRINT (THE USER IS HERE!)
    ("https://lu.ma/kga3qtfc", "Daytona HackSprint SF", 
     "Saturday, January 24, 2026", "9:00 AM - 6:00 PM", 
     "San Francisco, CA", "hackathon"),
    # ... more verified events
]

Source Attribution

# From candidates/agent_gamma.py:63-65
events.append({
    # ... other fields
    "source": "lu.ma" if "lu.ma" in url else "Meetup"
})
This transparency helps with scoring and debugging.

Step-by-Step: Creating Your Own Agent

Let’s create a new agent from scratch.

1. Create the agent file

touch candidates/agent_my_custom.py
chmod +x candidates/agent_my_custom.py

2. Add the shebang and imports

#!/usr/bin/env python3
"""
Agent Custom - Your unique approach

Description of your strategy and tradeoffs.
"""

import sys
from pathlib import Path
import requests

3. Implement your core logic

def fetch_events() -> list[dict]:
    """Your custom implementation."""
    events = []
    
    # Your unique approach here
    # Maybe you use an API, scrape specific sites,
    # or combine multiple data sources
    
    return events

4. Format the output

def format_output(events: list[dict], objective: str) -> str:
    """Format your output to match requirements."""
    lines = [
        "# AI Events in the Bay Area",
        "## Week of January 24-31, 2026",
        "",
        f"*Objective: {objective}*",
        "",
    ]
    
    for event in events:
        lines.extend([
            f"**{event['title']}**",
            f"- Date: {event['date']}",
            f"- Time: {event['time']}",
            f"- Location: {event['location']}",
            f"- Type: {event['event_type']}",
            f"- [RSVP]({event['url']})",
            "",
        ])
    
    return "\n".join(lines)

5. Add the main function

def main():
    if len(sys.argv) < 3:
        print("Usage: agent_custom.py <objective> <output_file>", 
              file=sys.stderr)
        sys.exit(1)

    objective = sys.argv[1]
    output_file = Path(sys.argv[2])

    print("[Custom] Starting...")
    events = fetch_events()
    print(f"[Custom] Found {len(events)} events")

    output = format_output(events, objective)

    output_file.parent.mkdir(parents=True, exist_ok=True)
    output_file.write_text(output)
    print("[Custom] Done!")

if __name__ == "__main__":
    main()

6. Register in forge.py

# In forge.py, add to CANDIDATES list:
CANDIDATES = [
    # ... existing candidates
    {
        "id": "custom",
        "name": "Agent Custom",
        "description": "Your unique approach",
        "script": "candidates/agent_custom.py",
    },
]

7. Test your agent

python candidates/agent_custom.py \
  "Generate weekly AI events" \
  test_output.md

cat test_output.md

8. Run in the forge

python forge.py "Generate weekly AI events"

Agent Registration

Agents are registered in forge.py:
# From forge.py:87-118
CANDIDATES = [
    {
        "id": "alpha",
        "name": "Agent Alpha",
        "description": "The Speed Demon - Fast scraper, may miss JS content",
        "script": "candidates/agent_alpha.py",
    },
    {
        "id": "beta",
        "name": "Agent Beta",
        "description": "The Perfectionist - Thorough but slow",
        "script": "candidates/agent_beta.py",
    },
    {
        "id": "gamma",
        "name": "Agent Gamma",
        "description": "The Insider - API-based, reliable",
        "script": "candidates/agent_gamma.py",
    },
]
Each agent needs:
  • id: Unique identifier (used in artifacts directory)
  • name: Display name
  • description: Brief explanation of approach/tradeoffs
  • script: Path to the Python script
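
Given these fields, a runner can launch each candidate according to the agent contract. The sketch below is a hypothetical illustration of such an invocation; run_candidate and the artifacts layout shown here are assumptions for illustration, not forge.py's actual code:

```python
import subprocess
import sys
from pathlib import Path

def run_candidate(candidate: dict, objective: str, artifacts_dir: str = "artifacts") -> int:
    """Run one registered candidate and return its exit code (0 = success)."""
    # Each agent writes into its own artifacts subdirectory, keyed by "id".
    output_file = Path(artifacts_dir) / candidate["id"] / "output.md"
    result = subprocess.run(
        [sys.executable, candidate["script"], objective, str(output_file)],
        capture_output=True,
        text=True,
    )
    # The agent's progress logs are captured and can be surfaced in the UI.
    print(result.stdout, end="")
    return result.returncode
```

Because the contract is just "two arguments in, file out, exit 0", any registered script can be driven this way regardless of its internal strategy.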

Best Practices

1. Include Progress Logs

print("[MyAgent] Starting data fetch...")
print(f"[MyAgent] Found {count} items")
print("[MyAgent] Writing output...")
These appear in the Streamlit UI during execution.

2. Handle Errors Gracefully

try:
    resp = requests.get(url, timeout=5)
    if resp.status_code == 200:
        pass  # process the response here
except requests.Timeout:
    print(f"[MyAgent] Timeout on {url}, skipping")
except Exception as e:
    print(f"[MyAgent] Error: {e}")

3. Create Parent Directories

output_file.parent.mkdir(parents=True, exist_ok=True)
output_file.write_text(content)

4. Use Appropriate Timeouts

Balance speed vs. reliability:
  • Fast agents: 3-5 second timeouts
  • Thorough agents: 10-15 second timeouts
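
One way to keep this configurable is a small fetch helper whose default timeout reflects the agent's strategy. The constants and function name below are illustrative, not project-defined:

```python
import requests

# Illustrative values, not project constants.
FAST_TIMEOUT = 3        # speed-first agents (like Alpha)
THOROUGH_TIMEOUT = 10   # completeness-first agents (like Beta)

def check_url(url: str, timeout: float = FAST_TIMEOUT) -> bool:
    """Return True if the URL responds with a non-error status."""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False
```

A speed-first agent calls `check_url(url)` as-is; a thorough agent passes `timeout=THOROUGH_TIMEOUT` (or switches to `requests.get` for full verification).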

5. Document Tradeoffs

Explain your approach in the docstring:
"""
Agent Custom - The Hybrid

Combines API data with web scraping.
- Uses lu.ma API for speed (when available)
- Falls back to scraping for missing events
- Slower than pure API but more complete
"""

Testing Your Agent

Unit Test

python candidates/agent_custom.py \
  "Test objective" \
  /tmp/test_output.md

cat /tmp/test_output.md

Integration Test

Run just your agent in the forge:
# Temporarily modify forge.py CANDIDATES list
CANDIDATES = [
    {
        "id": "custom",
        "name": "Agent Custom",
        "description": "Your approach",
        "script": "candidates/agent_custom.py",
    },
]
Then:
python forge.py

Full Competition

Restore all candidates and compete:
python forge.py
cat artifacts/scores.json | jq '.candidates[] | {id, total_score}'

Common Patterns

Pattern: API-First with Fallback

def fetch_events():
    events = []
    
    # Try the API first (fetch_from_api and scrape_events are your own helpers)
    try:
        events = fetch_from_api()
        if events:
            return events
    except Exception as e:
        print(f"API failed: {e}, falling back to scraping")
    
    # Fall back to scraping
    return scrape_events()

Pattern: Parallel Requests

import concurrent.futures

def fetch_events():
    # fetch_single_url is your own per-URL fetcher; it should return None on failure
    urls = ["url1", "url2", "url3"]
    
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        results = executor.map(fetch_single_url, urls)
    
    return [r for r in results if r is not None]

Pattern: Caching

import json
import time
from pathlib import Path

def fetch_events():
    cache_file = Path(".cache/events.json")
    
    # Check cache
    if cache_file.exists():
        age = time.time() - cache_file.stat().st_mtime
        if age < 3600:  # 1 hour
            return json.loads(cache_file.read_text())
    
    # Fetch fresh data
    events = fetch_fresh_events()
    
    # Update cache
    cache_file.parent.mkdir(exist_ok=True)
    cache_file.write_text(json.dumps(events))
    
    return events

Next Steps

Running the Forge

Test your agent in the forge loop

Sandbox Execution

Run your agent in isolated Daytona sandboxes
