The World Brief provides an AI-synthesized summary of the most significant global developments. Instead of manually scanning hundreds of headlines, you get a concise intelligence digest generated by language models.

How It Works

World Monitor uses a 4-tier provider fallback chain to ensure you always get a summary, even when cloud services are unavailable:

Tier 1: Local LLM (Ollama/LM Studio)

If you’ve configured a local inference endpoint, World Monitor attempts generation on your machine first. This keeps your data private and eliminates API costs.
Timeout: 5 seconds
Models supported: Any OpenAI-compatible endpoint (Ollama, LM Studio, llama.cpp server, vLLM)

Tier 2: Groq (Cloud)

Fast cloud inference using Llama 3.1 8B at temperature 0.3. Requires a Groq API key.
Timeout: 5 seconds

Tier 3: OpenRouter (Cloud)

Multi-model fallback for additional redundancy. Requires an OpenRouter API key.
Timeout: 5 seconds

Tier 4: Browser-side T5 (Transformers.js)

If all API providers fail, World Monitor falls back to a small summarization model (T5) running entirely in your browser using WebAssembly. No network required.
Note: This is the slowest option but guarantees you’ll always get a summary.
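The fallback logic above can be sketched as follows. The `Provider` interface, function names, and error handling are illustrative assumptions, not World Monitor’s actual internals; the 5-second timeout and no-timeout browser tier match the behavior described above:

```typescript
interface Provider {
  name: string;
  timeoutMs: number | null; // null = no timeout (browser-side T5 tier)
  generate(prompt: string): Promise<string>;
}

// Reject if the wrapped promise does not settle within `ms` milliseconds.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("timeout")), ms);
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

// Walk the tiers in order; any timeout or provider error advances
// to the next tier without user interaction.
async function generateBrief(prompt: string, tiers: Provider[]): Promise<string> {
  for (const tier of tiers) {
    try {
      return tier.timeoutMs === null
        ? await tier.generate(prompt)
        : await withTimeout(tier.generate(prompt), tier.timeoutMs);
    } catch {
      // Fall through to the next tier.
    }
  }
  throw new Error("all summary providers failed");
}
```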

Deduplication & Caching

To optimize performance and reduce API costs:
  • Content deduplication — Headlines are compared using word-overlap similarity (Jaccard). Near-duplicates (>60% overlap) are merged before sending to the LLM, reducing prompt size by 20–40%
  • Redis caching — Summaries are cached for 24 hours with a composite key based on mode, variant, language, and content hash. If 1,000 users view the same headlines, only one LLM call is made
  • Variant-aware prompts — The system prompt adapts to your dashboard variant:
    • World Monitor: emphasizes geopolitical events, conflict escalation, diplomatic shifts
    • Tech Monitor: focuses on funding rounds, AI breakthroughs, startup news
    • Finance Monitor: highlights market movements, central bank signals, economic data
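The word-overlap deduplication described above can be sketched like this. The tokenization and merge strategy are assumptions (here, near-duplicates above the 60% threshold are simply dropped rather than merged):

```typescript
// Jaccard similarity: |intersection| / |union| over lowercase word sets.
function jaccard(a: string, b: string): number {
  const setA = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const setB = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  let overlap = 0;
  for (const w of setA) if (setB.has(w)) overlap++;
  const union = setA.size + setB.size - overlap;
  return union === 0 ? 0 : overlap / union;
}

// Keep a headline only if it is at most 60% similar to every headline
// already kept, shrinking the prompt before it reaches the LLM.
function dedupe(headlines: string[], threshold = 0.6): string[] {
  const kept: string[] = [];
  for (const h of headlines) {
    if (kept.every((k) => jaccard(h, k) <= threshold)) kept.push(h);
  }
  return kept;
}
```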

Multilingual Output

When you change the UI language in World Monitor, the World Brief automatically generates summaries in that language. The LLM prompt instructs the model to output in your selected language:
Summarize these global developments in English:
- Ukraine conflict escalates...
- Fed signals rate cut...
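Putting the variant focus and language instruction together, a prompt builder might look like the following sketch. The focus strings come from the variant list above, but the function name and exact prompt wording are hypothetical:

```typescript
// Per-variant emphasis, mirroring the dashboard variants described above.
const VARIANT_FOCUS: Record<string, string> = {
  world: "geopolitical events, conflict escalation, diplomatic shifts",
  tech: "funding rounds, AI breakthroughs, startup news",
  finance: "market movements, central bank signals, economic data",
};

// Build a variant- and language-aware summarization prompt.
function buildPrompt(headlines: string[], variant: string, language: string): string {
  const focus = VARIANT_FOCUS[variant] ?? VARIANT_FOCUS.world;
  return [
    `Summarize these global developments in ${language}.`,
    `Emphasize: ${focus}.`,
    ...headlines.map((h) => `- ${h}`),
  ].join("\n");
}
```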

Configuring Local LLM

To use Ollama or LM Studio for local generation:
  1. Open Settings (Cmd+, or click the gear icon)
  2. Navigate to the LLMs tab
  3. Enter your local endpoint URL:
    • Ollama default: http://localhost:11434
    • LM Studio default: http://localhost:1234
  4. Click Verify & Save
World Monitor will automatically discover available models and populate a dropdown. Embedding-only models are filtered out.
If model discovery fails, a manual text input appears as a fallback. Default fallback model: llama3.1:8b
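Model discovery against Ollama’s `/api/tags` endpoint (a real Ollama API route) could look like this sketch. The name-based embedding filter is a simple heuristic for illustration, not World Monitor’s actual rule:

```typescript
interface OllamaTag {
  name: string;
}

// Query the endpoint for installed models and drop embedding-only
// models (e.g. nomic-embed-text), which cannot generate summaries.
async function discoverModels(endpoint: string): Promise<string[]> {
  const res = await fetch(`${endpoint}/api/tags`);
  if (!res.ok) throw new Error(`model discovery failed: HTTP ${res.status}`);
  const data = (await res.json()) as { models: OllamaTag[] };
  return data.models
    .map((m) => m.name)
    .filter((n) => !/embed/i.test(n)); // heuristic: filter by name
}
```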

Progress Indicators

While generating a summary, World Monitor displays which provider tier is being attempted:
  • “Generating summary (Ollama)…” — Trying local endpoint
  • “Generating summary (Groq)…” — Trying cloud inference
  • “Generating summary (OpenRouter)…” — Trying multi-model fallback
  • “Generating summary (Browser)…” — Using local browser model
If a tier fails or times out, the UI seamlessly advances to the next tier without user interaction.

Privacy Considerations

These guarantees apply when you use a local LLM (Tier 1) or the browser-side model (Tier 4):
  • 100% private — no data leaves your machine
  • No API keys required
  • No internet required (after model download)
The cloud tiers (Groq, OpenRouter) send headline content to those providers for summarization.

API Key Management

API keys for cloud providers are stored securely:
  • Desktop app: OS keychain (macOS Keychain, Windows Credential Manager)
  • Web app: Browser localStorage (encrypted in transit via HTTPS)
Keys are never logged or transmitted to World Monitor servers — they go directly to Groq/OpenRouter.
Never share your API keys. If compromised, regenerate them immediately in your provider dashboard.

Troubleshooting

If no summary appears, all three API tiers are timing out or returning errors:
  1. Check your Settings → LLMs tab to verify Ollama/Groq/OpenRouter are configured
  2. Test your Ollama endpoint: curl http://localhost:11434/api/tags
  3. Verify API keys are valid (click Verify & Save in Settings)
  4. Check browser console for detailed error messages
If your local Ollama endpoint isn’t being used, ensure:
  • Ollama is running: ollama serve
  • The endpoint URL is correct in Settings (default: http://localhost:11434)
  • At least one chat model is installed: ollama list
  • The model isn’t embedding-only (e.g., nomic-embed-text won’t work)
The UI language setting controls summary language. Change it via the language selector in the header:
  1. Click the language flag icon (e.g., 🇬🇧)
  2. Select your preferred language
  3. The page will reload with the new language
  4. Generate a new World Brief — it will now use your language
Summaries are cached for 24 hours. To force regeneration:
  • Wait for new headlines to arrive (cache key includes content hash)
  • Or clear your browser cache and reload
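The composite cache key described under Deduplication & Caching can be sketched as follows. The hash algorithm and key layout are assumptions, but they show why fresh headlines bypass the 24-hour cache automatically:

```typescript
import { createHash } from "node:crypto";

// Composite key: mode, variant, language, plus a hash of the headline
// content. Any change to the headline set changes the hash, so new
// content forces a fresh LLM call without manual invalidation.
function cacheKey(
  mode: string,
  variant: string,
  language: string,
  headlines: string[],
): string {
  const contentHash = createHash("sha256")
    .update(headlines.join("\n"))
    .digest("hex")
    .slice(0, 16);
  return `brief:${mode}:${variant}:${language}:${contentHash}`;
}
```

Identical inputs map to the same key, which is how 1,000 viewers of the same headlines share a single cached summary.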
