Once threat intelligence data has been collected from the configured sources, CyberThreat AI sends it to a large language model for interpretation. The model produces a structured verdict in Spanish, streamed back to the client as Server-Sent Events.

Analysis flow

1. **CTI data collection.** The appropriate analysis function (`analyzeIP`, `analyzeDomain`, or `analyzeHash`) queries the relevant sources and returns an `IocAnalysisResult` containing the raw API responses and any `SourceWarning` values from failed sources.
2. **Prompt construction.** `buildPrompt` assembles a structured system prompt that includes the IoC value, its type, the full `toolResult` JSON, and any warnings. The model is instructed to respond in Spanish and follow a specific Markdown template.
3. **OpenRouter request.** The prompt is sent to `https://openrouter.ai/api/v1/chat/completions` as a streaming chat completion request. Temperature is set to 0.2 and `max_tokens` to 700 to keep responses focused and deterministic.
4. **SSE streaming.** The response body is read chunk by chunk. Each piece of the verdict is forwarded to the client as an SSE `chunk` event as soon as it arrives. A `meta` event is sent first; a `done` event closes the stream.
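Steps 2 and 3 boil down to a single JSON request body. The sketch below shows one plausible shape for it; `buildChatRequest` and the `ChatRequest` interface are illustrative names, while the parameter values (temperature 0.2, `max_tokens` 700, streaming enabled) come from the description above.

```typescript
interface ChatRequest {
  model: string
  messages: { role: 'system' | 'user'; content: string }[]
  temperature: number
  max_tokens: number
  stream: boolean
}

// Hypothetical helper: packages the system prompt from buildPrompt
// into the streaming chat completion body sent to OpenRouter.
function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    model,
    messages: [{ role: 'system', content: prompt }],
    temperature: 0.2, // low temperature keeps verdicts focused and near-deterministic
    max_tokens: 700,  // bounds the length of the Markdown verdict
    stream: true,     // required so chunks can be forwarded as SSE events
  }
}
```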

Verdict format

The model always responds in Spanish using the following Markdown structure:
```markdown
**Veredicto:** <Malicioso|Sospechoso|Benigno>
**Confianza:** <Baja|Media|Alta>

**Resumen:** <short summary of the analysis>

**Motivos:**
- <reason>
- <reason>

**Acciones recomendadas:**
- <action>
- <action>
```
| Field | Possible values | Description |
| --- | --- | --- |
| Veredicto | Malicioso, Sospechoso, Benigno | Overall threat classification. |
| Confianza | Baja, Media, Alta | The model's confidence in the verdict. |
| Resumen | Free text | A short summary of the analysis findings. |
| Motivos | Bullet list | Specific observations that support the verdict. |
| Acciones recomendadas | Bullet list | Recommended response or mitigation actions. |
The verdict is always in Spanish regardless of the language you use to interact with the platform. This is enforced directly in the prompt.
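Because the verdict follows a fixed Markdown template, the structured fields can be extracted with simple pattern matching. The helper below is a hypothetical sketch and not part of CyberThreat AI's API; it only assumes the template shown above.

```typescript
interface ParsedVerdict {
  veredicto: 'Malicioso' | 'Sospechoso' | 'Benigno' | null
  confianza: 'Baja' | 'Media' | 'Alta' | null
}

// Illustrative parser: pulls Veredicto and Confianza out of the
// Spanish Markdown verdict; returns null for fields it cannot find.
function parseVerdict(markdown: string): ParsedVerdict {
  const v = markdown.match(/\*\*Veredicto:\*\*\s*(Malicioso|Sospechoso|Benigno)/)
  const c = markdown.match(/\*\*Confianza:\*\*\s*(Baja|Media|Alta)/)
  return {
    veredicto: (v?.[1] as ParsedVerdict['veredicto']) ?? null,
    confianza: (c?.[1] as ParsedVerdict['confianza']) ?? null,
  }
}
```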

SSE event stream

The server sends events using the text/event-stream content type. Each event follows the format `event: <name>\ndata: <json>\n\n`.

The `meta` event is sent once at the start of the stream, before any content. It contains metadata about the request:

```json
{
  "ioc": "198.51.100.42",
  "type": "IPv4",
  "model": "openrouter/free",
  "warnings": []
}
```

The `warnings` field is omitted when there are no source failures.
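The event format above can be produced by a small serialization helper. The sketch below shows one plausible shape for the `createSseEvent` function referenced in the snippets on this page; the actual implementation in `src/scripts/core/ctai.ts` may differ.

```typescript
// Serializes one SSE event in the format event: <name>\ndata: <json>\n\n.
// The trailing blank line (double newline) terminates the event.
function createSseEvent(name: string, data: unknown): Uint8Array {
  return new TextEncoder().encode(
    `event: ${name}\ndata: ${JSON.stringify(data)}\n\n`
  )
}
```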

Model routing

CyberThreat AI uses OpenRouter as a unified gateway to multiple LLM providers. The model is selected per request from the catalog of allowed models.

Available models

| Model ID | Label | Provider |
| --- | --- | --- |
| openrouter/auto | Default — OpenRouter (Auto) | OpenRouter |
| openrouter/free | OpenRouter (Free) | OpenRouter |
| liquid/lfm-2.5-1.2b-instruct-20260120:free | LiquidAI: LFM2.5-1.2B-Instruct (Free) | Liquid |
| stepfun/step-3.5-flash:free | StepFun: Step 3.5 Flash (Free) | StepFun |
| google/gemma-3-4b-it:free | Google: Gemma 3 4B (Free) | Google AI Studio |

Auto and free routing

When you select openrouter/auto or openrouter/free, OpenRouter itself chooses the actual model to serve the request. The chosen model is reported back in the first streaming chunk via the model field of the parsed payload. CyberThreat AI captures this value and emits a model SSE event so the UI can display the real model name rather than the routing alias.
```typescript
// src/scripts/core/ctai.ts — simplified
if (!routedModelSent && routedModel) {
    routedModelSent = true
    controller.enqueue(createSseEvent('model', {
        model: buildDisplayModel(model, routedModel)
    }))
}
```
If the requested model ID is not in the allowed set, the server silently falls back to openrouter/free before sending the request to OpenRouter.
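The fallback can be sketched as a simple membership check against the allowed set. `ALLOWED_MODELS` and `resolveModel` are illustrative names; the model IDs come from the table above.

```typescript
// Model IDs from the "Available models" table.
const ALLOWED_MODELS = new Set([
  'openrouter/auto',
  'openrouter/free',
  'liquid/lfm-2.5-1.2b-instruct-20260120:free',
  'stepfun/step-3.5-flash:free',
  'google/gemma-3-4b-it:free',
])

// Returns the requested model if allowed, otherwise silently
// falls back to openrouter/free before contacting OpenRouter.
function resolveModel(requested: string): string {
  return ALLOWED_MODELS.has(requested) ? requested : 'openrouter/free'
}
```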
