## Endpoints
- `POST /api/v1/research/optimize_topic` — Rephrase or refine a research topic using an LLM.
- `WS /api/v1/research/run` — Run the full research pipeline over WebSocket.
### POST /api/v1/research/optimize_topic
Rephrase or iteratively refine a research topic. On the first call (`iteration: 0`) it reformulates the raw topic; on subsequent calls it incorporates user feedback.
#### Request body

- The research topic or, on `iteration > 0`, user feedback on the previous result.
- Iteration index: `0` for the first call; increment by one for each feedback round.
- The result object returned by the previous call; required when `iteration > 0`.
- Knowledge base to use for context.
#### Response

- The reformulated topic string.
- Present only if an error occurred.
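A minimal client sketch for this endpoint follows. The JSON key names (`topic`, `iteration`, `previous_result`, `knowledge_base`) and the base URL are assumptions: the schema above describes each field but does not show its exact key.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed host and port

def build_payload(topic, iteration=0, previous_result=None, knowledge_base=None):
    # Key names here are assumptions inferred from the field descriptions above.
    payload = {"topic": topic, "iteration": iteration}
    if iteration > 0:
        payload["previous_result"] = previous_result  # required on feedback rounds
    if knowledge_base is not None:
        payload["knowledge_base"] = knowledge_base
    return payload

def optimize_topic(**fields):
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/research/optimize_topic",
        data=json.dumps(build_payload(**fields)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Response carries the reformulated topic, plus an error field on failure.
        return json.load(resp)
```

On a feedback round, pass the previous response back in along with the user's comment as the topic field.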
### WS /api/v1/research/run
Run the full deep research pipeline. The server streams log lines, progress events, and the final report over the connection.
#### Initial message

Send this JSON object immediately after the connection opens.

- The research topic.
- Knowledge base to use for RAG queries.
- Research depth: one of `quick`, `medium`, `deep`, or `auto`.

| Mode | Subtopics | Max iterations per topic | Subtopic count |
|---|---|---|---|
| quick | 2 | 2 | Fixed |
| medium | 5 | 4 | Fixed |
| deep | 8 | 7 | Fixed |
| auto | Up to 8 | 6 | Flexible |
- Tools to enable during research: any combination of `"RAG"`, `"Paper"`, and `"Web"`. Code execution is always enabled.
- Skip the internal topic rephrasing step.
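The initial-message fields above can be sketched as a JSON payload. The key names (`topic`, `knowledge_base`, `mode`, `tools`, `skip_rephrase`) are assumptions; only the mode and tool values come from the documented field descriptions.

```python
import json

# Assumed key names; the docs describe the fields but not their exact keys.
initial_message = {
    "topic": "Impact of retrieval-augmented generation on hallucination rates",
    "knowledge_base": "papers_kb",     # knowledge base used for RAG queries
    "mode": "auto",                    # quick | medium | deep | auto
    "tools": ["RAG", "Paper", "Web"],  # code execution is always enabled
    "skip_rephrase": False,            # skip the internal topic rephrasing step
}

# Serialize and send as the first WebSocket frame after the connection opens.
wire_frame = json.dumps(initial_message)
```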
#### Streaming messages

- Message type: one of `task_id`, `status`, `log`, `progress`, `result`, or `error`.
- Returned in the `task_id` message. Unique identifier for this run.
- Returned in `status`, `log`, and `error` messages.
- Returned in `status` (`started`) and `result` messages. Identifies this research run.
- Returned in the `result` message. Full Markdown content of the research report.
- Returned in the `result` message. Pipeline metadata including sources and statistics.

#### Example
The `log` messages are captured stdout from the pipeline; any rich terminal formatting has been stripped. Use them for debugging or live status display.
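Putting the streaming contract together, a minimal client loop might look like the following sketch. It assumes the third-party `websockets` package and assumed JSON keys (`type`, `task_id`, `message`, `report`, `metadata`); only the message types themselves come from the list above.

```python
import asyncio
import json

def handle_message(msg, state):
    """Dispatch one streamed message into `state`; return True when the run is done."""
    kind = msg["type"]
    if kind == "task_id":
        state["task_id"] = msg.get("task_id")  # unique identifier for this run
    elif kind in ("status", "log", "progress"):
        state.setdefault("log", []).append(msg.get("message", ""))
    elif kind == "result":
        state["report"] = msg.get("report")      # full Markdown research report
        state["metadata"] = msg.get("metadata")  # sources and statistics
        return True
    elif kind == "error":
        raise RuntimeError(msg.get("message", "research run failed"))
    return False

async def run_research(uri, initial_message):
    import websockets  # pip install websockets

    state = {}
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps(initial_message))  # initial message, then stream
        async for frame in ws:
            if handle_message(json.loads(frame), state):
                break
    return state
```

A real client would render `log` frames live and persist the report from the `result` frame; `asyncio.run(run_research(...))` drives the loop.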