/generate — Generate custom questions based on a knowledge point and type specification.
/mimic — Parse an existing exam paper and generate similar questions.
WS /api/v1/question/generate
Generate one or more practice questions from a knowledge base. The coordinator agent iterates to produce high-quality, validated questions.
Initial message
- Specification for the question(s) to generate.
- Knowledge base to use for grounding the questions.
- Number of questions to generate.
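Putting the fields above together, a minimal initial message might look like the sketch below. The key names (spec, knowledge_base, count) are assumptions for illustration; the spec does not confirm the actual JSON keys.

```python
import json

# Hypothetical key names -- the actual wire format is not confirmed here.
initial_message = {
    "spec": "Multiple-choice question on binary search trees",  # question specification
    "knowledge_base": "cs-data-structures",                     # grounding knowledge base
    "count": 3,                                                  # number of questions
}

# The first frame sent over the WebSocket would be this JSON text.
payload = json.dumps(initial_message)
print(payload)
```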
Streaming messages
- Message type. One of: task_id, status, log, question, token_stats, batch_summary, complete, error.
- Returned in the task_id message.
- Returned in status, log, and error messages.
- Returned in question messages as each question is produced.
- Returned alongside question messages. Quality validation result from the agent.
- Returned in token_stats messages. LLM usage statistics.
- Returned in batch_summary. Number of questions requested.
- Returned in batch_summary. Number of questions successfully generated.
- Returned in batch_summary. Number of failed generation attempts.
Example
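As a sketch of how a client might consume this stream: only the type values come from the list above; the other message fields (task_id, message) are assumptions for illustration, and the stream is simulated rather than read from a live WebSocket.

```python
import json

def handle_message(raw: str) -> str:
    """Dispatch one streamed frame by its type field and return a summary."""
    msg = json.loads(raw)
    kind = msg["type"]
    if kind == "task_id":
        return f"task started: {msg.get('task_id')}"
    if kind in ("status", "log", "error"):
        return f"{kind}: {msg.get('message')}"
    if kind == "question":
        return "received one generated question"
    if kind == "token_stats":
        return "LLM usage update"
    if kind == "batch_summary":
        return "batch finished"
    if kind == "complete":
        return "stream complete"
    return f"unknown message type: {kind}"

# Simulated stream, in the order the endpoint describes:
stream = [
    '{"type": "task_id", "task_id": "abc123"}',
    '{"type": "status", "message": "generating"}',
    '{"type": "question"}',
    '{"type": "complete"}',
]
for raw in stream:
    print(handle_message(raw))
```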
WS /api/v1/question/mimic
Parse an exam paper PDF and generate new questions that mimic its style and content.
Initial message
The message format depends on the mode field.
- mode — Either "upload" (send a PDF directly) or "parsed" (use a pre-parsed directory).
- Knowledge base to use for context.
- Maximum number of questions to generate. Defaults to all questions found in the paper.
mode: "upload":
- Base64-encoded PDF content.
- Original filename, including the .pdf extension.

mode: "parsed":
- Name of the pre-parsed paper directory on the server.
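The two initial-message shapes above can be sketched as follows. Apart from mode and its two values, the key names (file_content, filename, paper_dir) are hypothetical, chosen only to illustrate the structure.

```python
import base64
import json

# mode: "upload" -- the raw PDF bytes are base64-encoded into the message.
pdf_bytes = b"%PDF-1.4 (placeholder file contents)"
upload_message = {
    "mode": "upload",
    "file_content": base64.b64encode(pdf_bytes).decode("ascii"),  # hypothetical key
    "filename": "midterm-2023.pdf",                               # hypothetical key
}

# mode: "parsed" -- reference a directory already parsed on the server.
parsed_message = {
    "mode": "parsed",
    "paper_dir": "midterm-2023",  # hypothetical key
}

print(json.dumps(parsed_message))
```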
Streaming messages
- Message type. One of: status, log, question, complete, error.
- Returned in status messages. Current pipeline stage, e.g. "init", "upload", "parsing", "processing".
- Returned in status, log, and error messages.
Example (upload mode)
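A sketch of the upload-mode exchange, shown as recorded messages rather than a live WebSocket session. The type values and pipeline stage names come from the spec above; every other key (file_content, filename, stage, message) is a hypothetical name used for illustration.

```python
import base64
import json

# Pipeline stages named in the spec for status messages.
MIMIC_STAGES = ("init", "upload", "parsing", "processing")

def describe(msg: dict) -> str:
    """Summarize one streamed mimic message."""
    kind = msg["type"]
    if kind == "status":
        stage = msg.get("stage")
        note = " (known pipeline stage)" if stage in MIMIC_STAGES else ""
        return f"status: {stage}{note}"
    if kind in ("log", "error"):
        return f"{kind}: {msg.get('message')}"
    if kind == "question":
        return "received one mimicked question"
    if kind == "complete":
        return "done"
    return f"unknown: {kind}"

# Initial message sent by the client (hypothetical key names):
sent = json.dumps({
    "mode": "upload",
    "file_content": base64.b64encode(b"%PDF-1.4 (placeholder)").decode("ascii"),
    "filename": "final-exam.pdf",
})

# Messages a client might then receive, in order:
received = [
    {"type": "status", "stage": "parsing"},
    {"type": "question"},
    {"type": "complete"},
]
for msg in received:
    print(describe(msg))
```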
The mimic endpoint uses MinerU to parse the PDF. Parsing time depends on the document length. For large exam papers, expect 30–120 seconds before questions start streaming.