Response Structure
The API returns an HTML page with the following fields rendered in the template:

Input Text
The original text that was analyzed.
Toxicity Results
Toxic: Whether the text is toxic. Returns “Yes” or “No” based on a probability threshold of 0.29.
Severe Toxic: Whether the text contains severely toxic language. Returns “Yes” or “No”.
Obscene: Whether the text is obscene. Returns “Yes” or “No”.
Threat: Whether the text contains threats. Returns “Yes” or “No”.
Insult: Whether the text is insulting. Returns “Yes” or “No”.
Identity Hate: Whether the text contains identity-based hate speech. Returns “Yes” or “No”.
A combined flag that is True only if all six toxicity categories return “Yes”; otherwise False.
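The thresholding described above can be sketched as follows. This is a minimal illustration, not the application's actual code: the category keys and the `scores` dict are assumptions, and only the 0.29 threshold and the all-six combined flag come from this document.

```python
# Sketch: map per-category probabilities to "Yes"/"No" flags at the 0.29
# threshold, then compute the combined flag. Category names are assumed.
THRESHOLD = 0.29
CATEGORIES = ["toxic", "severe_toxic", "obscene",
              "threat", "insult", "identity_hate"]

def to_flags(scores: dict) -> dict:
    """Return "Yes" for each category whose probability meets the threshold."""
    return {c: "Yes" if scores.get(c, 0.0) >= THRESHOLD else "No"
            for c in CATEGORIES}

def all_toxic(flags: dict) -> bool:
    """True only when every one of the six categories is "Yes"."""
    return all(flags[c] == "Yes" for c in CATEGORIES)

flags = to_flags({"toxic": 0.91, "severe_toxic": 0.40, "obscene": 0.75,
                  "threat": 0.31, "insult": 0.66, "identity_hate": 0.30})
print(flags["toxic"], all_toxic(flags))  # every score here clears 0.29
```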
Emotion Analysis
The detected emotion: “Positive” or “Negative”. If all six toxicity flags are “Yes”, the emotion is set to “Negative”; otherwise the emotion classifier predicts the sentiment.
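The override rule above can be sketched as a small function. The emotion classifier itself is not specified in this document, so `classify_sentiment` here is a hypothetical stand-in for it.

```python
# Sketch of the emotion override: if every toxicity flag is "Yes", skip the
# classifier and return "Negative"; otherwise defer to the classifier.
# `classify_sentiment` is a placeholder for the real (unspecified) model.
def detect_emotion(flags: dict, classify_sentiment) -> str:
    if flags and all(v == "Yes" for v in flags.values()):
        return "Negative"
    return classify_sentiment()  # expected to return "Positive" or "Negative"

# Usage: a fully toxic input short-circuits to "Negative".
print(detect_emotion({"toxic": "Yes", "insult": "Yes"}, lambda: "Positive"))
```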
Entity Extraction
Comma-separated list of countries mentioned in the text. Extracted using country detection algorithms.
Comma-separated list of people names identified in the text. Country names and dates are filtered out from this field.
Comma-separated list of dates found in the text.
Array of time ranges found in the text, matching the pattern “HH:MM - HH:MM”. Each element is a tuple of (start_time, end_time).
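The entity post-processing described above can be sketched as follows. The upstream extraction step is not specified in this document, so the candidate entities below are hard-coded assumptions; only two documented rules are illustrated: country names and dates are filtered out of the people field, and time ranges matching “HH:MM - HH:MM” become (start_time, end_time) tuples. The exact regex used by the API is also an assumption.

```python
import re

# Assumed pattern for "HH:MM - HH:MM"; tolerates variable spacing around "-".
TIME_RANGE = re.compile(r"\b(\d{2}:\d{2})\s*-\s*(\d{2}:\d{2})\b")

def filter_people(candidates, countries, dates):
    """Drop any candidate name that was also detected as a country or date."""
    excluded = set(countries) | set(dates)
    return [name for name in candidates if name not in excluded]

def extract_time_ranges(text):
    """Return every (start_time, end_time) tuple found in the text."""
    return TIME_RANGE.findall(text)

people = filter_people(["Alice", "France", "Bob", "12 May 2021"],
                       countries=["France"], dates=["12 May 2021"])
print(", ".join(people))                 # the comma-separated people field
print(extract_time_ranges("Open 09:00 - 17:30 and 18:00 - 20:00."))
```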