Configuration
The application is configured with the following default settings in microservice.py:
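The original settings block appears to have been lost; as a rough sketch, a Flask microservice's defaults often look like the following (every name and value here is an assumption, not taken from the actual microservice.py):

```python
# Illustrative defaults only -- the real values live in microservice.py
# and may differ.
HOST = "0.0.0.0"   # assumption: listen on all network interfaces
PORT = 5000        # assumption: Flask's default port
DEBUG = False      # assumption: debug mode off

# These would typically be passed to Flask's development server:
# app.run(host=HOST, port=PORT, debug=DEBUG)
```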
Deployment Steps
If the required model files don’t exist, you’ll need to train the models first.
Accessing the Application
Once the server is running, you can access the home page at the configured host and port.

API Usage
The application provides a single GET endpoint for emotion prediction.

Endpoint
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | The text to analyze for emotion and toxicity |
Example Request
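The original request example appears to be missing. One way to form the request, assuming the route is the site root and the server listens on localhost:5000 (both are assumptions; check microservice.py for the actual values), is to URL-encode the text parameter:

```python
import urllib.parse

# Assumptions: route "/" and port 5000 -- the real route and port are
# defined in microservice.py.
params = urllib.parse.urlencode({"text": "I love this product"})
url = f"http://localhost:5000/?{params}"
print(url)  # open this in a browser, or fetch it with curl
```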
Response
The endpoint returns an HTML page displaying:

- Emotion: Positive or Negative sentiment
- Toxicity Classification: Yes/No for each category
  - Toxic
  - Severe Toxic
  - Obscene
  - Threat
  - Insult
  - Identity Hate
- Extracted Entities: Countries, people names, dates
- Inappropriate Flag: Overall toxicity indicator
Model Initialization
The application loads models on startup.

Production Considerations
Changing Host and Port
To modify the host and port, edit the configuration in microservice.py:

Using a Production Server
For production deployments, use a WSGI server like Gunicorn instead of Flask’s development server:
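The command itself seems to have been dropped from this section; a typical invocation (the module and app names below are assumptions based on the file name microservice.py) would be:

```shell
# Install and run Gunicorn with 4 worker processes.
# "microservice:app" assumes the Flask app object in microservice.py
# is named "app" -- adjust if yours differs.
pip install gunicorn
gunicorn --workers 4 --bind 0.0.0.0:5000 microservice:app
```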
Performance Optimization
Consider these optimizations for production:
- Load models once at startup (already implemented)
- Use GPU acceleration if available (change torch.device('cpu') to torch.device('cuda'))
- Implement request caching for repeated queries
- Add rate limiting to prevent abuse
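The request-caching idea in the list above can be sketched with the standard library's lru_cache. The predict function and its return value here are placeholders, not the project's actual inference code:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def predict(text: str) -> dict:
    """Stand-in for the real (expensive) model inference."""
    # ... run the emotion and toxicity models here ...
    return {"emotion": "Positive", "inappropriate": False}  # placeholder

predict("great service")  # computed on the first call
predict("great service")  # repeated query served from the cache
```

Because the decorator keys the cache on the text argument, repeated queries skip model inference entirely; maxsize bounds memory use by evicting the least recently used entries.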