This guide covers how to deploy and run the Flask-based emotion prediction microservice.

Configuration

The application is configured with the following default settings in microservice.py:
PORT = 3200
HOST = "127.0.0.1"

Deployment Steps

1. Create Virtual Environment

First, create a virtual environment for the application:

python3 -m venv "text_prediction"

2. Activate Virtual Environment

Activate the virtual environment:

# On Linux/Mac
source text_prediction/bin/activate

# On Windows
text_prediction\Scripts\activate

3. Install Dependencies

Install all required packages:

pip3 install -r requirements.txt

4. Verify Model Files

Ensure all trained model files exist in the models/ directory:

• emotion_classifier.model
• model_26_87.12.pth
• vectorizer2.pickle
• word_dict.json

If these files don't exist, you'll need to train the models first.

5. Start the Flask Server

Run the microservice:

python3 microservice.py

You should see:

Microserver running in port 3200
    

Accessing the Application

Once the server is running, you can access it at:

Home Page:
http://127.0.0.1:3200/

API Endpoint:
http://127.0.0.1:3200/textbased_emotion?text=Your%20text%20here
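Because the text parameter travels in the query string, it must be percent-encoded. A minimal sketch using Python's standard library to build a request URL — the endpoint path comes from this guide, while the helper name `build_request_url` is illustrative:

```python
from urllib.parse import urlencode

BASE_URL = "http://127.0.0.1:3200"  # default HOST/PORT from microservice.py

def build_request_url(text: str) -> str:
    """Percent-encode the text parameter and build the endpoint URL."""
    return f"{BASE_URL}/textbased_emotion?{urlencode({'text': text})}"

print(build_request_url("I am so happy today!"))
# → http://127.0.0.1:3200/textbased_emotion?text=I+am+so+happy+today%21
```

`urlencode` takes care of spaces and punctuation, so raw user input can be passed straight through.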
    

API Usage

The application provides a single GET endpoint for emotion prediction:

Endpoint

GET /textbased_emotion

Parameters

Parameter   Type     Required   Description
text        string   Yes        The text to analyze for emotion and toxicity

Example Request

curl "http://127.0.0.1:3200/textbased_emotion?text=I%20am%20so%20happy%20today!"

Response

The endpoint returns an HTML page displaying:

• Emotion: Positive or Negative sentiment
• Toxicity Classification: Yes/No for each category
  • Toxic
  • Severe Toxic
  • Obscene
  • Threat
  • Insult
  • Identity Hate
• Extracted Entities: Countries, people names, dates
• Inappropriate Flag: Overall toxicity indicator
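For clients that scrape or wrap this response, the fields above can be modeled in code. A hypothetical sketch — the `EmotionResult` class and its field names are illustrative, not part of the service:

```python
from dataclasses import dataclass, field

TOXICITY_CATEGORIES = [
    "Toxic", "Severe Toxic", "Obscene", "Threat", "Insult", "Identity Hate",
]

@dataclass
class EmotionResult:
    """Illustrative container mirroring the fields the HTML page displays."""
    emotion: str                                   # "Positive" or "Negative"
    toxicity: dict                                 # category -> "Yes"/"No"
    entities: list = field(default_factory=list)   # countries, names, dates

    @property
    def inappropriate(self) -> bool:
        # Overall flag: any toxicity category marked "Yes"
        return any(v == "Yes" for v in self.toxicity.values())

result = EmotionResult(
    emotion="Positive",
    toxicity={c: "No" for c in TOXICITY_CATEGORIES},
)
print(result.inappropriate)  # → False
```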

Model Initialization

The application loads models on startup:

import json
import pickle

import torch
import torch.nn as nn

# Load vectorizer and emotion classifier
vectorizer = pickle.load(open('models/vectorizer2.pickle', 'rb'))
emotion_classifier = pickle.load(open('models/emotion_classifier.model', 'rb'))

# Load word dictionary
with open('models/word_dict.json') as json_file:
    word_dict = json.load(json_file)

# Initialize neural network model (base_line is the classifier class
# defined in microservice.py)
embedding = nn.Embedding(len(word_dict), 10)
model = base_line(10, 6)
model.load_state_dict(torch.load('models/model_26_87.12.pth',
                                 map_location=torch.device('cpu')))
model.eval()
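Around the loaded model, two steps are implied: tokens are mapped to indices via word_dict before embedding, and the six-way toxicity output is thresholded into the Yes/No labels shown in responses. A hedged sketch of that logic — the function names, the unknown-word fallback, and the 0.5 threshold are assumptions for illustration, not necessarily what microservice.py does:

```python
TOXICITY_LABELS = ["Toxic", "Severe Toxic", "Obscene", "Threat",
                   "Insult", "Identity Hate"]

def text_to_ids(text, word_dict, unknown_id=0):
    """Map whitespace tokens to indices via word_dict (hypothetical fallback
    to unknown_id for out-of-vocabulary words)."""
    return [word_dict.get(tok, unknown_id) for tok in text.lower().split()]

def scores_to_labels(scores, threshold=0.5):
    """Turn six sigmoid scores into the Yes/No categories shown in responses."""
    return {label: ("Yes" if s > threshold else "No")
            for label, s in zip(TOXICITY_LABELS, scores)}

demo_dict = {"you": 1, "are": 2, "great": 3}   # stand-in for models/word_dict.json
print(text_to_ids("You are great", demo_dict))            # → [1, 2, 3]
print(scores_to_labels([0.9, 0.1, 0.2, 0.0, 0.6, 0.3]))
```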
    

Production Considerations

To modify the host and port, edit the configuration in microservice.py:

PORT = 8080  # Your desired port
HOST = "0.0.0.0"  # Listen on all interfaces

For production deployments, use a WSGI server like Gunicorn instead of Flask's development server:

pip install gunicorn
gunicorn -w 4 -b 127.0.0.1:3200 microservice:app

Consider these optimizations for production:

• Load models once at startup (already implemented)
• Use GPU acceleration if available (change torch.device('cpu') to torch.device('cuda'))
• Implement request caching for repeated queries
• Add rate limiting to prevent abuse
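The caching suggestion can be sketched with Python's standard library alone. Here `predict_emotion` is a stand-in for the real inference pipeline in microservice.py:

```python
from functools import lru_cache

def predict_emotion(text: str) -> str:
    # Placeholder for the actual model inference in microservice.py.
    return "Positive" if "happy" in text.lower() else "Negative"

@lru_cache(maxsize=1024)
def cached_prediction(text: str) -> str:
    """Memoize results so repeated identical queries skip model inference."""
    return predict_emotion(text)

print(cached_prediction("I am so happy today!"))  # first call runs the model
print(cached_prediction("I am so happy today!"))  # second call is served from cache
```

Because the endpoint is a GET with a single text parameter, identical queries map to identical cache keys, making this a cheap win for repeated traffic.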

Next Steps

Learn about the entity extraction features available in the API.
