Overview
The test_model.py script loads your trained model, activates your webcam, extracts facial landmarks from each frame, and displays the predicted emotion on screen.
Prerequisites
Before testing, ensure you have:
- Completed model training
- Generated the model file with satisfactory accuracy
- A working webcam connected to your computer
- Good lighting conditions for face detection
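These prerequisites can be sanity-checked programmatically. The sketch below assumes the trained model was pickled to ./model (a hypothetical path; adjust it to whatever your training script produces):

```python
import os

def check_prerequisites(model_path='./model', camera_index=0):
    """Return a list of problems that would prevent running the test."""
    problems = []
    # The './model' path is an assumption; use your training script's output.
    if not os.path.exists(model_path):
        problems.append(f'trained model not found at {model_path}')
    try:
        import cv2  # imported lazily so the check degrades gracefully
        cap = cv2.VideoCapture(camera_index)
        if not cap.isOpened():
            problems.append(f'camera {camera_index} could not be opened')
        cap.release()
    except ImportError:
        problems.append('OpenCV (cv2) is not installed')
    return problems
```

An empty list means you are ready to run the test.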
Running the Test
Position yourself
- Face the camera directly
- Ensure good lighting on your face
- Stay a normal distance from the camera
Test emotions
Try different facial expressions (happy, sad) and observe the predictions in real time.
How It Works
Loading the Trained Model
The script loads the model using pickle.
Webcam Initialization
The webcam is initialized using OpenCV with the DirectShow backend. cv2.CAP_DSHOW is the DirectShow backend for Windows; on Linux/Mac, the default backend is used automatically.
Real-Time Processing Loop
The main loop captures frames, detects faces, and predicts emotions.
Understanding the Output
Visual Elements
- Webcam Feed: Live video from your camera
- Facial Landmarks: 68 points drawn on detected faces
- Emotion Label: Text displaying “HAPPY” or “SAD” in the bottom-left corner
- Green Text: Indicates successful emotion detection
Prediction Process
Testing Best Practices
Lighting
Test in well-lit conditions similar to your training images.
Position
Face the camera directly with your full face visible.
Distance
Maintain a normal distance (0.5-1 meter from camera).
Expressions
Try exaggerated expressions initially to verify detection.
Troubleshooting
“No se encontró el modelo entrenado” (trained model not found)
Problem: The model file doesn’t exist.
Solution: Train the model first.
“No se pudo abrir la cámara” (could not open the camera)
Problem: Webcam cannot be accessed.
Solution:
- Ensure your webcam is connected and not in use by another application
- Try a different camera index (0, 1, or 2)
- Check camera permissions in your OS settings
- On Linux, ensure your user has access to /dev/video0
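To find a usable index, you can probe the first few and keep whichever opens. This is a sketch; the open_fn parameter is only there so the helper can be exercised without a physical camera:

```python
def find_working_camera(max_index=3, open_fn=None):
    """Return the first camera index in [0, max_index) that opens, else None."""
    if open_fn is None:
        import cv2  # imported lazily so the helper is testable without OpenCV
        open_fn = cv2.VideoCapture
    for index in range(max_index):
        cap = open_fn(index)
        if cap.isOpened():
            cap.release()
            return index
    return None
```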
No Face Detection
Problem: The system doesn’t detect your face.
Solution:
- Improve lighting conditions
- Move closer to the camera
- Ensure your face is directly facing the camera
- Remove glasses or face coverings if possible
- Check that get_face_landmarks() is working correctly
Wrong Predictions
Problem: The model consistently predicts the wrong emotion.
Solution:
- Retrain with more data: Add 50-100 more images per emotion
- Verify training accuracy: Check that training accuracy was >80%
- Check lighting: Ensure test conditions match training image lighting
- Try exaggerated expressions: Start with very clear happy/sad expressions
Inconsistent Predictions
Problem: Predictions fluctuate rapidly between emotions.
Solution:
- This is normal for borderline expressions (neutral faces)
- Try more distinct facial expressions
- Improve model accuracy by adding more training data
- Consider implementing prediction smoothing (averaging the last N predictions)
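Prediction smoothing can be sketched as a short history with a majority vote (the window size here is an arbitrary choice):

```python
from collections import Counter, deque

class PredictionSmoother:
    """Majority vote over the last N predictions to damp flicker."""
    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def update(self, label):
        """Record the newest prediction and return the smoothed label."""
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]

smoother = PredictionSmoother(window=5)
for label in ['HAPPY', 'HAPPY', 'SAD', 'HAPPY', 'SAD']:
    smoothed = smoother.update(label)
print(smoothed)  # → HAPPY (3 of the last 5)
```

Call update() once per frame with the raw prediction and display the returned label instead.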
Low Frame Rate
Problem: Video is laggy or slow.
Solution:
- Reduce the webcam resolution
- Optimize get_face_landmarks() performance
- Ensure static_image_mode=False for faster processing
- Use a more powerful CPU
Camera Index Issues (Multiple Cameras)
Problem: The wrong camera is being used.
Solution: Change the camera index passed to cv2.VideoCapture in the script.
Advanced Testing Tips
Adding Prediction Confidence
The script can be modified to display the prediction probability alongside the emotion label.
Recording Test Sessions
Save test footage for later analysis.
Using Different Backends
For Linux/Mac, omit the DirectShow flag so OpenCV selects the default backend.
Performance Metrics
Typical performance on modern hardware:
| Metric | Expected Value |
|---|---|
| Frame Rate | 30-40 FPS |
| Detection Latency | <50ms per frame |
| Prediction Time | <10ms per prediction |
| CPU Usage | 20-40% (single core) |
Next Steps
Once you’ve verified the model works correctly:
- Improve Accuracy: Add more training data if predictions are inconsistent
- Deploy to Web: Integrate the model into the EmoChat web application
- Add More Emotions: Expand beyond happy/sad to include anger, surprise, etc.
- Optimize Performance: Fine-tune model parameters for faster inference
The testing phase validates that your entire training pipeline works correctly before deploying to production.

