Welcome to EmoChat
EmoChat is an AI-powered emotion recognition system that analyzes facial expressions in real time to detect and understand human emotions. Built with OpenCV, scikit-learn, and Google Gemini AI, it provides empathetic feedback and emotional insights.
Key Features
Real-time Detection
Analyze emotions through your webcam with instant facial landmark detection
Machine Learning
Random Forest classifier trained on facial expression patterns
Privacy-Focused
No data is stored; all processing happens locally in real time
Empathetic AI
Google Gemini AI provides supportive emotional analysis
What You Can Do
EmoChat enables you to:
- Detect Emotions: Recognize Happy and Sad emotions from facial expressions
- Track Sessions: Record 30-second emotion tracking sessions
- Get Insights: Receive empathetic feedback from AI based on emotional patterns
- Train Custom Models: Prepare and train your own emotion recognition models
- Build Applications: Integrate emotion detection into web applications
How It Works
Facial Landmark Detection
OpenCV detects faces with a Haar Cascade classifier and extracts 68 facial landmark points using a pre-trained LBF (Local Binary Features) model
Feature Normalization
Landmark coordinates are normalized relative to face boundaries for consistency
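The normalization step can be sketched as follows; `face_rect` is assumed to be the `(x, y, w, h)` bounding box returned by the face detector:

```python
import numpy as np


def normalize_landmarks(landmarks, face_rect):
    """Scale (x, y) landmark coordinates into [0, 1] relative to the face box.

    landmarks: array-like of shape (n_points, 2)
    face_rect: (x, y, w, h) bounding box from the face detector
    """
    x, y, w, h = face_rect
    pts = np.asarray(landmarks, dtype=float)
    normalized = (pts - [x, y]) / [w, h]
    return normalized.flatten()  # flat feature vector for the classifier


# A point at the top-left corner of the face box maps to (0, 0),
# and the bottom-right corner maps to (1, 1).
features = normalize_landmarks(
    [[10, 20], [110, 220]], face_rect=(10, 20, 100, 200))
# → array([0., 0., 1., 1.])
```

Normalizing this way makes the feature vector invariant to where the face sits in the frame and how large it appears.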
Use Cases
Mental Health Support
Track emotional patterns during therapy sessions or personal reflection exercises
User Experience Research
Analyze user reactions to products, interfaces, or content in real-time
Education & Training
Learn about computer vision, machine learning, and emotion recognition systems
Accessibility Tools
Build applications that respond to users’ emotional states
Quick Start
Get up and running in minutes:
Quickstart Guide
Launch the web app and start detecting emotions
Installation
Install dependencies and set up your environment
Train Your Model
Prepare data and train custom emotion recognition models
API Reference
Explore the core functions and HTTP endpoints
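Training a custom model boils down to fitting scikit-learn's `RandomForestClassifier` on normalized landmark feature vectors. The data below is a stand-in (random features, illustrative 0 = Sad / 1 = Happy labels); in practice the features come from your own labeled captures:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 200 samples of 136 features (68 normalized (x, y) landmarks).
X = rng.random((200, 136))
y = rng.integers(0, 2, size=200)  # 0 = Sad, 1 = Happy (illustrative labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # near chance on random data
```

On real landmark features the same pipeline learns the expression patterns that separate the two classes.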
Technology Stack
EmoChat is built with:
- OpenCV: Face detection and landmark extraction
- scikit-learn: Random Forest machine learning classifier
- Flask: Web server and REST API
- Google Gemini AI: Empathetic emotional analysis
- JavaScript: Real-time webcam integration
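A minimal sketch of how Flask ties the stack together as a REST API. The `/analyze` route name and the 136-feature payload shape are assumptions for illustration, and a placeholder rule stands in for the trained classifier so the sketch is self-contained:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/analyze", methods=["POST"])  # hypothetical endpoint name
def analyze():
    """Accept a JSON payload of normalized landmarks and return a label."""
    payload = request.get_json(force=True)
    landmarks = payload.get("landmarks", [])
    if len(landmarks) != 136:
        return jsonify(error="expected 136 landmark features"), 400
    label = "Happy"  # placeholder; replace with clf.predict(...)
    return jsonify(emotion=label)

# To serve locally: app.run(port=5000)
```

The browser-side JavaScript would capture webcam frames, extract and normalize landmarks, and POST the feature vector to this endpoint.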
All emotion processing happens locally on your machine. No facial data is stored or transmitted to external servers (except when using the optional Gemini AI analysis feature).
Community & Support
- GitHub Repository: Dani-zm/emotion-recognition-ai
- Issues: Report bugs or request features on GitHub
- Documentation: Comprehensive guides and API reference
