Posture detection is powered by TensorFlow.js and the MoveNet model, which detects human body keypoints in real time from webcam video. The system tracks the vertical position of the user’s right eye and compares it to a baseline “good posture” position.
SINGLEPOSE_LIGHTNING is optimized for speed over accuracy, making it ideal for real-time browser-based pose detection. It detects 17 keypoints including eyes, nose, shoulders, elbows, wrists, hips, knees, and ankles.
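For reference, MoveNet returns its 17 keypoints in the standard COCO ordering, which is why index 2 (used throughout for posture tracking) is the right eye. A minimal sketch of that ordering:

```typescript
// MoveNet keypoint ordering (COCO format); index 2 is the right eye,
// which the posture tracking code below relies on.
const MOVENET_KEYPOINTS = [
  'nose',           // 0
  'left_eye',       // 1
  'right_eye',      // 2
  'left_ear',       // 3
  'right_ear',      // 4
  'left_shoulder',  // 5
  'right_shoulder', // 6
  'left_elbow',     // 7
  'right_elbow',    // 8
  'left_wrist',     // 9
  'right_wrist',    // 10
  'left_hip',       // 11
  'right_hip',      // 12
  'left_knee',      // 13
  'right_knee',     // 14
  'left_ankle',     // 15
  'right_ankle',    // 16
] as const;

console.log(MOVENET_KEYPOINTS.length); // 17
console.log(MOVENET_KEYPOINTS[2]);     // 'right_eye'
```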
Analyzes detected keypoints and determines if posture is good or bad:
```ts
const handlePose = async (poses: { keypoints: { y: number }[] }[]) => {
  try {
    // Track the right eye position (keypoint index 2)
    const rightEyePosition = poses[0]?.keypoints[2]?.y;
    if (rightEyePosition == null) return;
    currentPosturePosition.current = rightEyePosition;

    // Set the baseline on first detection
    if (GOOD_POSTURE_POSITION.current == null) {
      handlePosture({ baseline: currentPosturePosition.current });
    }

    const deviation = Math.abs(
      currentPosturePosition.current - GOOD_POSTURE_POSITION.current
    );

    // Check whether the current position exceeds the deviation threshold
    if (deviation > GOOD_POSTURE_DEVIATION.current) {
      handlePosture({ posture: 'bad' });
    } else {
      // Within the acceptable range
      handlePosture({ posture: 'good' });
    }
  } catch (error) {
    console.error(error);
  }
};
```
Detection Logic:
Uses keypoint index 2 (right eye) for posture tracking
First detection sets the baseline “good posture” position
Calculates absolute difference between current and baseline position
Default deviation threshold: 25 pixels
Sends posture status (‘good’ or ‘bad’) to background script
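The detection steps above can be condensed into a pure function. This is an illustrative sketch, not the extension’s actual API; `classifyPosture` and its parameter names are hypothetical:

```typescript
// Hypothetical sketch of the threshold logic described above.
// Returns 'good' when the eye's Y position stays within the deviation
// threshold of the baseline, 'bad' otherwise.
function classifyPosture(
  currentY: number,
  baselineY: number,
  threshold = 25 // default deviation threshold in pixels
): 'good' | 'bad' {
  return Math.abs(currentY - baselineY) > threshold ? 'bad' : 'good';
}

console.log(classifyPosture(310, 300)); // 10px from baseline → 'good'
console.log(classifyPosture(340, 300)); // 40px from baseline → 'bad'
```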
The right eye was chosen as the tracking point because it’s reliably detected by MoveNet and provides a stable reference for head position. As users slouch, their head typically moves down, increasing the Y-coordinate of the eye position.
drawGoodPostureHeight() - Draw baseline and current position lines
Renders body keypoints with color-coded posture feedback:
```ts
export function drawKeypoints(
  keypoints: any,
  ctx: any,
  currentGoodPostureHeight: any
) {
  const currentPostureHeight = keypoints[2].y;
  const delta = currentPostureHeight - currentGoodPostureHeight;
  const keypointInd = poseDetection.util.getKeypointIndexBySide(
    poseDetection.SupportedModels.MoveNet
  );

  // Green for good posture, red for bad
  ctx.fillStyle = 'rgba(0, 255, 0, 0.9)';
  if (Math.abs(delta) > 25) {
    ctx.fillStyle = 'rgba(255, 0, 0, 0.9)';
  }

  // Draw middle, left, and right keypoints
  for (const i of keypointInd.middle) {
    drawKeypoint(keypoints[i], ctx);
  }
  for (const i of keypointInd.left) {
    drawKeypoint(keypoints[i], ctx);
  }
  for (const i of keypointInd.right) {
    drawKeypoint(keypoints[i], ctx);
  }
}
```
Features:
Color changes based on posture deviation
Green = good posture
Red = deviation > 25 pixels
Draws all detected keypoints as circles (4px radius)
Visualizes the baseline and current eye position, filling the area between the two lines:
```ts
export function drawGoodPostureHeight(
  keypoints: any,
  ctx: any,
  currentGoodPostureHeight: number
) {
  const currentPostureHeight = keypoints[2].y;
  const delta = currentPostureHeight - currentGoodPostureHeight;

  // Draw the baseline (good posture line)
  ctx.strokeStyle = '#fff';
  ctx.lineWidth = 1;
  ctx.beginPath();
  ctx.moveTo(0, currentGoodPostureHeight);
  ctx.lineTo(600, currentGoodPostureHeight);
  ctx.stroke();

  // Draw the current position line
  ctx.beginPath();
  ctx.moveTo(0, currentPostureHeight);
  ctx.lineTo(800, currentPostureHeight);
  ctx.stroke();

  // Fill the area between the lines (green/red based on deviation)
  ctx.fillStyle = 'rgba(0, 255, 0, 0.5)';
  if (Math.abs(delta) > 25) {
    ctx.fillStyle = 'rgba(255, 0, 0, 0.5)';
  }
  ctx.fillRect(0, currentGoodPostureHeight, 800, delta);
}
```
Each keypoint includes x and y coordinates and a score (a confidence value between 0 and 1). Only keypoints with a score >= 0.3 are considered reliable and drawn on the canvas.
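That filtering step can be sketched as a small predicate; the helper name `isReliable` is hypothetical, and the keypoint shape shown is the one TensorFlow.js pose detection returns:

```typescript
interface Keypoint {
  x: number;
  y: number;
  score?: number; // confidence, 0 to 1
}

const MIN_SCORE = 0.3;

// Keep only keypoints detected with enough confidence to draw.
function isReliable(kp: Keypoint): boolean {
  return (kp.score ?? 0) >= MIN_SCORE;
}

const sample: Keypoint[] = [
  { x: 120, y: 80, score: 0.95 }, // clearly visible keypoint
  { x: 300, y: 400, score: 0.1 }, // low-confidence (e.g. occluded) keypoint
];
console.log(sample.filter(isReliable).length); // 1
```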