Model overview
The face spoof detection system uses two TensorFlow Lite models that analyze the same face at different scales:
- spoof_model_scale_2_7.tflite - analyzes faces at 2.7x scale
- spoof_model_scale_4_0.tflite - analyzes faces at 4.0x scale
Model specifications
Input requirements
Input shape: [1, 80, 80, 3] - an 80x80 RGB image
Preprocessing:
- Crop face using bounding box scaled by respective scale factor (2.7 or 4.0)
- Resize cropped face to 80x80 pixels
- Convert RGB to BGR color format
- Cast to FLOAT32 data type
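The preprocessing steps above can be sketched in Python. This is an illustrative sketch, not the app's Kotlin implementation: the bounding-box expansion (scaling around the box center, clamped to the image) and the nearest-neighbor resize are assumptions, and the function names are hypothetical.

```python
import numpy as np

def scale_bbox(x, y, w, h, scale, img_w, img_h):
    """Expand a face bounding box around its center by `scale`, clamped to the image.
    (Assumed behavior of the 'scaled by respective scale factor' step.)"""
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * scale, h * scale
    x0 = max(0, int(cx - nw / 2)); y0 = max(0, int(cy - nh / 2))
    x1 = min(img_w, int(cx + nw / 2)); y1 = min(img_h, int(cy + nh / 2))
    return x0, y0, x1, y1

def preprocess(rgb_image, bbox, scale):
    """Crop at the given scale, resize to 80x80, swap RGB->BGR, cast to float32."""
    h, w, _ = rgb_image.shape
    x0, y0, x1, y1 = scale_bbox(*bbox, scale, w, h)
    crop = rgb_image[y0:y1, x0:x1]
    # Nearest-neighbor resize to 80x80 (a real pipeline would likely use bilinear).
    ys = (np.arange(80) * crop.shape[0] / 80).astype(int)
    xs = (np.arange(80) * crop.shape[1] / 80).astype(int)
    resized = crop[ys][:, xs]
    bgr = resized[:, :, ::-1]                       # RGB -> BGR channel swap
    return bgr.astype(np.float32)[np.newaxis, ...]  # shape [1, 80, 80, 3]
```

Called once per model (with scale 2.7 and 4.0), this yields the two [1, 80, 80, 3] float32 tensors the models expect.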
Output specifications
Output shape: [1, 3] - a 3-class probability distribution
Classes:
- Class 0: Spoof type 1
- Class 1: Real face
- Class 2: Spoof type 2
A face is classified as real only if the final prediction is class 1 after combining outputs from both models.
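The decision rule can be shown in a few lines. A minimal sketch, assuming the fused output is a plain 3-element probability list; the function name is hypothetical.

```python
REAL_CLASS = 1  # class 1 = real face; classes 0 and 2 are spoof types

def classify(fused_probs):
    """Map the fused 3-class probability vector to a real/spoof decision."""
    predicted = max(range(len(fused_probs)), key=lambda i: fused_probs[i])
    return predicted == REAL_CLASS

# classify([0.1, 0.8, 0.1]) -> True (real)
# classify([0.6, 0.3, 0.1]) -> False (spoof)
```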
How it works
The spoof detection algorithm follows these steps:
- Dual-scale cropping: The face bounding box is scaled by 2.7x and 4.0x separately, and two cropped images are created at 80x80 resolution.
- Color space conversion: Both images are converted from RGB to BGR format to match the model’s training data.
- Inference: Each model processes its respective scaled input and outputs a 3-element probability vector.
- Softmax normalization: Both output vectors are normalized using softmax.
- Fusion: The normalized outputs are averaged element-wise to produce the final prediction.
- Classification: The class with the highest average probability determines the result:
- If class 1 has the highest score → Real face
- Otherwise → Spoof detected
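The softmax, fusion, and classification steps above can be sketched as follows. This is an illustrative sketch of the described math using plain Python lists, not the app's Kotlin code; the function names are hypothetical.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a model's raw 3-element output."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(out_2_7, out_4_0):
    """Softmax-normalize each model's output, then average element-wise."""
    p1, p2 = softmax(out_2_7), softmax(out_4_0)
    return [(a + b) / 2 for a, b in zip(p1, p2)]

def is_real(out_2_7, out_4_0):
    """True if class 1 (real face) wins after fusion, False otherwise."""
    fused = fuse(out_2_7, out_4_0)
    return max(range(3), key=lambda i: fused[i]) == 1
```

Because each softmax output sums to 1, the element-wise average is itself a valid probability distribution over the three classes.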
Implementation details
The spoof detection is implemented in FaceSpoofDetector.kt with the following key components:
Model loading
Detection result
The detectSpoof() method returns a FaceSpoofResult object:
Model source
The original models are from the Silent-Face-Anti-Spoofing repository, which provides PyTorch-based face anti-spoofing using the MiniFASNet architecture.
Conversion process
The PyTorch model weights were converted to TensorFlow Lite format via ONNX. The conversion process is documented in the project's Jupyter notebook: Liveness_PT_Model_to_TF.ipynb
The conversion ensures compatibility with the TensorFlow Lite runtime already used for FaceNet, avoiding the need to include additional deep learning frameworks in the app.
Model architecture
MiniFASNet (Fast Anti-Spoofing Network) is a lightweight CNN architecture designed for efficient on-device face anti-spoofing. Key features:
- Multi-scale analysis: Uses different scales to capture both local details and global context
- Fourier-based loss: During training, the model is penalized for both classification error and differences between Fourier transforms of intermediate CNN features
- Lightweight design: Optimized for mobile deployment with minimal computational overhead
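The Fourier-based auxiliary loss can be illustrated conceptually. This is a simplified sketch of the idea only, not the Silent-Face-Anti-Spoofing training code: the target spectrum, the mean-squared comparison, and the weighting coefficient are all assumptions made for illustration.

```python
import numpy as np

def fourier_aux_loss(features, target_spectrum):
    """Conceptual auxiliary term: penalize the difference between the
    magnitude spectrum of an intermediate feature map and a target spectrum."""
    spec = np.abs(np.fft.fft2(features))
    return float(np.mean((spec - target_spectrum) ** 2))

def total_loss(ce_loss, features, target_spectrum, weight=0.1):
    """Classification loss plus weighted Fourier term.
    `weight` is a hypothetical balancing coefficient."""
    return ce_loss + weight * fourier_aux_loss(features, target_spectrum)
```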
Performance considerations
- Both models run in parallel during inference
- Total inference time typically ranges from 10 to 30 ms on modern Android devices
- GPU acceleration can be enabled for faster processing
- The dual-scale approach provides better accuracy than single-scale methods
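Running both models in parallel can be sketched as follows. A minimal sketch using Python threads with the two interpreters represented as plain callables; the Kotlin implementation would typically use coroutines or threads, and the names here are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(model_2_7, model_4_0, input_2_7, input_4_0):
    """Submit both model invocations concurrently and collect their raw outputs."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(model_2_7, input_2_7)
        f2 = pool.submit(model_4_0, input_4_0)
        return f1.result(), f2.result()
```

Since each model has its own input tensor and interpreter, the two inferences are independent and the wall-clock latency approaches that of the slower model rather than the sum of both.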