
Prerequisites

Before building FaceNet Android from source, ensure you have:

Android Studio

Android Studio Hedgehog (2023.1.1) or newer. Download from developer.android.com.

Android SDK

Android SDK with API level 26+ (Android 8.0). Minimum SDK: 26; Target SDK: 34.

JDK

Java Development Kit 8 or higher. Usually bundled with Android Studio.

Git

Git version control, for cloning the repository.

Download options

Option 1: Download pre-built APK

The simplest way to install FaceNet Android is to download the APK:
  1. Visit the GitHub Releases page
  2. Download the latest app-release.apk file
  3. Transfer it to your Android device
  4. Enable installation from unknown sources in your device settings
  5. Tap the APK file to install
The APK is signed with a release keystore. You may see a warning from Google Play Protect since it’s not distributed through the Play Store.

Option 2: Build from source

For developers who want to modify the app or understand its internals:
Step 1: Clone the repository

git clone --depth=1 https://github.com/shubham0204/OnDevice-Face-Recognition-Android
cd OnDevice-Face-Recognition-Android
The --depth=1 flag creates a shallow clone to save bandwidth.
Step 2: Open in Android Studio

  1. Launch Android Studio
  2. Select File > Open
  3. Navigate to the cloned directory and click OK
Step 3: Wait for Gradle sync

Android Studio will automatically:
  • Download Gradle dependencies
  • Download TensorFlow Lite libraries
  • Download ObjectBox plugins
  • Configure the build system
This may take several minutes on the first run.
Step 4: Build and run

  1. Connect an Android device via USB (with USB debugging enabled) or start an emulator
  2. Click the Run button (green play icon) or press Shift + F10
  3. Select your device from the deployment target dialog
The app will compile and install automatically.

Configuration

FaceNet Android offers several configuration options to optimize performance and accuracy.

Choosing the FaceNet model

The app includes two FaceNet models with different embedding dimensions:
  • facenet.tflite: 128-dimensional embeddings (smaller, faster)
  • facenet_512.tflite: 512-dimensional embeddings (more accurate, default)
To switch models:
Step 1: Edit FaceNet.kt

Open app/src/main/java/com/ml/shubham0204/facenet_android/domain/embeddings/FaceNet.kt
Step 2: Change the model path

Modify line 62 to load your preferred model:
// For 128-dimensional embeddings
interpreter = Interpreter(
    FileUtil.loadMappedFile(context, "facenet.tflite"),
    interpreterOptions
)

// For 512-dimensional embeddings (default)
interpreter = Interpreter(
    FileUtil.loadMappedFile(context, "facenet_512.tflite"),
    interpreterOptions
)
Step 3: Update the embedding dimension

Change the embeddingDim value at line 34:
// For facenet.tflite
private val embeddingDim = 128

// For facenet_512.tflite
private val embeddingDim = 512
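A quick guard can catch a model/dimension mismatch early instead of letting it surface later as a confusing database error. This is a hypothetical helper, not part of the app's codebase:

```kotlin
// Hypothetical guard (not in the app): fail fast when a model output's
// length doesn't match the configured embedding dimension.
fun validateEmbedding(embedding: FloatArray, embeddingDim: Int): FloatArray {
    require(embedding.size == embeddingDim) {
        "Model produced a ${embedding.size}-dim embedding but embeddingDim is " +
            "$embeddingDim; update embeddingDim and @HnswIndex(dimensions = ...) together."
    }
    return embedding
}
```

Calling this on each model output would immediately flag a switch to facenet.tflite that wasn't mirrored in embeddingDim.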
Step 4: Update the database schema

Open app/src/main/java/com/ml/shubham0204/facenet_android/data/DataModels.kt and modify the @HnswIndex dimensions:
@Entity
data class FaceImageRecord(
    @Id var recordID: Long = 0,
    @Index var personID: Long = 0,
    var personName: String = "",
    // Change dimensions to match your model
    @HnswIndex(dimensions = 128)  // or 512
    var faceEmbedding: FloatArray = floatArrayOf(),
)
Step 5: Clean and rebuild

In Android Studio, select Build > Clean Project, then Build > Rebuild Project to regenerate the ObjectBox schema.
Changing the embedding dimension requires clearing the app data or uninstalling and reinstalling to reset the database schema.

Enable exact (flat) search

By default, ObjectBox uses HNSW (Hierarchical Navigable Small World) indexing for approximate nearest-neighbor search. This is fast but may miss the true nearest neighbor, especially with larger datasets. To enable exact flat search:
Step 1: Edit FaceDetectionOverlay.kt

Open app/src/main/java/com/ml/shubham0204/facenet_android/presentation/components/FaceDetectionOverlay.kt
Step 2: Set flatSearch to true

Modify line 44:
private val flatSearch: Boolean = true  // Changed from false
Step 3: Rebuild and run

The app will now compute exact cosine similarity for all database entries.
Flat search triggers a linear scan across all records, which is slower but guarantees finding the true nearest neighbor. The implementation uses 4 parallel coroutines to speed up the search.
Performance comparison:
Search method  | Speed  | Accuracy      | Best for
HNSW (default) | Fast   | ~95% accurate | Large databases (>1000 faces)
Flat search    | Slower | 100% accurate | Small databases (<500 faces)
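Conceptually, flat search reduces to scoring every stored embedding with cosine similarity and keeping the best match above the threshold. A minimal, framework-free sketch (illustrative names only; the app's real implementation lives in ImageVectorUseCase.kt and ObjectBox):

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two equal-length, non-zero embeddings.
fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    var dot = 0f
    var normA = 0f
    var normB = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Flat (exact) search: linear scan over all records, returning the
// best-scoring name whose similarity exceeds the threshold, else null.
fun flatSearch(
    query: FloatArray,
    records: List<Pair<String, FloatArray>>,
    threshold: Float = 0.3f,
): String? =
    records
        .map { (name, embedding) -> name to cosineSimilarity(query, embedding) }
        .filter { it.second > threshold }
        .maxByOrNull { it.second }
        ?.first
```

Unlike HNSW, this scan touches every record on every query, which is why the app parallelizes the work across coroutines.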

Choose face detection method

FaceNet Android supports two face detection backends:
  • MLKit (default): Google’s optimized face detection for Android
  • Mediapipe: Cross-platform face detection using BlazeFace
To switch between them:
Step 1: Edit AppModule.kt

Open app/src/main/java/com/ml/shubham0204/facenet_android/di/AppModule.kt
Step 2: Change the isMLKit flag

Modify line 15:
// For MLKit (default)
private var isMLKit = true

// For Mediapipe
private var isMLKit = false
Step 3: Rebuild and run

The app will use the selected face detector for all operations.
Comparison:
Detector  | Performance | Accuracy | Model size
MLKit     | Fast        | High     | Included in Google Play Services
Mediapipe | Moderate    | High     | BlazeFace short-range model (~1 MB)
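The isMLKit flag is a simple boolean dispatch between the two backends. Sketched in isolation (the interface and class names here are illustrative; the real wiring is done with Koin in AppModule.kt):

```kotlin
// Illustrative sketch of flag-based detector selection; the app's
// actual types and Koin wiring in AppModule.kt differ.
interface FaceDetectorBackend {
    val name: String
}

class MLKitBackend : FaceDetectorBackend {
    override val name = "MLKit"
}

class MediapipeBackend : FaceDetectorBackend {
    override val name = "Mediapipe"
}

fun provideDetector(isMLKit: Boolean): FaceDetectorBackend =
    if (isMLKit) MLKitBackend() else MediapipeBackend()
```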

Adjust recognition threshold

The cosine similarity threshold determines how strict face matching is. The default threshold is 0.3. To modify it:
  1. Open app/src/main/java/com/ml/shubham0204/facenet_android/domain/ImageVectorUseCase.kt
  2. Find line 93 and change the threshold value:
if (distance > 0.4) {  // Increased from 0.3 for stricter matching
    faceRecognitionResults.add(
        FaceRecognitionResult(recognitionResult.personName, boundingBox, spoofResult),
    )
}
Threshold guide:
  • < 0.3: Lenient; a lower threshold accepts weaker matches (more false positives)
  • 0.3 - 0.4: Balanced (recommended)
  • > 0.4: Strict; a higher threshold may reject valid matches (fewer false positives, more false negatives)
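To see the effect concretely, here is a small illustration. The comparison direction matches the `distance > threshold` check shown above, where the compared value is a cosine similarity (higher means more similar):

```kotlin
// A face pair with cosine similarity 0.35 is accepted at the default
// threshold (0.3) but rejected at a stricter one (0.4).
fun isMatch(similarity: Float, threshold: Float): Boolean = similarity > threshold

fun main() {
    val similarity = 0.35f
    println(isMatch(similarity, 0.3f))  // true: balanced threshold accepts
    println(isMatch(similarity, 0.4f))  // false: stricter threshold rejects
}
```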

Build configuration

Key build settings from app/build.gradle.kts:
android {
    namespace = "com.ml.shubham0204.facenet_android"
    compileSdk = 34

    defaultConfig {
        applicationId = "com.ml.shubham0204.facenet_android"
        minSdk = 26  // Android 8.0
        targetSdk = 34
        versionCode = 1
        versionName = "0.0.1"
    }
}

Key dependencies

The app uses these major libraries:
// TensorFlow Lite for model inference
implementation("org.tensorflow.lite:tensorflow-lite:2.x.x")
implementation("org.tensorflow.lite:tensorflow-lite-gpu:2.x.x")

// ObjectBox for vector database
implementation("io.objectbox:objectbox-android:4.0.0")

// MLKit for face detection
implementation("com.google.mlkit:face-detection:16.1.7")

// Mediapipe for face detection
implementation("com.google.mediapipe:tasks-vision:0.x.x")

// CameraX for camera access
implementation("androidx.camera:camera-camera2:1.x.x")
implementation("androidx.camera:camera-lifecycle:1.x.x")

// Koin for dependency injection
implementation("io.insert-koin:koin-android:3.x.x")

GPU acceleration

FaceNet inference can be accelerated with a GPU delegate. From FaceNet.kt:48-59:
val interpreterOptions = Interpreter.Options().apply {
    if (useGpu) {
        if (CompatibilityList().isDelegateSupportedOnThisDevice) {
            addDelegate(GpuDelegate(CompatibilityList().bestOptionsForThisDevice))
        }
    } else {
        numThreads = 4  // CPU threads
    }
    useXNNPACK = useXNNPack  // XNNPACK acceleration
    useNNAPI = true  // Android Neural Networks API
}
GPU acceleration is enabled by default. To force CPU inference, modify FaceNet.kt:27:
class FaceNet(
    context: Context,
    useGpu: Boolean = false,  // Changed from true
    useXNNPack: Boolean = true,
)

TFLite model sources

FaceNet models

Both facenet.tflite and facenet_512.tflite are converted from the deepface library using this Python script:
from deepface import DeepFace
from deepface.models.facial_recognition.Facenet import scaling
import tensorflow as tf

model = DeepFace.build_model("Facenet512")
model.model.save("facenet512.keras")

model = tf.keras.models.load_model("facenet512.keras", custom_objects={
    "scaling": scaling
})
converter_fp16 = tf.lite.TFLiteConverter.from_keras_model(model)
converter_fp16.optimizations = [tf.lite.Optimize.DEFAULT]
converter_fp16.target_spec.supported_types = [tf.float16]
tflite_model_fp16 = converter_fp16.convert()

with open("facenet_512.tflite", "wb") as file:
    file.write(tflite_model_fp16)

Anti-spoofing models

The spoof_model_scale TFLite models are converted from PyTorch weights in Silent-Face-Anti-Spoofing via ONNX. Conversion notebook: Liveness_PT_Model_to_TF.ipynb

BlazeFace model

The blaze_face_short_range model is from Mediapipe’s Face Detector solution.

Troubleshooting

Gradle sync fails

Problem: “Could not resolve all dependencies”
Solution: Ensure you have a stable internet connection, then run:
./gradlew clean
./gradlew build --refresh-dependencies

ObjectBox build errors

Problem: “ObjectBox annotation processor failed”
Solution: Clean and rebuild the project:
  1. Build > Clean Project
  2. Build > Rebuild Project
  3. If the error persists, delete the app/build folder and rebuild

App crashes on startup

Problem: “Failed to load model”
Solution: Verify that the TFLite model files exist in app/src/main/assets/:
  • facenet.tflite
  • facenet_512.tflite
  • spoof_model_scale_1.tflite
  • spoof_model_scale_2.tflite
  • blaze_face_short_range.tflite (if using Mediapipe)

Poor recognition accuracy

Problem: Faces are not recognized correctly.
Solutions:
  1. Add more images per person (5-10 recommended)
  2. Use well-lit, frontal face photos
  3. Enable flat search for precise matching
  4. Adjust the cosine similarity threshold
  5. Switch to the 512-dimensional model for better accuracy

Next steps

Quick start guide

Learn how to use the app for face recognition

Architecture overview

Understand the technical implementation
