
Overview

This guide walks you through integrating the FAD SDK into your TypeScript or JavaScript project using the npm package. The FAD SDK provides powerful biometric and document capture capabilities for web applications.
The FAD SDK v3.0.0 requires Node.js 16+ and modern browser support for WebAssembly and MediaDevices APIs.

Prerequisites

Before you begin, ensure your environment meets these requirements:
  • Node.js: Version 16 or higher
  • Package Manager: npm or yarn
  • Browser Support: Modern browsers with WebAssembly and MediaDevices support
  • Hardware: See hardware requirements for camera-based modules
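
You can verify the browser-side requirements at runtime before loading the SDK. A minimal sketch (`supportsSdkApis` is a hypothetical helper of our own, not part of the FAD SDK):

```typescript
// Returns true when the current environment exposes the two browser APIs
// the SDK depends on: WebAssembly and the MediaDevices camera interface.
// supportsSdkApis is a hypothetical helper, not part of the FAD SDK.
function supportsSdkApis(): boolean {
  const wasm = (globalThis as any).WebAssembly;
  const hasWasm =
    typeof wasm === 'object' && typeof wasm?.instantiate === 'function';
  const nav = (globalThis as any).navigator;
  const hasCamera = typeof nav?.mediaDevices?.getUserMedia === 'function';
  return hasWasm && hasCamera;
}

// Usage sketch: warn before attempting to start a capture module.
if (!supportsSdkApis()) {
  console.warn('This browser cannot run the FAD SDK capture modules.');
}
```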

Installation

1. Install the SDK package

Install the FAD SDK from npm using your preferred package manager:
npm install @fad-producto/fad-sdk
2. Import the SDK

Import the FadSDK class in your TypeScript or JavaScript file:
import FadSDK from '@fad-producto/fad-sdk';
3. Initialize the SDK instance

Create a new SDK instance with your authentication token and environment options:
const options = {
  environment: FadSDK.getFadEnvironments().UATHA,
};

const FAD_SDK = new FadSDK(TOKEN, options);
Available environments: UATHA, PRODUCTION. Use UATHA for development and testing.
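
Since UATHA is meant for development and PRODUCTION for live traffic, a common pattern is to pick the environment from a build-time flag. A minimal sketch (`selectEnvironment` is a hypothetical helper; the key it returns would be looked up via `FadSDK.getFadEnvironments()` when building the options object):

```typescript
// Pick which FAD environment key to use based on a build-time flag.
// selectEnvironment is a hypothetical helper, not part of the FAD SDK.
type FadEnvironmentName = 'UATHA' | 'PRODUCTION';

function selectEnvironment(isProduction: boolean): FadEnvironmentName {
  // UATHA is the development/testing environment; PRODUCTION serves live traffic.
  return isProduction ? 'PRODUCTION' : 'UATHA';
}

// Usage sketch:
// const environments = FadSDK.getFadEnvironments();
// const options = {
//   environment: environments[selectEnvironment(process.env.NODE_ENV === 'production')],
// };
```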

Basic Usage Example

Here’s a complete example showing how to implement the Liveness-3D module with proper error handling:
import FadSDK from '@fad-producto/fad-sdk';
import { CREDENTIALS, CONFIGURATION, TOKEN } from './constants';

async function initProcess() {
  const options = {
    environment: FadSDK.getFadEnvironments().UATHA,
  };

  const FAD_SDK = new FadSDK(TOKEN, options);
  
  try {
    const facetecResponse = await FAD_SDK.startFacetec(
      CREDENTIALS, 
      CONFIGURATION
    );

    // Process completed successfully
    console.log('Process completed', facetecResponse);
    
    // Access the response data
    const img = facetecResponse.data.auditTrail[0];
    const imgLowQuality = facetecResponse.data.lowQualityAuditTrail[0];
    const faceScan = facetecResponse.data.faceScan;

    // Display results
    const containerResult = document.getElementById('container-result') as HTMLElement;
    const imageId = document.getElementById('image-id') as HTMLImageElement;
    const imageFace = document.getElementById('image-face') as HTMLImageElement;

    containerResult.style.display = 'flex';
    imageId.src = 'data:image/png;base64,' + img;
    imageFace.src = 'data:image/png;base64,' + imgLowQuality;
    
  } catch (ex: any) {
    console.error('Process error:', ex);
    
    // Handle specific error codes
    if (ex.code === FadSDK.Errors.Facetec.Session.CAMERA_NOT_RUNNING) {
      alert('Camera not supported, try another device');
    } else if (ex.code === FadSDK.Errors.Facetec.Session.INITIALIZATION_NOT_COMPLETED) {
      // Restart component
      console.log('Initialization not completed, restarting...');
    } else {
      console.error('Unexpected error:', JSON.stringify(ex));
    }
    
  } finally {
    // Always cleanup resources
    FAD_SDK.end();
  }
}

initProcess();
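
The audit-trail images above are base64-encoded strings, which is why they are embedded as data URLs. If you instead need to upload a frame as binary, you can decode it first; a sketch (`base64ToBytes` is a hypothetical helper, not part of the FAD SDK):

```typescript
// Decode a base64 image string (as returned in auditTrail) into raw bytes,
// e.g. to send it as a binary upload instead of a data URL.
// base64ToBytes is a hypothetical helper, not part of the FAD SDK.
function base64ToBytes(base64: string): Uint8Array {
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// Usage sketch (the upload endpoint is illustrative only):
// const img = facetecResponse.data.auditTrail[0];
// const body = new Blob([base64ToBytes(img)], { type: 'image/png' });
// await fetch('/upload', { method: 'POST', body });
```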

Document Capture Example

Capture and process identification documents with OCR:
import FadSDK from '@fad-producto/fad-sdk';
import { TOKEN, CREDENTIALS, CONFIGURATION } from './constants';

async function captureDocument() {
  const options = {
    environment: FadSDK.getFadEnvironments().UATHA
  };

  const FAD_SDK = new FadSDK(TOKEN, options);
  
  try {
    const idData = true;  // Enable OCR data extraction
    const idPhoto = true; // Enable face photo extraction from ID

    const regulaResponse = await FAD_SDK.startRegula(
      CREDENTIALS,
      FadSDK.Constants.Regula.CaptureType.CAMERA_SNAPSHOT,
      idData,
      idPhoto,
      CONFIGURATION
    );

    // Check if user closed the module
    if (regulaResponse.event === FadSDK.Constants.EventModule.MODULE_CLOSED) {
      console.log('Module closed by user');
      return;
    }

    // Process the captured document
    console.log('Document captured:', regulaResponse);
    
    // Access document images
    const frontImage = regulaResponse.data.id.front;
    const backImage = regulaResponse.data.id.back;
    const facePhoto = regulaResponse.data.idPhoto;
    const ocrData = regulaResponse.data.idData.ocr;

    // Display results
    (document.getElementById('image-id-front') as HTMLImageElement).src = frontImage;
    if (backImage) {
      (document.getElementById('image-id-back') as HTMLImageElement).src = backImage;
    }
    (document.getElementById('image-face') as HTMLImageElement).src = facePhoto;
    (document.getElementById('ocr') as HTMLElement).textContent = JSON.stringify(ocrData, null, 2);
    
  } catch (ex: any) {
    console.error('Capture error:', ex);
    
    if (ex.code === FadSDK.Errors.Regula.CAMERA_PERMISSION_DENIED) {
      alert('Camera permission denied. Please enable camera access.');
    } else if (ex.code === FadSDK.Errors.Regula.ID_PHOTO_NOT_FOUND) {
      alert('ID photo not found. Please retry the process.');
    } else if (ex.code === FadSDK.Errors.Regula.OCR_NOT_FOUND) {
      alert('OCR data not found. Please retry with a clearer image.');
    } else {
      alert(`Error: ${JSON.stringify(ex)}`);
    }
    
  } finally {
    FAD_SDK.end();
  }
}
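
The back image is optional in the example above, so display code ends up repeating null checks. One way to tidy that is to collect only the images actually returned; a sketch (the `RegulaImages` shape and `presentImages` helper are assumptions based on this example, not the SDK's published typings):

```typescript
// Shape of the image fields used in this guide's Regula example; an
// assumption for illustration, not the SDK's published typings.
interface RegulaImages {
  front: string;
  back?: string | null;
  facePhoto?: string | null;
}

// Collect only the images that were actually returned, keyed by the
// element id they should be shown in, so display code can loop instead
// of repeating null checks. presentImages is a hypothetical helper.
function presentImages(images: RegulaImages): Record<string, string> {
  const out: Record<string, string> = {};
  if (images.front) out['image-id-front'] = images.front;
  if (images.back) out['image-id-back'] = images.back;
  if (images.facePhoto) out['image-face'] = images.facePhoto;
  return out;
}

// Usage sketch:
// for (const [id, src] of Object.entries(presentImages({ front: frontImage, back: backImage, facePhoto }))) {
//   (document.getElementById(id) as HTMLImageElement).src = src;
// }
```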

Signature Module Example

Capture biometric signatures with face detection:
import FadSDK from '@fad-producto/fad-sdk';
import { CONFIGURATION, TOKEN } from './constants';

async function captureSignature() {
  const options = {
    environment: FadSDK.getFadEnvironments().UATHA
  };

  const FAD_SDK = new FadSDK(TOKEN, options);
  
  try {
    const signatureResponse = await FAD_SDK.startSignature(CONFIGURATION);

    if (signatureResponse.event === FadSDK.Constants.EventModule.MODULE_CLOSED) {
      console.log('Module closed by user');
      return;
    }

    console.log('Signature captured:', signatureResponse);

    // Create object URLs for video blobs
    const faceVideoUrl = URL.createObjectURL(signatureResponse.data.videoFace);
    const signatureVideoUrl = URL.createObjectURL(signatureResponse.data.videoSignature);

    // Display the captured data
    (document.getElementById('face-video') as HTMLVideoElement).src = faceVideoUrl;
    (document.getElementById('signature-video') as HTMLVideoElement).src = signatureVideoUrl;
    (document.getElementById('signature-img') as HTMLImageElement).src = signatureResponse.data.imageSignature;
    
  } catch (ex: any) {
    console.error('Signature error:', ex);
    
    if (ex.code === FadSDK.Errors.Signature.NOT_ACCEPT_CAMERA_PERMISSION) {
      alert('Camera permission required for signature capture');
    } else {
      alert(`Error: ${JSON.stringify(ex)}`);
    }
    
  } finally {
    FAD_SDK.end();
  }
}
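
Note that `URL.createObjectURL` pins the underlying Blob in memory until the URL is revoked, so it is worth releasing each URL once the video element has loaded its data. A sketch (`attachVideo` and the `VideoLike` type are our own, not part of the FAD SDK):

```typescript
// Minimal structural type so this sketch also compiles outside the DOM typings.
type VideoLike = {
  src: string;
  addEventListener(type: string, cb: () => void, opts?: { once: boolean }): void;
};

// Assign a Blob to a video element via an object URL, and revoke the URL
// once the element has loaded the data so the Blob can be garbage-collected.
// attachVideo is a hypothetical helper, not part of the FAD SDK.
function attachVideo(video: VideoLike, blob: Blob): string {
  const url = URL.createObjectURL(blob);
  video.src = url;
  video.addEventListener('loadeddata', () => URL.revokeObjectURL(url), { once: true });
  return url;
}

// Usage sketch:
// attachVideo(document.getElementById('face-video') as HTMLVideoElement,
//             signatureResponse.data.videoFace);
```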

TypeScript Configuration

For TypeScript projects, ensure your tsconfig.json includes:
tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "ESNext",
    "moduleResolution": "node",
    "lib": ["ES2020", "DOM"],
    "esModuleInterop": true,
    "skipLibCheck": true,
    "strict": true
  }
}

Build Configuration

If using webpack, here’s a basic configuration:
webpack.config.js
module.exports = {
  mode: 'development',
  entry: './src/index.ts',
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: 'ts-loader',
        exclude: /node_modules/,
      },
    ],
  },
  resolve: {
    extensions: ['.tsx', '.ts', '.js'],
  },
};
Always call FAD_SDK.end() in the finally block to properly cleanup resources and prevent memory leaks.
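
Since every module call above follows the same try/finally shape, one way to make the cleanup impossible to forget is a small wrapper. A sketch (`withFadSdk` is a hypothetical helper; the minimal `{ end(): void }` shape is assumed from this guide's examples):

```typescript
// Run an SDK module and guarantee end() is called even when it throws.
// withFadSdk is a hypothetical helper, not part of the FAD SDK; the
// { end(): void } shape is assumed from this guide's examples.
async function withFadSdk<T>(
  sdk: { end(): void },
  run: () => Promise<T>,
): Promise<T> {
  try {
    return await run();
  } finally {
    // Always cleanup resources, exactly as the note above recommends.
    sdk.end();
  }
}

// Usage sketch:
// const FAD_SDK = new FadSDK(TOKEN, options);
// const result = await withFadSdk(FAD_SDK, () => FAD_SDK.startSignature(CONFIGURATION));
```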

Next Steps

  • Best Practices: Learn SDK best practices and optimization techniques
  • Troubleshooting: Common issues and solutions
  • API Reference: Explore all available modules and methods
  • Modules: Learn about specific SDK modules
