
Face Capture (React Native)


React Native Face Capture Framework Documentation

Table of Contents

  1. Overview
  2. Prerequisites
  3. Integration
    1. React Native Setup Without Expo
      1. Android
      2. iOS
    2. Expo Setup
      1. Android
      2. iOS
  4. AwareID FaceCapture Enrollment API Guide
  5. Function Calls
  6. Example Code
  7. FAQ
  8. Support

Overview

The Face Capture SDK is a versatile tool designed for React Native apps that captures facial images on iOS and Android devices. Although it can be used from different areas of your app, it can run only one face capture session at a time.

Here’s a breakdown of the features provided by the Face Capture SDK:

  • Camera Control: The SDK enables you to initialize and manage your device’s camera, preparing it for use.
  • Camera Selection: It empowers you to choose the camera (for example, the front or rear camera) best suited for capturing facial images.
  • Image Preview: The SDK can present a preview of the image to users before the capture is confirmed, allowing for adjustments if necessary.
  • Image Processing and User Guidance: Once an image is captured, the SDK processes it and generates feedback codes. These codes serve as guides to users throughout the capture process.
  • Capture Time Limit: With this feature, you can set a maximum duration for a face capture attempt, maintaining efficient use of time and resources.
  • Capture Session Updates: The SDK keeps you informed about the status of the face capture session, promoting transparency and control.
  • JSON Package Creation: Finally, the SDK is capable of constructing a JSON package compatible with the AwareID Server, ensuring seamless data communication.

In essence, the Face Capture SDK is a robust yet user-friendly tool that simplifies the process of capturing facial images on iOS and Android devices within your React Native app, making it a practical choice for efficient and effective biometric data collection.

Prerequisites

The Face Capture SDK has a few necessary requirements in order to function properly. Let’s go through them:

Basic React Native Requirements

Before proceeding with the installation of the Face Capture SDK, ensure you have the basic environment for React Native development set up. The following are essential:

React: Version 17.0.2 or higher
React Native: Version 0.67.4 or higher
Node.js: Version 16.14.1 or higher
NPM: Ensure you have the latest stable version installed
Ruby: Version 3.1.0p0 or higher
CocoaPods: Version 1.11.2 or higher

Optional but recommended for easy setup and testing:

Expo: This is a tool that helps to set up a React Native app quickly. If you're comfortable with it, feel free to use Expo for setting up your project.

Specific Requirements

Android

If you are developing for Android, your environment needs to meet these additional requirements:

Android OS: Version 8.0 or higher
Android NDK: Version 20 or higher
API Level: The device must support API 24 or higher
Camera2 API: The device should support Camera2 integration
Camera Permissions: Users must grant access to CAMERA permissions

iOS

For iOS development, the following are necessary:

iOS: Version 10 or higher
Camera Permissions: Users must grant access to CAMERA permissions

Please make sure your development environment aligns with these requirements before proceeding with the Face Capture SDK integration.

Integration

Prerequisite for installing the Face Capture SDK from our private repository:

Step 1: Set up .npmrc to use our private npm registry

Before proceeding with the integration of either Face Capture or Document Capture SDK, ensure that your project is set up to use our private npm registry.

  1. Create a .npmrc file in the root directory of your project.
  2. Add the following lines:
@aware-mcoe:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${NPM_TOKEN}

Step 2: Export the aware-mcoe npm token

  • Mac:
export NPM_TOKEN=<token we provide you>
  • Windows:

PowerShell:

$env:NPM_TOKEN="<token we provide you>"

or CMD:

SET NPM_TOKEN=<token we provide you>

React Native Setup Without Expo

Android

React Native Android Integration Guide for Face Capture

This guide provides a step-by-step procedure for integrating our SDK into your new React Native application on Android.

Step 1:

Ensure that you’ve followed the .npmrc setup in the “Prerequisites” section above.

Step 2:

Run the command below to install the Face Capture SDK:

npm i @aware-mcoe/awrn-face-capture

Or you can modify your package.json to include the following line:

"@aware-mcoe/awrn-face-capture": "0.0.1"

Step 3: Install Dependencies

Run the following command in the root directory of your project:

npm install

Step 4:

Add these permissions to your Android Manifest file:

<uses-permission android:name="android.permission.INTERNET" />    
<uses-permission android:name="android.permission.CAMERA"/>

Once all these steps are complete, your application should be ready to use our SDK. Please save all the changes and build your project.

iOS

React Native iOS Integration Guide for Face Capture

This guide provides a step-by-step procedure for integrating our SDK into your new React Native application on iOS.

Step 1: Set up .npmrc to use our private npm registry

Ensure that you’ve followed the .npmrc setup in the “Prerequisites” section above.

Step 2:

Run the command below to install the Face Capture SDK:

npm i @aware-mcoe/awrn-face-capture

Or you can modify your package.json to include the following line:

"@aware-mcoe/awrn-face-capture": "0.0.1"

Step 3: Install Dependencies

Run the following command in the root directory of your project:

npm install

Step 4:

Update your Info.plist file to include the following key:

<key>NSCameraUsageDescription</key>
<string>This app requires camera access to capture facial images.</string>

Once all these steps are complete, your application should be ready to use our SDK. Please save all the changes and build your project.


Expo Setup

Android

Step 1: Expo Eject

To integrate the Face Capture SDK into your Expo project, first run the expo eject command. This gives you access to the underlying Java and Swift native code required by the SDK's native modules.

Step 2: Follow the Android instructions in the "React Native Setup Without Expo" section above.

iOS

Step 1: Expo Eject

To integrate the Face Capture SDK into your Expo project, first run the expo eject command. This gives you access to the underlying Java and Swift native code required by the SDK's native modules.

Step 2: Follow the iOS instructions in the "React Native Setup Without Expo" section above.

If you run into any issues, go into your ios folder and run pod deintegrate followed by pod install, and make sure you also run npm install in the base directory.

AwareID FaceCapture Enrollment API Guide

Face SDK and AwareID Enrollment Workflows

To perform a successful enrollment using the Face SDK and AwareID, there are two possible workflows. The first is the most flexible: the enrollment is initiated directly from the client. The second is more secure: a secondary application initiates the enrollment and generates a QR code encoding the session token needed to proceed. The client application scans the QR code, and enrollment continues using the data it contains.

Enrollment Initiated from Client

To enroll by initiating from the client, follow these five steps:

  1. Get an access token. This token allows communication between the client application and the AwareID servers.
  2. Initiate an enrollment.
  3. Add device.
  4. Get the public key used for face data encryption.
  5. Enroll face.
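
The steps after the token exchange can be sketched as a single client-side flow. This is a minimal sketch, not part of the SDK: BASE_URL, API_KEY, the postJson helper, and the /onboarding/enrollment/addFace path (the Step 5 request below does not show its endpoint) are assumptions you should replace with the values Aware provides.

```javascript
// Sketch of the client-initiated enrollment flow (Steps 2-5).
const BASE_URL = "https://example.aware-apis.com"; // placeholder host
const API_KEY = "your-api-key";                    // placeholder key

// Generic authenticated POST used by every enrollment step.
async function postJson(path, accessToken, body) {
  const response = await fetch(BASE_URL + path, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + accessToken,
      "apikey": API_KEY,
    },
    body: JSON.stringify(body),
  });
  if (!response.ok) throw new Error("Request failed: " + response.status);
  return response.json();
}

async function enrollFromClient(accessToken, user, deviceId, faceLivenessData) {
  // Step 2: initiate the enrollment and obtain an enrollmentToken.
  const { enrollmentToken } = await postJson(
    "/onboarding/enrollment/enroll", accessToken, user);

  // Step 3: register the device against the enrollment.
  await postJson("/onboarding/enrollment/addDevice", accessToken,
    { enrollmentToken, deviceId });

  // Step 5: submit the encrypted face package built with the public
  // key from Step 4. The /addFace path here is an assumption.
  return postJson("/onboarding/enrollment/addFace", accessToken,
    { enrollmentToken, faceLivenessData });
}
```
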

Step 1 – Get Access Token

Request

// Get Access Token
POST /auth/realms/{{customer_name}}-consumers/protocol/openid-connect/token
Content-Type: 'application/x-www-form-urlencoded',

"client_id": client_id
"client_secret": client_secret
"scope": openid
"grant_type" : client_credentials

Response

   STATUS CODE 200
   {
       "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJCY2IxNXZJQkZsY2JYazVmQUdJZFZXV2pTUEtTaWpsazNydmFwMHp0ekN3In0.eyJleHAiOjE2NzM5OTExMjksImlhdCI6MTY3Mzk5MDgyOSwianRpIjoiN2MzYmY1MmItNjdlMC00ODNlLWFhZjAtYjlkNWJhODE3ZWJiIiwiaXNzIjoiaHR0cHM6Ly9hd2FyZWlkLWRldi5hd2FyZS1hcGlzLmNvbS9hdXRoL3JlYWxtcy9hbmRyYWUtY29uc3VtZXJzIiwic3ViIjoiOTU3ZWMyYmYtZTczOS00YjFjLWEyN2QtMTczMjQzMDIyYTE5IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiYmltYWFzLWIyYyIsImFjciI6IjEiLCJzY29wZSI6Im9wZW5pZCIsImNsaWVudElkIjoiYmltYWFzLWIyYyIsImNsaWVudEhvc3QiOiIzOC4xNDAuNTkuMjI2IiwiY2xpZW50QWRkcmVzcyI6IjM4LjE0MC41OS4yMjYifQ.OzggQ--Gl4w3NWZPg1BukkEg0fmsSyGgN-ag8eW0FARWl0Ic5fkrnrEdnIgsq5Molq0R52oe4Hy-8Tp4cOn9iCD51kPCPfTt15zVBIAYOvb5M5XZ0uPTygh02KjuFqsxIhbhH8CCUjHkpu3OhoWByc8bC8c9D_cFp3BFE-XIhNPaPxXdTLZOcJOqpdSVxsgxB66-xukI7AA8PWt10huO47l6TSBSnJIjUxNbEqR48ILfnkYY2bmyfoo-laKDv9XSSZ8hXU9sDkiGfpXOl112_f3L1sc6n1-UbRTJGFMd4fgntuanwEvN68TsyS5pz0izGlW-1T3fFJ3D2pGPefsWNA",
       "expires_in": 300,
       "refresh_expires_in": 0,
       "token_type": "Bearer",
       "id_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJCY2IxNXZJQkZsY2JYazVmQUdJZFZXV2pTUEtTaWpsazNydmFwMHp0ekN3In0.eyJleHAiOjE2NzM5OTExMjksImlhdCI6MTY3Mzk5MDgyOSwiYXV0aF90aW1lIjowLCJqdGkiOiJkYWNiNTc1NS1jMGEyLTQxZTEtYjMwMi05ZGEzOWRiNGNiYmUiLCJpc3MiOiJodHRwczovL2F3YXJlaWQtZGV2LmF3YXJlLWFwaXMuY29tL2F1dGgvcmVhbG1zL2FuZHJhZS1jb25zdW1lcnMiLCJhdWQiOiJiaW1hYXMtYjJjIiwic3ViIjoiOTU3ZWMyYmYtZTczOS00YjFjLWEyN2QtMTczMjQzMDIyYTE5IiwidHlwIjoiSUQiLCJhenAiOiJiaW1hYXMtYjJjIiwiYXRfaGFzaCI6IlcwbXNUU05WQUo1MG9oQ2JOR3dlTmciLCJhY3IiOiIxIiwiY2xpZW50SWQiOiJiaW1hYXMtYjJjIiwiY2xpZW50SG9zdCI6IjM4LjE0MC41OS4yMjYiLCJjbGllbnRBZGRyZXNzIjoiMzguMTQwLjU5LjIyNiJ9.MOgJ3giF0ikQnUAOBgK6eHpC0Tz3pCjhTX4IjHSjh3kzxx0KCLiWd494Fl3JSHiyvnNP7Ty1SXl4Bhq19f7y_lpGp4yLkbV9I1xsfC7m2D-EIf73D1LEluf1y97ISbh8668VqnGRG8U1FtXuwQGPZb7cgMiTbprECwLFj44_vM2qmLxFpOkOuVaqPmpgjt6MAmUbcWV8GDMAdxVnlZDZuzFkwOlb6S_WypNSYKHA6TFIe_FsA2EoxMu_9MAP3OLX7LIwX3jYIsT4z-TnUmyKC5RFzx6oc9D9Fr2eSTRBxC6QKGJrFAPt40p9_U3YFFi6VpzaGK9YQvCvdw70CVBe5Q",
       "not-before-policy": 0,
       "scope": "openid"
   }
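
The token request above can be sketched as a plain fetch call. This is a sketch only: the host and customer realm name are placeholders, and your actual client_id and client_secret come from Aware.

```javascript
// Sketch of Step 1: exchanging client credentials for an access token.
async function getAccessToken(host, customerName, clientId, clientSecret) {
  // The token endpoint expects a form-urlencoded body, not JSON.
  const body = new URLSearchParams({
    client_id: clientId,
    client_secret: clientSecret,
    scope: "openid",
    grant_type: "client_credentials",
  });
  const response = await fetch(
    `${host}/auth/realms/${customerName}-consumers/protocol/openid-connect/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: body.toString(),
    }
  );
  const json = await response.json();
  return json.access_token; // valid for expires_in seconds (300 above)
}
```
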

Step 2 – Initiate An Enrollment

Request

// Initiate An Enrollment
POST /onboarding/enrollment/enroll
Authorization: 'Bearer AccessToken'
apikey: 'apikey'

{    
    "username":  "username",
    "firstName": "first name", //optional
    "lastName": "last name", //optional 
    "email": "user email", 
    "phoneNumber": "user phonenumber" //optional
} 

Response

{
    "enrollmentToken": "enrollmentToken",
    "userExistsAlready": false,
    "requiredChecks": [
        "addDevice",
        "addFace"
    ]
}

Step 3 – Add Device

Request

// Add Device
POST /onboarding/enrollment/addDevice
Authorization: 'Bearer AccessToken'
apikey: 'apikey'

{
    "enrollmentToken": "enrollmentToken",
    "deviceId": "deviceID"
} 

Response

{
    "enrollmentStatus": 1,
    "registrationCode": ""
}

Step 4 – Get Public Key – Retrieve the public key used for face data encryption.

Request

// Get Public Key
GET /getPublicKey
"Authorization":"Bearer accessToken"
"apikey": apiKey

Response

-----BEGIN PUBLIC KEY-----
//Public key example
-----END PUBLIC KEY-----

Step 5 – Add Face – Add the encrypted face sample and check whether the face belongs to a live person.

Request

{
  "enrollmentToken": "aa54f2ac-2b03-4af3-bc2f-a0f5f97c55e3",
  "faceLivenessData": {
    "iv": "Initialization Vector",
    "key": "Public key retrieved earlier",
    "p": "Encrypted face package"
  }
}

Response

{
    "livenessResult": true,
    "enrollmentStatus": 2,
    "registrationCode": "LLXL2N",
    "livenessResults": {
        "video": {
            "liveness_result": {
                "decision": "LIVE",
                "feedback": [],
                "score_frr": 1.1757098732441127
            }
        }
    }
}

The response from the enrollment call returns:

Liveness Result

  • A boolean value.
  • Returns true if the sample is assessed to be live.
  • Returns false if the sample is assessed not to be live.

Enrollment Status

  • 0 = failed
  • 1 = pending
  • 2 = success

Registration Code

  • This code is used in the re-enrollment process.

Liveness Results

  • Broken down into several fields giving feedback on the liveness score.
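
The fields above can be folded into a small helper for display or logging. This is an illustrative sketch: summarizeEnrollment and statusNames are names invented here, while the status meanings and field names come from the response documented above.

```javascript
// Sketch: mapping the documented enrollmentStatus codes and liveness
// fields from the add-face response into a readable summary.
function summarizeEnrollment(response) {
  const statusNames = { 0: "failed", 1: "pending", 2: "success" };
  return {
    live: response.livenessResult === true,
    status: statusNames[response.enrollmentStatus] ?? "unknown",
    // registrationCode is kept for the re-enrollment process.
    registrationCode: response.registrationCode || null,
    decision: response.livenessResults?.video?.liveness_result?.decision,
  };
}
```
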

Enrollment Initiated Through QR Code

To initiate enrollment via QR code scanning, we’ll first capture necessary enrollment details such as first name, last name, email address, and phone number. These details are then used to generate the QR code through a secondary application. In this example, we’ll utilize a web app.

QR Code Generation: Web Portal

To begin enrollment using the QR code method, we first generate the QR code from a secondary application; in this example, a web app. Our web app provides a user interface for end-users to register with a username and an email address. These details are then used in the API call to /triggerEnroll, and the resulting QR code is scanned by the client application.

Base URL

www.awareid.aware-apis.com

Trigger Enroll API

Initiate the enrollment process by calling the /triggerEnroll endpoint with the access token you received. The following parameters are required:

Request

POST baseUrl + /b2c/proxy/triggerEnroll
Content-Type: 'application/json; charset=UTF-8',

{ "username": "[email protected]",
    "email": "[email protected]",
    "notifyOptions": {
        "notifyByEmail": false
    }
}

Response

   STATUS CODE 200
   {
       "status": "SUCCESS",
       "sessionCallbackURL": "https://awareid-dev.aware-apis.com/onboarding?data=jgTW40dmoG6Hp_d6Rg7YaZ97vfGSlV5BcBJvLvqXVmhoQ2Hg2DcC2Kvr9AkTZ38ZkyIfiSj80QFxOWs1YeckYsp3D0D9vS46wppl1Zdt-tpiAdzlvBKA2DBfcj7rf0VePWUn1vKdIPgEoWAulqRxZ_mNakFB7FijLg0QJ8kYsB6w0Nk1A4m9QtLGIdHcuGn9XJnxooQHyr2yhtUsgfOo2FrRXYmFIF7ZNwxYd56miFCs-yuD6eZZcvZ1M01Wje7ji0NYUWVpdes-DA_P0cKgsLPX5sV7SyPSlf9kmoCQz7Ag20kAKkOf-LFFKQmgnJ3362nXIEovxS8vp4BCClu7vIfEVCE2s1zS7zNwrDuRfFdViVAQMMxDMe77LnbKbfvLqUhiv--wPFyV9Iier1EDSL9y5kikOw_PGSyuRzvbQKuoNdGj-IqVZYZ_5QivOFqq_OEt8jaX1zZxAiQ8uXRt3g",
       "qrcodeImage": "base64EncodedQRString",
       "sessionToken": "aa73e547-0f1b-4235-a7b0-dd52fa4ab774",
       "errorSummary": null
   }

The response for the trigger enroll includes five key pieces of information, with the most pertinent being the base64 encoded string representing the QR code. This QR code will be displayed to the user to proceed with the enrollment on their device.
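
The trigger-enroll request from the web portal can be sketched as follows. The Authorization header is an assumption based on the note above that the call uses the access token you received, and triggerEnroll is an illustrative name, not an SDK function.

```javascript
// Sketch of the /triggerEnroll call made by the secondary (web) app.
async function triggerEnroll(baseUrl, accessToken, username, email) {
  const response = await fetch(baseUrl + "/b2c/proxy/triggerEnroll", {
    method: "POST",
    headers: {
      "Content-Type": "application/json; charset=UTF-8",
      "Authorization": "Bearer " + accessToken, // assumed auth header
    },
    body: JSON.stringify({
      username,
      email,
      notifyOptions: { notifyByEmail: false },
    }),
  });
  const json = await response.json();
  // json.qrcodeImage is a base64-encoded image string; display it to
  // the user so they can scan it from the client application.
  return json;
}
```
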

QR Code Implementation

With our QR code generated via the web application, the next steps involve scanning the QR code and completing the enrollment. The following steps apply to enrolling a user from the client side application using a QR code:

  1. Scan and decrypt the QR code data.
  2. Retrieve the Public Key for face data encryption.
  3. Initiate the enrollment process.
  4. Add device.
  5. Enroll face.

Scanning the QR code yields a URL with an encrypted query parameter named "data". Decrypt this parameter using the provided public key. Once decrypted, you will have three separate pieces of information:

  • Host URL: This is the URL for all subsequent API calls.
  • API Key: Used in the header of API calls. The key-value pair in the header is as follows: “apikey”:API_Key
  • Session Token: Used to validate the session.
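
Extracting the encrypted "data" parameter from the scanned URL might look like this sketch. Decryption of the parameter is not shown, since it depends on the key material Aware provides; extractQrData is an illustrative name.

```javascript
// Sketch: pull the encrypted "data" query parameter out of the URL
// encoded in the QR code. Returns null if the parameter is absent.
function extractQrData(qrUrl) {
  const url = new URL(qrUrl);
  return url.searchParams.get("data");
}

// After decrypting the returned value you would expect three fields:
// the host URL, the API key, and the session token described above.
```
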

Step 1 – Validate Session Token

The first API call necessary to enroll a user is /tokenVerify/validateSession.

Request

POST /tokenVerify/validateSession
"Content-Type": 'application/json; charset=UTF-8',
"apikey": apiKey

{
    "sessionToken":sessionToken
}

Response

{
    "accessToken": "accessToken",
    "methodType": "enroll",
    "customerName": "customerName",
    "customerLogo": "",
    "userName": "customerUsername",
    "email": "customerEmail"
}
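
The session-validation call can be sketched as follows. hostUrl and apiKey come from the decrypted QR payload, and validateSession is an illustrative name, not an SDK function.

```javascript
// Sketch of Step 1 of the QR flow: exchanging the sessionToken from
// the decrypted QR data for an access token.
async function validateSession(hostUrl, apiKey, sessionToken) {
  const response = await fetch(hostUrl + "/tokenVerify/validateSession", {
    method: "POST",
    headers: {
      "Content-Type": "application/json; charset=UTF-8",
      "apikey": apiKey,
    },
    body: JSON.stringify({ sessionToken }),
  });
  const json = await response.json();
  return json.accessToken; // used as the Bearer token for Steps 2-5
}
```
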

Step 2 – Initiate An Enrollment

Request

// Initiate An Enrollment
POST /onboarding/enrollment/enroll
Authorization: 'Bearer AccessToken'
apikey: 'apikey'

{    
    "username":  "username",
    "firstName": "first name", //optional
    "lastName": "last name", //optional 
    "email": "user email", 
    "phoneNumber": "user phonenumber" //optional
} 

Response

{
    "enrollmentToken": "enrollmentToken",
    "userExistsAlready": false,
    "requiredChecks": [
        "addDevice",
        "addFace"
    ]
}

Step 3 – Add Device

Request

// Add Device
POST /onboarding/enrollment/addDevice
Authorization: 'Bearer AccessToken'
apikey: 'apikey'

{
    "enrollmentToken": "enrollmentToken",
    "deviceId": "deviceID"
} 

Response

{
    "enrollmentStatus": 1,
    "registrationCode": ""
}

Step 4 – Get Public Key – Retrieve the public key used for face data encryption.

Request

// Get Public Key
GET /getPublicKey
"Authorization":"Bearer accessToken"
"apikey": apiKey

Response

-----BEGIN PUBLIC KEY-----
//Public key example
-----END PUBLIC KEY-----

Step 5 – Add Face – Add the encrypted face sample and check whether the face belongs to a live person.

Request

{
  "enrollmentToken": "aa54f2ac-2b03-4af3-bc2f-a0f5f97c55e3",
  "faceLivenessData": {
    "iv": "Initialization Vector",
    "key": "Public key retrieved earlier",
    "p": "Encrypted face package"
  }
}

Response

{
    "livenessResult": true,
    "enrollmentStatus": 2,
    "registrationCode": "LLXL2N",
    "livenessResults": {
        "video": {
            "liveness_result": {
                "decision": "LIVE",
                "feedback": [],
                "score_frr": 1.1757098732441127
            }
        }
    }
}

The response from the enrollment call returns:

Liveness Result

  • A boolean value.
  • Returns true if the sample is assessed to be live.
  • Returns false if the sample is assessed not to be live.

Enrollment Status

  • 0 = failed
  • 1 = pending
  • 2 = success

Registration Code

  • This code is used in the re-enrollment process.

Liveness Results

  • Broken down into several fields giving feedback on the liveness score.

Example Code

//  CaptureScreen.js
//  Copyright © 2022 Aware Inc. All rights reserved.

import React, { useState, useEffect, useContext, useRef } from "react";
import { View, Button, Image, ImageBackground, Text, TextInput, Modal, Pressable, NativeModules, Dimensions, Platform } from "react-native";
import { AppStyleGeneral, AppStyleLiveness, AppStyleImageCapture } from "../components/Styles";
import * as AppSettings from "../components/Settings";
import * as livenessTemplate from "../json/livenessBody.json";
import * as postRqst from "../json/fetchPostReq.json";
import { FaceCapture } from "../face_capture/face_capture";
import { CaptureSessionStatus, AutoCaptureFeedback, WorkflowProperty } from "../face_capture/face_capture_enums";
import Canvas from "react-native-canvas";
import { Images } from "../images/Images";
import AndroidImage from "../face_capture/face_capture_android_image";

/**
 * Converts an integer value to the corresponding CaptureSessionStatusEnum value.
 * @param {number} value - The integer value to convert.
 * @returns {CaptureSessionStatusEnum|null} - The corresponding CaptureSessionStatusEnum value, or null if not found.
*/
function ConvertIntToCaptureSessionStatusEnum(value) {
    switch(value) {
        case CaptureSessionStatus.IDLE.value:
            return CaptureSessionStatus.IDLE;
        case CaptureSessionStatus.STARTING.value:
            return CaptureSessionStatus.STARTING;
        case CaptureSessionStatus.CAPTURING.value:
            return CaptureSessionStatus.CAPTURING;
        case CaptureSessionStatus.POST_CAPTURE_PROCESSING.value:
            return CaptureSessionStatus.POST_CAPTURE_PROCESSING;
        case CaptureSessionStatus.COMPLETED.value:
            return CaptureSessionStatus.COMPLETED;
        case CaptureSessionStatus.ABORTED.value:
            return CaptureSessionStatus.ABORTED;
        case CaptureSessionStatus.STOPPED.value:
            return CaptureSessionStatus.STOPPED;
        case CaptureSessionStatus.TIMED_OUT.value:
            return CaptureSessionStatus.TIMED_OUT;
        default:
            break;
    }
    return null;
}

/**
 * Converts an integer value to the corresponding AutoCaptureFeedbackEnum value.
 * @param {number} value - The integer value to convert.
 * @returns {AutoCaptureFeedbackEnum|null} - The corresponding AutoCaptureFeedbackEnum value, or null if not found.
*/
function ConvertIntToAutoCaptureFeedbackEnum(value) {
    switch(value) {
        case AutoCaptureFeedback.FACE_COMPLIANT.value:
            return AutoCaptureFeedback.FACE_COMPLIANT;
        case AutoCaptureFeedback.NO_FACE_DETECTED.value:
            return AutoCaptureFeedback.NO_FACE_DETECTED;
        case AutoCaptureFeedback.MULTIPLE_FACES_DETECTED.value:
            return AutoCaptureFeedback.MULTIPLE_FACES_DETECTED;
        case AutoCaptureFeedback.INVALID_POSE.value:
            return AutoCaptureFeedback.INVALID_POSE;
        case AutoCaptureFeedback.FACE_TOO_FAR.value:
            return AutoCaptureFeedback.FACE_TOO_FAR;
        case AutoCaptureFeedback.FACE_TOO_CLOSE.value:
            return AutoCaptureFeedback.FACE_TOO_CLOSE;
        case AutoCaptureFeedback.FACE_ON_LEFT.value:
            return AutoCaptureFeedback.FACE_ON_LEFT;
        case AutoCaptureFeedback.FACE_ON_RIGHT.value:
            return AutoCaptureFeedback.FACE_ON_RIGHT;
        case AutoCaptureFeedback.FACE_TOO_HIGH.value:
            return AutoCaptureFeedback.FACE_TOO_HIGH;
        case AutoCaptureFeedback.FACE_TOO_LOW.value:
            return AutoCaptureFeedback.FACE_TOO_LOW;
        case AutoCaptureFeedback.INSUFFICIENT_LIGHTING.value:
            return AutoCaptureFeedback.INSUFFICIENT_LIGHTING;
        case AutoCaptureFeedback.LEFT_EYE_CLOSED.value:
            return AutoCaptureFeedback.LEFT_EYE_CLOSED;
        case AutoCaptureFeedback.RIGHT_EYE_CLOSED.value:
            return AutoCaptureFeedback.RIGHT_EYE_CLOSED;
        case AutoCaptureFeedback.DARK_GLASSES_DETECTED.value:
            return AutoCaptureFeedback.DARK_GLASSES_DETECTED;
        default:
            break;
    }
    return null;
}

const CaptureScreen = ({route, navigation}) => {

    // Constants and state variables
    const canvasWidth = 480;
    const canvasHeight = 640;
    const captureUpdateInterval = 33;
    const isAndroid = (Platform.OS === "android");
    const [isLivenessResultVisible, setLivenessResultVisible] = useState(false);
    const [isOtherResponseVisible, setOtherResponseVisible] = useState(false);
    const [currentLivenessResultHeader, setCurrentLivenessResultHeader] = useState("");
    const [currentLivenessResultDecision, setCurrentLivenessResultDecision] = useState("");
    const [currentLivenessResultImage, setCurrentLivenessResultImage] = useState("");
    const [currentOtherResponseHeader, setCurrentOtherResponseHeader] = useState("");
    const [currentOtherResponseBody, setCurrentOtherResponseBody] = useState("");
    const currentCaptureSessionFrameRef = useRef(null);
    const currentCaptureSessionStatusRef = useRef(null);
    const currentCaptureSessionFeedbackRef = useRef(null);
    const currentCaptureSessionRacetrackRef = useRef(null);

    // Variables for capture session
    let isCapturing = false;
    let hasDrawnRacetrack = false;
    let currentFaceCapture = null;
    let currentWorkflow = null;
    let currentCamera = null;
    let currentCameraList = [];
    let currentCaptureState = null;
    let currentCaptureSessionFrame = null;
    let currentCaptureSessionFrameFull = null;
    let currentCaptureSessionStatus = 0;
    let currentCaptureSessionFeedback = 0;
    let currentCaptureServerPackage = "";
    let currentCaptureLivenessDecision = "";
    let currentCaptureRegion = [];
    let defaultCaptureSessionStatusName = CaptureSessionStatus.IDLE.description;
    let defaultCaptureSessionFeedbackName = AutoCaptureFeedback.NO_FACE_DETECTED.description;
    let settingFaceLivenessServerURL = "";
    let settingUsername = "";
    let settingCaptureTimeout = 0.0;
    let settingCaptureProfile = "";
    let settingWorkflow = "";
    let settingPackageType = "";
    let settingCameraPosition = 0;
    let settingCameraOrientation = 0;
    let captureStateTimer = null;

    /**
     * Retrieves the capture data from the settings.
    */
    async function getCaptureData() {
        try {
          settingFaceLivenessServerURL =
            (await AppSettings.getStringValue("faceLivenessServerURL")) ??
            "https://mobileauth.aware-demos.com/faceliveness";
          settingUsername =
            (await AppSettings.getStringValue("username")) ?? "Jordan";
          settingCaptureTimeout =
            (await AppSettings.getDoubleValue("captureTimeout")) ?? 0.0;
          settingCaptureProfile =
            (await AppSettings.getStringValue("captureProfile")) ??
            "face_capture_foxtrot_client.xml";
          settingWorkflow =
            (await AppSettings.getStringValue("workflow")) ?? "charlie4";
          settingPackageType =
            (await AppSettings.getIntegerValue("packageType")) ?? "HIGH_USABILITY";
          settingCameraPosition =
            (await AppSettings.getIntegerValue("cameraPosition")) ?? "FRONT";
          settingCameraOrientation =
            (await AppSettings.getIntegerValue("cameraOrientation")) ?? "PORTRAIT";
        } catch (e) {
          console.log(e);
        }
      }

    /**
     * Sets up the capture by creating the necessary objects and configuring settings.
    */
    async function setupCapture() {
        try {
            currentFaceCapture = new FaceCapture();
            currentWorkflow = await currentFaceCapture.workflowCreate(settingWorkflow);
            await currentWorkflow.setStringProperty(WorkflowProperty.USERNAME.value, settingUsername);
            await currentWorkflow.setDoubleProperty(WorkflowProperty.CAPTURE_TIMEOUT.value, settingCaptureTimeout);
            await currentWorkflow.setStringProperty(WorkflowProperty.CAPTURE_PROFILE.value, settingCaptureProfile);
            currentCameraList = await currentFaceCapture.getCameraList(settingCameraPosition);
            currentCamera = currentCameraList[0];
            await currentCamera.setOrientation(settingCameraOrientation);
        }
        catch (e) {
            console.log(e);
        }
    }

    /**
     * Cleans up the capture by destroying the objects.
    */
    async function cleanupCapture() {
        try {
            if (currentCaptureState != null) {
                currentCaptureState.destroy();
                currentCaptureState = null;
            }
            if (currentCamera != null) {
                currentCamera.destroy();
                currentCamera = null;
            }
            if (currentWorkflow != null) {
                currentWorkflow.destroy();
                currentWorkflow = null;
            }
        }
        catch (e) {
            console.log(e);
        }
    }

    /**
     * Starts the capture session and enables capturing.
    */
    async function startCaptureSession() {
        try {
            await currentFaceCapture.startCaptureSession(currentWorkflow, currentCamera);
            isCapturing = true;
            if (currentCaptureSessionRacetrackRef) {
                currentCaptureRegion = await currentFaceCapture.captureSessionGetCaptureRegion();
                drawRaceTrack(
                    currentCaptureSessionRacetrackRef.current.context2D.canvas,
                    canvasWidth,
                    canvasHeight,
                    currentCaptureRegion);
            }
        }
        catch (e) {
            console.log(e);
        }
    }

    /**
     * Stops the capture session and disables capturing.
    */
    async function stopCaptureSession() {
        try {
            await currentFaceCapture.stopCaptureSession();
            isCapturing = false;
        }
        catch (e) {
            console.log(e);
        }
    }

    /**
     * Handles the completion of the capture session.
     * Updates the liveness result and displays it.
    */
    async function handleCaptureCompleted() {
        isCapturing = false;
        currentCaptureServerPackage = await currentFaceCapture.getServerPackage(currentWorkflow, settingPackageType);
        currentCaptureLivenessDecision = await getLivenessResult(currentCaptureServerPackage);
        setCurrentLivenessResultHeader("Completed");
        setCurrentLivenessResultDecision(currentCaptureLivenessDecision);
        setCurrentLivenessResultImage(currentCaptureSessionFrameFull);
        setLivenessResultVisible(true);
    }
    /**
     * Handles the abortion of the capture session.
     * Displays an "Aborted" message.
    */
    async function handleCaptureAborted() {
        isCapturing = false;
        setCurrentOtherResponseHeader("Aborted");
        setCurrentOtherResponseBody("Capture was aborted.");
        setOtherResponseVisible(true);
    }

    /**
     * Handles the timeout of the capture session.
     * Displays a "Timed Out" message.
    */
    async function handleCaptureTimedOut() {
        isCapturing = false;
        setCurrentOtherResponseHeader("Timed Out");
        setCurrentOtherResponseBody("Capture was timed out.");
        setOtherResponseVisible(true);
    }

    /**
     * Sends the server package to the face liveness server and retrieves the liveness result.
     * @param {string} serverPackage - The server package to send.
     * @returns {string|null} - The liveness decision obtained from the server, or null if not available.
    */
    async function getLivenessResult(serverPackage) {
        try {
            const url = settingFaceLivenessServerURL + "/checkLiveness";
            const response = await fetch(url, {
                method: "POST",
                headers: {
                    "Content-Type": "application/json",
                },
                body: serverPackage,
            });
            const json = await response.json();
            if (json.video.liveness_result && json.video.liveness_result.decision) {
                return json.video.liveness_result.decision;
            }
        }
        catch (error) {
            console.error(error);
        }
        return null;
    }

    /**
     * Draws the race track on the canvas for where to position the face.
     * @param {HTMLCanvasElement} canvas - The canvas element to draw on.
     * @param {number} canvas_width - The width of the canvas.
     * @param {number} canvas_height - The height of the canvas.
     * @param {number[]} capture_region - The capture region coordinates.
    */
    async function drawRaceTrack(canvas, canvas_width, canvas_height, capture_region) {
        const window = Dimensions.get("window");
        const window_w = window.width;
        const window_h = window.height;
        try {
            if (canvas != null) {
                const ctx = await canvas.getContext("2d");
                canvas.width = canvas_width;
                canvas.height = canvas_height;
                ctx.clearRect(0, 0, canvas.width, canvas.height);
                if (capture_region.length == 4) {
                    let roi_x = capture_region[0];
                    let roi_y = capture_region[1];
                    let roi_w = capture_region[2];
                    let roi_h = capture_region[3];
                    let x = (roi_x / canvas_width) * window_w;
                    let y = (roi_y / canvas_height) * window_h;
                    let w = (roi_w / canvas_width) * window_w;
                    let h = (window_h / 2) * (1 / (canvas_width / canvas_height));
                    let r = w / 2;
                    if (w < 2 * r) {
                        r = w / 2;
                    }
                    if (h < 2 * r) {
                        r = h / 2;
                    }
                    ctx.lineWidth = 10;
                    ctx.strokeStyle = "red";
                    ctx.globalAlpha = 0.6;
                    ctx.beginPath();
                    ctx.moveTo(x + r, y);
                    ctx.arcTo(x + w, y, x + w, y + h, r);
                    ctx.arcTo(x + w, y + h, x, y + h, r);
                    ctx.arcTo(x, y + h, x, y, r);
                    ctx.arcTo(x, y, x + w, y, r);
                    ctx.closePath();
                    ctx.stroke();
                }
            }
        }
        catch (e) {
            console.error(e);
        }
    }

    // useEffect hook to cleanup when the component is unmounting
    useEffect(() => {
        /**
         * Cleanup function to stop the capture session and clean up resources.
         * Triggered when the component is unmounting or transitioning out.
        */
        const unsubscribe = navigation.addListener("transitionStart", async e => {
            if (e.data.closing) {
                await stopCaptureSession();
                await cleanupCapture();
            }
        });
        return unsubscribe;
    }, [navigation]);

    // This effect hook captures updates from the capture state. It's an async function that performs
    // several operations every time it's called.
    useEffect(() => {
        /**
         * Async function that handles updates from the capture state.
         * Performs various operations based on the current state.
        */
        async function onStateUpdate() {
            try {
                // If the capture session is active
                if (isCapturing) {

                    // Clear existing capture state if it exists
                    if (currentCaptureState != null) {
                        currentCaptureState.destroy();
                        currentCaptureState = null;
                    }

                    // Get new capture state from the current face capture object
                    currentCaptureState = await currentFaceCapture.getCaptureSessionState();
                    if (currentCaptureState == null) {
                        // If the capture state is null, wait for the specified interval before checking again
                        captureStateTimer = setTimeout(onStateUpdate, captureUpdateInterval);
                        return;
                    }

                    // Get the current status of the capture session
                    currentCaptureSessionStatus = await currentCaptureState.getStatus();
                    let newCaptureSessionStatus = ConvertIntToCaptureSessionStatusEnum(currentCaptureSessionStatus);
                    // Update the text field that displays the current capture session status
                    if (currentCaptureSessionStatusRef) {
                        currentCaptureSessionStatusRef.current.setNativeProps(
                            { text: newCaptureSessionStatus.description }
                        );
                    }

                    // Handle different types of session status
                    switch(currentCaptureSessionStatus) {
                        case CaptureSessionStatus.COMPLETED.value:
                            handleCaptureCompleted();
                            return;
                        case CaptureSessionStatus.ABORTED.value:
                            handleCaptureAborted();
                            return;
                        case CaptureSessionStatus.TIMED_OUT.value:
                            handleCaptureTimedOut();
                            return;
                        default:
                            break;
                    }

                    // Get the feedback of the capture session
                    currentCaptureSessionFeedback = await currentCaptureState.getFeedback();
                    let newCaptureSessionFeedback = ConvertIntToAutoCaptureFeedbackEnum(currentCaptureSessionFeedback);

                    // Update the text field that displays the current capture session feedback
                    if (currentCaptureSessionFeedbackRef) {
                        currentCaptureSessionFeedbackRef.current.setNativeProps({
                            text: newCaptureSessionFeedback.description
                        });
                    }

                    // Get the captured frame from the session
                    currentCaptureSessionFrame = await currentCaptureState.getFrame();
                    currentCaptureSessionFrameFull = "data:image/jpg;base64," + currentCaptureSessionFrame;
                    // Display the captured frame on the screen
                    if (currentCaptureSessionFrameRef) {
                        if (isAndroid) {
                            currentCaptureSessionFrameRef.current.setNativeProps({
                                source: { uri: currentCaptureSessionFrameFull }
                            });
                        }
                        else {
                            currentCaptureSessionFrameRef.current.setNativeProps({
                                source: [{ uri: currentCaptureSessionFrameFull }]
                            });
                        }
                    }

                    // Draw the capture region on the screen
                    if (currentCaptureSessionRacetrackRef && !hasDrawnRacetrack) {
                        currentCaptureRegion = await currentFaceCapture.captureSessionGetCaptureRegion();
                        drawRaceTrack(
                            currentCaptureSessionRacetrackRef.current.context2D.canvas,
                            canvasWidth,
                            canvasHeight,
                            currentCaptureRegion);
                        hasDrawnRacetrack = true;
                    }
                }
            }
            catch (e) {
                console.error(e);
            }
            // Repeat this function after the specified interval
            captureStateTimer = setTimeout(onStateUpdate, captureUpdateInterval);
        }
        captureStateTimer = setTimeout(onStateUpdate, captureUpdateInterval);
        // Clear the pending timeout when the component is unmounted
        return () => {
            clearTimeout(captureStateTimer);
        };
    }, []);

    // This effect hook calls the main function to setup and start the capture session
    useEffect(() => {
        /**
         * Async function that sets up and starts the capture session.
         * Gets capture data, initializes the face capture library, and starts the capture session.
        */
        async function main() {
            // Get capture data from local storage or default settings
            await getCaptureData();
            // Initialize the face capture library
            await setupCapture();
            // Start the face capture session
            await startCaptureSession();
        };
        main();
    }, []);

    // The return statement contains the layout of the screen
    return (
        <View>
            <View style={AppStyleImageCapture.statusMessage}>
                <TextInput
                    ref={currentCaptureSessionStatusRef}
                    style={AppStyleGeneral.action_text}
                    defaultValue={defaultCaptureSessionStatusName}
                    editable={false}/>
                <TextInput
                    ref={currentCaptureSessionFeedbackRef}
                    style={AppStyleGeneral.action_text}
                    defaultValue={defaultCaptureSessionFeedbackName}
                    editable={false}/>
            </View>
            <View style={AppStyleImageCapture.displayArea}>
                {isAndroid ? (
                    <AndroidImage
                        ref={currentCaptureSessionFrameRef}
                        style={AppStyleImageCapture.image}>
                    </AndroidImage>
                ) : (
                    <Image
                        ref={currentCaptureSessionFrameRef}
                        style={AppStyleImageCapture.image}>
                    </Image>
                ) }
                <View style={AppStyleImageCapture.raceTrackView}>
                    <Canvas ref={currentCaptureSessionRacetrackRef}/>
                </View>
            </View>
            <Modal
                animationType="slide"
                transparent={true}
                visible={isLivenessResultVisible}
                onRequestClose={() => {
                    setLivenessResultVisible(!isLivenessResultVisible);
                    navigation.navigate("Main");
                }}>
                <View style={AppStyleGeneral.centeredView}>
                    <View style={AppStyleGeneral.modalView}>
                        <Text style={AppStyleGeneral.captureText}>
                            {currentLivenessResultHeader}
                        </Text>
                        {currentLivenessResultDecision === "LIVE" ? (
                            <Image
                                source={Images.checkmark_32}
                                style={AppStyleGeneral.smallLogo}
                            />
                        ) : (
                            <Image
                                source={Images.error_32}
                                style={AppStyleGeneral.smallLogo}
                            />
                        )}
                        {currentLivenessResultDecision === "LIVE" ? (
                            <View style={AppStyleGeneral.boxedView}>
                                <Text style={AppStyleGeneral.modalText}>Facial liveness:</Text>
                                <Text style={[AppStyleLiveness.text, AppStyleLiveness.passed]}>
                                    {currentLivenessResultDecision}
                                </Text>
                            </View>
                        ) : (
                            <View style={AppStyleGeneral.boxedView}>
                                <Text style={AppStyleGeneral.modalText}>Facial liveness:</Text>
                                <Text style={[AppStyleLiveness.text, AppStyleLiveness.failed]}>
                                    {currentLivenessResultDecision}
                                </Text>
                            </View>
                        )}
                        <Image
                            source={{uri: currentLivenessResultImage}}
                            style={AppStyleGeneral.tinyFace}
                            resizeMode="contain"
                        />
                        <Pressable
                            style={[AppStyleGeneral.button, AppStyleGeneral.buttonClose]}
                            onPress={() => {
                                setLivenessResultVisible(!isLivenessResultVisible);
                                navigation.navigate("Main");
                            }}>
                            <Text style={AppStyleGeneral.textStyleOK}>OK</Text>
                        </Pressable>
                    </View>
                </View>
            </Modal>
            <Modal
                animationType="slide"
                transparent={true}
                visible={isOtherResponseVisible}
                // Close the modal and navigate to the "Main" screen when the modal is closed
                onRequestClose={() => {
                    setOtherResponseVisible(!isOtherResponseVisible);
                    navigation.navigate("Main");
                }}>
                <View style={AppStyleGeneral.centeredView}>
                    <View style={AppStyleGeneral.modalView}>
                        <Text style={AppStyleGeneral.captureText}>
                            {currentOtherResponseHeader}
                        </Text>
                        <View style={AppStyleGeneral.boxedView}>
                            <Text style={AppStyleGeneral.modalText}>
                                {currentOtherResponseBody}
                            </Text>
                        </View>
                        <Pressable
                            style={[AppStyleGeneral.button, AppStyleGeneral.buttonClose]}
                            onPress={() => {
                                // Close the modal and navigate to the "Main" screen when the modal is closed
                                setOtherResponseVisible(!isOtherResponseVisible);
                                navigation.navigate("Main");
                            }}>
                            <Text style={AppStyleGeneral.textStyleOK}>OK</Text>
                        </Pressable>
                    </View>
                </View>
            </Modal>
        </View>
    );
};

// Export the CaptureScreen function as the default export of this module
export default CaptureScreen;

Frequently Asked Questions and Troubleshooting

One of the most common ways to fix issues when integrating the React Native Face Capture SDK is to perform a clean build. Follow the steps below to complete this process:

Common Steps

  1. Clear Watchman caches: watchman watch-del-all
  2. Delete the node_modules folder: rm -rf node_modules/
  3. Clean the npm cache: npm cache clean --force
  4. Reinstall node modules: npm install
  5. Reset the Metro bundler cache: npx react-native start --reset-cache
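Assuming the commands are run from the project root, the common steps above can be sketched as one reusable POSIX shell function. The run() helper and DRY_RUN flag are illustrative additions (not part of the SDK): set DRY_RUN=1 to print each command instead of executing it.

```shell
# Illustrative helper: echoes the command in dry-run mode, runs it otherwise.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# Common clean-build steps, in the order listed above.
clean_common() {
    run watchman watch-del-all                 # 1. clear Watchman caches
    run rm -rf node_modules/                   # 2. delete node_modules
    run npm cache clean --force                # 3. clean the npm cache
    run npm install                            # 4. reinstall node modules
    run npx react-native start --reset-cache   # 5. reset the Metro bundler cache
}
```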

For iOS

  1. Delete the ios/Pods folder and ios/Podfile.lock: rm -rf ios/Pods/ ios/Podfile.lock
  2. Deintegrate the pods: cd ios && pod deintegrate
  3. Reinstall the pods: pod install && cd ..
  4. Run the project: npx react-native run-ios
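The iOS steps can likewise be sketched as a single function, assuming it is run from the project root. The run()/DRY_RUN pattern is an illustrative addition for printing commands without executing them.

```shell
# Illustrative helper: echoes the command in dry-run mode, runs it otherwise.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# iOS clean-build steps, in the order listed above.
clean_ios() {
    run rm -rf ios/Pods/ ios/Podfile.lock                 # 1. remove installed pods
    run sh -c 'cd ios && pod deintegrate && pod install'  # 2-3. reintegrate pods
    run npx react-native run-ios                          # 4. rebuild and run
}
```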

For Android

  1. Delete the build folder: rm -rf android/app/build
  2. Delete the .gradle folder: rm -rf android/.gradle
  3. Clean the project: cd android && ./gradlew clean
  4. Build the project: ./gradlew build && cd ..
  5. Run the project: npx react-native run-android
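The Android steps follow the same shape, again assuming the project root as the working directory. The run()/DRY_RUN helper is an illustrative addition.

```shell
# Illustrative helper: echoes the command in dry-run mode, runs it otherwise.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# Android clean-build steps, in the order listed above.
clean_android() {
    run rm -rf android/app/build                                # 1. delete the build folder
    run rm -rf android/.gradle                                  # 2. delete the .gradle folder
    run sh -c 'cd android && ./gradlew clean && ./gradlew build'  # 3-4. clean and rebuild
    run npx react-native run-android                            # 5. run the project
}
```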

Specific Errors and Issues

Swift

Problem:

Users may encounter issues with the FBReactNativeSpec module.

Solution:

Follow the step-by-step instructions below to resolve issues with the FBReactNativeSpec module.

  1. Check the Installation of the FBReactNativeSpec Module:
    Ensure that the FBReactNativeSpec module is properly installed. To verify, you need to run the pod install command within the /ios/ directory.
  2. Use the ‘which node’ Command:
    If the issue persists after verifying the installation, run the command which node in your terminal.
  3. Create a .xcode.env.local File:
    Based on the output from step 2, create a new .xcode.env.local file (if it does not exist) inside the /ios/ directory.
  4. Update the .xcode.env.local File:
    In the newly created .xcode.env.local file, add the line export NODE_BINARY="(output of which node)". Make sure to replace (output of which node) with the actual output obtained in step 2.
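Steps 2 through 4 can be condensed into a short shell snippet, run from the project root. This is a sketch under the assumption that node is on your PATH; if it is not found, the file is still written with an empty path.

```shell
# Capture the Node binary path (empty if node is not on the PATH).
NODE_PATH="$(command -v node || true)"
# mkdir -p is a no-op if the ios/ directory already exists.
mkdir -p ios
# Write the NODE_BINARY export into ios/.xcode.env.local (steps 3-4 above).
printf 'export NODE_BINARY="%s"\n' "$NODE_PATH" > ios/.xcode.env.local
cat ios/.xcode.env.local
```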

Implementing these steps should resolve any problems with the FBReactNativeSpec module. If the issues persist, feel free to reach out to us for further support.


Problem:

Users might experience an error stating ‘AwFaceCaptureFramework.h’ file not found.

Solution:

Follow the instructions below to troubleshoot and resolve this issue.

  1. Verify the Embedding and Signing of the AwFaceCaptureFramework:
    Ensure that the AwFaceCaptureFramework is correctly embedded and signed. The embedding and signing process may vary based on your setup, so please refer to your environment-specific documentation if needed.
  2. Check the Header File Reference in RCTFaceCaptureModule.h:
    Examine the reference to the AwFaceCaptureFramework.h file in your RCTFaceCaptureModule.h file. It should point to the correct header file. The correct import statement should be as follows: #import <AwFaceCaptureFrameWork/AwFaceCaptureFrameWork.h>

By following these steps, you should be able to resolve the ‘AwFaceCaptureFramework.h’ file not found error. If you continue to face issues, please reach out to us for further assistance.

Support

Please visit Aware Support if you require any additional support with integrating the Face Capture SDK into your project.
