Face Capture Flutter SDK

AwareID Face Capture Overview

The Face Capture SDK is used for identity and liveness verification using a facial scan.

The library is meant to be used with the AwareID SaaS platform or the Knomi platform.

Important: Please note that this package alone does not perform any liveness or identity verification. Those processes are performed server-side using the output from this library.

Requirements

  • iOS 11.0 and above
  • Android API level 21 and above

Both the iOS and Android platforms will also need camera permissions set.

For iOS, an entry must be made in Info.plist to gain camera permissions:

<key>NSCameraUsageDescription</key>
<string>Is used to access your device's camera</string>

For Android, the line below must be added to the AndroidManifest.xml to gain camera permissions:

<uses-permission android:name="android.permission.CAMERA" />

Requesting runtime permission for camera

There are various ways you can request use of the camera in your application, but for our example we like to use the Permission Handler package. Instructions on how to use it can be found in that package's documentation.
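
As a minimal sketch (assuming the permission_handler package has been added to pubspec.yaml), requesting the camera permission before starting a capture session can look like this:

import 'package:permission_handler/permission_handler.dart';

//Requests the camera permission at runtime and reports whether it was
//granted. Call this before starting a capture session.
Future<bool> requestCameraPermission() async {
  final PermissionStatus status = await Permission.camera.request();
  return status.isGranted;
}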

Installation

Installation of the AwareID Face Capture package is done in two steps:

  1. Copy the face_capture folder to a desired location within your project.
  2. Reference the package in your project's pubspec.yaml file under dependencies.

Below are two examples of referencing the package by adding a path entry to pubspec.yaml.

dependencies:
  flutter:
    sdk: flutter
    ...
    ...
  face_capture:
    path: face_capture

The above example has the "face_capture" folder at the root of the project folder.

dependencies:
  flutter:
    sdk: flutter
    ...
    ...
  face_capture:
    path: repo/face_capture

The above example has the "face_capture" folder in a folder called "repo" at the root of the project folder.

Once the folder is in the desired location within the project and the face_capture package is referenced in pubspec.yaml, perform a flutter pub get.

Getting started

There are two ways to work with the Face Capture SDK. The first uses the pre-built AwCaptureWidget to instantiate a Face Capture session and handle all information coming from the SDK. The second involves instantiating an instance of FaceCapture, providing a relevant workflow, and then using an isolate to call the function that returns the image frame, the area of interest, feedback, and status from the SDK.

Both methods start with importing face_capture into the working file.

import 'package:face_capture/face_capture.dart';

Below is a summary of the pros and cons of each approach: using the AwCaptureWidget and manually initiating Face Capture.

AwCaptureWidget
  Pros: quick and easy to get up and running; little boilerplate code; handles the multithreading operation.
  Cons: limited user interface customizability.

Manually initiating Face Capture
  Pros: maximum user interface customizability; can be further optimized for performance.
  Cons: more boilerplate code is required; you must set up the multithreading operation to get frames, feedback, and status updates.

The AwCaptureWidget provides a complete UI for the Face Capture SDK, inclusive of a customizable AppBar. It can be placed in any stateful/stateless widget and only requires a few configuration options to get up and running, as seen below:

return AwCaptureWidget(
  awIDController: faceCaptureController.awIDController,
  showROI: true,
);

The above code includes:

  • awIDController: this controller needs to be created and passed to the AwCaptureWidget to initialize Face Capture.

  • showROI: a boolean that controls showing an overlay on the camera feed indicating where the user should place their face. The overlay also uses color to show whether a capture succeeded: red indicates a failed capture and green a successful one.


AwIDController

The AwIDController has several parameters and callbacks to set to handle various Face Capture states as well as to configure the camera for capture.

The main capture states and their respective enumerations can be found in the Glossary at the end of this page.

Below is a code snippet showing how the AwIDController is initialized:

AwIDController(
    cameraPosition: CameraPosition.front,
    cameraOrientation: CameraOrientation.portrait,
    packageType: PackageType.balanced,
    facePublicKey: facePublicEncryptionKey,
    getCapturedImage: (compliantImage) {
      //Returns a Uint8List of the captured image
    },
    getImageEncryptedPackage: (imagePackage) {
      //To get an encrypted image package you need to set facePublicKey to the
      //public encryption key for face capture.
    },
    onFaceCaptureCompleted: () async {
      log("onFaceCaptureCompleted");
    },
    onFaceCaptureTimedOut: () {
      log("onFaceCaptureTimedOut");
    },
    onFaceCaptureAborted: () {
      log("onFaceCaptureAborted");
    },
    onFaceCaptureStopped: () {
      log("onFaceCaptureStopped");
    });

With these properties set, the capture session begins immediately once the page holding the widget is navigated to. Once the capture is completed, the getCapturedImage callback (and, if facePublicKey is set, getImageEncryptedPackage) is triggered.

Another thing to note is that there are two types of output from the Face Capture library for processing:

  1. encrypted package: to be used with AwareID
  2. unencrypted package: to be used with Knomi or another service for liveness and identity verification

For use with AwareID you must pass a public encryption key into the controller via facePublicKey. Once this is done, you can set the getImageEncryptedPackage callback to use the encrypted package in a network call for further processing by AwareID.

Note: AwareID only accepts encrypted packages.
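
As a hedged sketch of that network call (the backend URL, content type, and helper name below are illustrative assumptions, not part of this SDK), the encrypted package handed to getImageEncryptedPackage could be forwarded like this:

import 'dart:developer';

import 'package:http/http.dart' as http;

//Posts the encrypted package to your own backend, which forwards it to
//AwareID; the endpoint and headers here are placeholders to adapt.
Future<void> uploadEncryptedPackage(String imagePackage) async {
  final response = await http.post(
    Uri.parse('https://your-backend.example.com/awareid/face'), //hypothetical
    headers: {'Content-Type': 'application/json'},
    body: imagePackage, //the encrypted package produced by Face Capture
  );
  log('AwareID upload status: ${response.statusCode}');
}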

Complete example with AwCaptureWidget

import 'package:face_capture/face_capture.dart';
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
        useMaterial3: true,
      ),
      home: const MyHomePage(title: 'Face Capture Demo'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});
  final String title;
  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        backgroundColor: Theme.of(context).colorScheme.inversePrimary,
        title: Text(widget.title),
      ),
      body: Center(
        child: TextButton(
          onPressed: () {
            Navigator.push(
              context,
              MaterialPageRoute(
                  builder: (context) => const FaceCaptureScreen()),
            );
          },
          child: const Text("Face Capture Button"),
        ),
      ),

    );
  }
}


class FaceCaptureScreen extends StatefulWidget {
  const FaceCaptureScreen({super.key});

  @override
  State<FaceCaptureScreen> createState() => _FaceCaptureScreenState();
}

class _FaceCaptureScreenState extends State<FaceCaptureScreen> {
  late AwIDController awIDController;
  @override
  void initState() {
    super.initState();
    awIDController = AwIDController(
        cameraPosition: CameraPosition.front,
        cameraOrientation: CameraOrientation.portrait,
        packageType: PackageType.balanced,
        getCapturedImage: (compliantImage) {},
        getImagePackage: (imagePackage) {},
        onFaceCaptureCompleted: () async {},
        onFaceCaptureTimedOut: () {},
        onFaceCaptureAborted: () {},
        onFaceCaptureStopped: () {});
    awIDController.startCapturePreview();
  }

  @override
  Widget build(BuildContext context) {
    return AwCaptureWidget(
      awIDController: awIDController,
      showROI: true,
    );
  }
}

Manually initiating Face Capture

The rest of this guide walks through the second approach: driving Face Capture yourself, step by step.

Step 1. Create a Face Capture Object

Our first step in integration is to create a face capture object.

FaceCapture faceCapture = FaceCapture();

Step 2. Create a Workflow Object

Next we want to create a workflow. We do this by calling the workflowCreate method on our faceCapture object. This method takes a string corresponding to a workflow.
Examples of workflow options include but are not limited to:

  • Foxtrot
  • Charlie
  • Delta

Each workflow option performs the capture in a slightly different way.

Workflow workFlow = faceCapture.workflowCreate(workflowName);

Step 3. Adjust Workflow Object

The capture profile is an XML file that must be read into your project as a UTF-8 string and set on the workflow, as shown below.

💡 This file is supplied in the sample project at assets/profiles/face_capture_foxtrot_client.xml

workFlow.setPropertyString(WorkflowProperty.USERNAME, mUsername);
workFlow.setPropertyDouble(WorkflowProperty.CAPTURE_TIMEOUT, captureTimeout);
workFlow.setPropertyString(WorkflowProperty.CAPTURE_PROFILE, mCaptureProfile);
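
As a minimal sketch, assuming the profile XML is bundled as a Flutter asset (and declared under assets: in pubspec.yaml), it can be read into a string like this:

import 'package:flutter/services.dart' show rootBundle;

//Loads the capture profile XML from the app's assets as a UTF-8 string.
Future<String> loadCaptureProfile() {
  return rootBundle
      .loadString('assets/profiles/face_capture_foxtrot_client.xml');
}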

Step 4. Select a Camera

To perform a face capture we of course use a device's camera, so we need to tell Face Capture which of the device's cameras to use, in which orientation, and whether it is a front- or back-facing camera.
We first get the full list of cameras the device has by calling getCameraList on our faceCapture object, passing in the camera position (CameraPosition.front or CameraPosition.back) we would like to use.
We then select one of the cameras in that position and set its orientation using CameraOrientation.portrait or CameraOrientation.landscape.

//Get the camera list for the front-facing cameras
List<Camera> cameraList = faceCapture.getCameraList(CameraPosition.front);

//Choose the first camera in the list
Camera currentCamera = cameraList[0];

//Set the camera orientation to portrait
currentCamera.setOrientation(CameraOrientation.portrait);

Step 5. Begin a Capture Session

Now all we need to do is start our capture session. We do this by calling the startCaptureSession method on our Face Capture object, passing in our previously created workflow and the camera we just selected.

//Start capture session by passing in the workflow object and the selected camera
faceCapture.startCaptureSession(workFlow, currentCamera);

Step 6. Handle capture session states and present feedback to the user

Now that we have started our capture session, we need to let the user see the capture session image and feedback so they can guide themselves to a compliant image. Face Capture has a method for this called getCaptureSessionState().
This returns a CaptureState object. A CaptureState contains three important pieces of data for driving the UI:

  • AutoCaptureFeedback: This is an enumeration that tells the user how to adjust their device and face to allow for a good capture. For a full list of these enumerations check AutoCaptureFeedback Enumerations
  • Current image: this is in the format of a Uint8List
  • CaptureSessionStatus: this data point shows the current state of the capture process as an enumeration. For a full list of these enumerations check CaptureSessionStatus Enumerations

Each of the above is retrieved from the CaptureState by calling:

  • getFeedback()
  • getFrame()
  • getStatus()

Note: The CaptureState is refreshed for each frame analyzed by Face Capture (30 times per second). To show this to the user without blocking the UI, we'll need to use multithreading to display the images.

To call these functions using multithreading, we'll need to utilize a Flutter feature called isolates.

Isolates allow a function to run on another thread and then pass data back to the main thread for display in the UI. The data passed back must be of primitive types. To learn more about isolates, see the Flutter documentation.

Let's start by creating the function that will get the current CaptureState and then pass that data back to the main thread. Keep in mind that this function must be a top-level function. We'll call it getCaptureSessionState() and pass in a List of the things we need.

void getCaptureSessionState(List<dynamic> objects) {}

We can then re-associate the objects in our list to the required types. If our list is [faceCapture, receivePort.sendPort, workflow], then our code looks like:

void getCaptureSessionState(List<dynamic> objects) {
  FaceCapture faceCapture = objects[0];
  SendPort sendPort = objects[1];
  Workflow workflow = objects[2];
}

We can then add a while loop and use it to call getCaptureSessionState() on our Face Capture object repeatedly.

void getCaptureSessionState(List<dynamic> objects) {
  FaceCapture faceCapture = objects[0];
  SendPort sendPort = objects[1];
  Workflow workflow = objects[2];

  while (true) {
    CaptureState currentCaptureState = faceCapture.getCaptureSessionState();
  }
}

Now we can check for the different capture session statuses and either send the capture state back to the main thread or end the isolate. In the example below we check whether the status is CaptureSessionStatus.capturing and, if so, send the feedback, frame, and status back to the main thread.

void getCaptureSessionState(List<dynamic> objects) {
  FaceCapture faceCapture = objects[0];
  SendPort sendPort = objects[1];
  Workflow workflow = objects[2];

  while (true) {
    CaptureState currentCaptureState = faceCapture.getCaptureSessionState();
    if (currentCaptureState.getStatus() == CaptureSessionStatus.capturing) {
      AutoCaptureFeedback feedback = currentCaptureState.getFeedback();
      Uint8List frame = currentCaptureState.getFrame();
      CaptureSessionStatus status = currentCaptureState.getStatus();

      sendPort.send([feedback, frame, status]);
    }
  }
}

We can then add an else-if for the CaptureSessionStatus.completed status. In our example we call faceCapture.getServerPackage(workflow, PackageType.highUsability), send the results back to the main thread, and break to ensure we close the while loop. We handle CaptureSessionStatus.timedOut the same way.

void getCaptureSessionState(List<dynamic> objects) {
  FaceCapture faceCapture = objects[0];
  SendPort sendPort = objects[1];
  Workflow workflow = objects[2];

  while (true) {
    CaptureState currentCaptureState = faceCapture.getCaptureSessionState();
    if (currentCaptureState.getStatus() == CaptureSessionStatus.capturing) {
      AutoCaptureFeedback feedback = currentCaptureState.getFeedback();
      Uint8List frame = currentCaptureState.getFrame();
      CaptureSessionStatus status = currentCaptureState.getStatus();

      sendPort.send([feedback, frame, status]);
    } else if (currentCaptureState.getStatus() ==
        CaptureSessionStatus.completed) {
      String serverPackage =
          faceCapture.getServerPackage(workflow, PackageType.highUsability);
      sendPort.send([
        AutoCaptureFeedback.faceCompliant,
        faceCapture.getCapturedImage(workflow),
        CaptureSessionStatus.completed,
        serverPackage
      ]);
      break;
    } else if (currentCaptureState.getStatus() ==
        CaptureSessionStatus.timedOut) {
      sendPort.send([
        AutoCaptureFeedback.noFaceDetected,
        null, //no frame to send on timeout
        CaptureSessionStatus.timedOut,
      ]);
      break;
    }
  }
}

Now that we have our function for getting the capture session state, we can write a method that uses it. In this example we'll call it useIsolate(). This async function spawns our isolate, passing in our faceCapture and workflow objects as well as our sendPort.

This is where we get our current capture session state and start our isolate.

Future<void> useIsolate() async {
  try {
    getStatusIsolate = await Isolate.spawn(
      getCaptureSessionState,
      [
        FaceCaptureImplementation.faceCapture,
        receivePort.sendPort,
        FaceCaptureImplementation.workflow
      ],
    );
  } on Error catch (e) {
    log(e.toString());
  }

  receivePort.listen((message) {
    if (message[0] == AutoCaptureFeedback.faceCompliant) {
      faceCompliant.value = true;
      frameAnalyzer(message[1]);
    }
    if (message[2] == CaptureSessionStatus.completed) {
      //Completion is handled below via handleCaptureSession.
    } else if (message[2] == CaptureSessionStatus.timedOut) {
      streamController.add([
        AutoCaptureFeedback.noFaceDetected,
        null,
        CaptureSessionStatus.timedOut
      ]);
      handleCaptureSession(message);
      return;
    }

    streamController.add(message);
    handleCaptureSession(message);
    log(message[2].toString());
  });
}
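
The snippet above references members of a surrounding class that are not shown. As an assumption-labeled sketch, minimal declarations for the isolate plumbing could look like the following; faceCompliant, frameAnalyzer, handleCaptureSession, and FaceCaptureImplementation are app-specific and left to your implementation:

import 'dart:async';
import 'dart:isolate';

//Receives the messages the isolate sends back via sendPort.send(...)
final ReceivePort receivePort = ReceivePort();

//Re-broadcasts capture state updates to the UI layer
final StreamController<List<dynamic>> streamController =
    StreamController<List<dynamic>>.broadcast();

//Handle to the spawned isolate so it can be killed when the session ends
Isolate? getStatusIsolate;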

Face Capture Methods

Stop a Capture Session

If we can start a capture session, we must be able to stop one as well. We do this by calling the stopCaptureSession method on our Face Capture object.

//Stop capture session
faceCapture.stopCaptureSession();

Get the Capture Region

Getting the capture region returns a Rectangle that indicates the area of the image being assessed by Face Capture.

Rectangle currentCaptureRegion = mFaceCapture.captureSessionGetCaptureRegion();

Get the current Capture Session State

Returns the current state of capture including the current image, feedback enumeration and the face capture status.

mCurrentCaptureState = mFaceCapture.getCaptureSessionState();

Get the Capture State’s Image

mCurrentCaptureSessionFrame = mCurrentCaptureState.getFrame();

Get the Capture State’s Feedback

mCurrentCaptureSessionFeedback = mCurrentCaptureState.getFeedback();

Get the Capture State’s Status

mCurrentCaptureSessionStatus = mCurrentCaptureState.getStatus();

Get the Server Package (unencrypted)

String currentCaptureServerPackage = mFaceCapture.getServerPackage(mWorkFlow, mPackageType);

Get the Encrypted Server Package

String currentCaptureServerPackage = mFaceCapture.getEncryptedServerPackage(mEncryptionType, mPublicKey, mWorkFlow, mPackageType);

Enable Auto Capture

faceCapture.captureSessionEnableAutocapture(true);

Glossary

AutoCaptureFeedback enumerations

  • AutoCaptureFeedback.faceCompliant: Face Capture was able to take a viable capture. Recommendation: continue with processing the capture to get either an encrypted package or a regular package.
  • AutoCaptureFeedback.noFaceDetected: no face was detected in the capture session image. Recommendation: position the device/face so that the user's face is within the region of interest.
  • AutoCaptureFeedback.multipleFacesDetected: more than one face was detected in the capture session image. Recommendation: there should only be one person attempting to capture their face at a time; move other people out of the frame.
  • AutoCaptureFeedback.invalidPose: the face in the frame is in a position that is not accepted; Face Capture wants an image that is straight on and within the region of interest. Recommendation: position the face/device so the user's face is straight on in the frame, with both ears showing and eyes level.
  • AutoCaptureFeedback.faceTooFar: the face is too far away from the camera. Recommendation: move the face closer to the camera so that it is fully inside the region of interest.
  • AutoCaptureFeedback.faceTooClose: the face is too close to the camera. Recommendation: move the face further away from the camera so that it is fully inside the region of interest.
  • AutoCaptureFeedback.faceOnLeft: the face is too far to the left. Recommendation: move the face toward the right of the camera to get the face in the region of interest.
  • AutoCaptureFeedback.faceOnRight: the face is too far to the right. Recommendation: move the face toward the left of the camera to get the face in the region of interest.
  • AutoCaptureFeedback.faceTooHigh: the face is too close to the top of the frame. Recommendation: move the face down in relation to the camera to get the face in the region of interest.
  • AutoCaptureFeedback.faceTooLow: the face is too close to the bottom of the frame. Recommendation: move the face up in relation to the camera to get the face in the region of interest.
  • AutoCaptureFeedback.insufficientLighting: there is not enough light in the current location. Recommendation: move to an area with more light.
  • AutoCaptureFeedback.leftEyeClosed: the user's left eye is closed. Recommendation: open both eyes to complete the capture.
  • AutoCaptureFeedback.rightEyeClosed: the user's right eye is closed. Recommendation: open both eyes to complete the capture.
  • AutoCaptureFeedback.darkGlassesDetected: the user is wearing dark glasses and Face Capture can't identify their eyes. Recommendation: remove the dark glasses to complete the capture.
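
As a minimal sketch, these feedback values can be mapped to short user-facing instructions; the wording below is illustrative and can be adapted to your UI:

import 'package:face_capture/face_capture.dart';

//Maps Face Capture feedback to a short instruction shown to the user.
String feedbackMessage(AutoCaptureFeedback feedback) {
  switch (feedback) {
    case AutoCaptureFeedback.noFaceDetected:
      return 'Position your face inside the region of interest';
    case AutoCaptureFeedback.faceTooFar:
      return 'Move closer to the camera';
    case AutoCaptureFeedback.faceTooClose:
      return 'Move further from the camera';
    case AutoCaptureFeedback.insufficientLighting:
      return 'Move to an area with more light';
    case AutoCaptureFeedback.faceCompliant:
      return 'Hold still';
    default:
      return 'Center your face in the frame';
  }
}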

CaptureSessionStatus enumerations

  • CaptureSessionStatus.completed: a viable image has been captured by Face Capture. Recommendation: continue with processing.
  • CaptureSessionStatus.timedOut: the capture session was not able to capture a viable image within the timeout allowed. Recommendation: handle the UI to either cancel the session for the user or attempt to restart the session.
  • CaptureSessionStatus.aborted: the capture session was unsuccessful; the application should handle this case as a camera/hardware failure. Recommendation: handle the UI to either cancel the session for the user or attempt to restart the session.
  • CaptureSessionStatus.stopped: the capture session has been stopped (for example, after calling stopCaptureSession).
  • CaptureSessionStatus.capturing: a Face Capture session is active and attempting to capture a compliant frame. Recommendation: show the AutoCaptureFeedback interpretation on screen to guide the user to capture a compliant face.
  • CaptureSessionStatus.postCaptureProcessing: a compliant face image was captured and Face Capture is now performing any configured postprocessing. Recommendation: handle post-processing.
  • CaptureSessionStatus.idle: the current session is not actively capturing a face but the session is still open. Recommendation: either start a capture or end the capture session.
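
As a minimal sketch of acting on these statuses per the recommendations above (the branch bodies are left as comments for your own app code):

import 'package:face_capture/face_capture.dart';

//Dispatches on the current capture session status.
void handleStatus(CaptureSessionStatus status) {
  switch (status) {
    case CaptureSessionStatus.capturing:
      //Show the AutoCaptureFeedback interpretation to guide the user.
      break;
    case CaptureSessionStatus.completed:
      //A viable image was captured; fetch the server package and proceed.
      break;
    case CaptureSessionStatus.timedOut:
    case CaptureSessionStatus.aborted:
      //Offer the user the option to cancel or restart the session.
      break;
    default:
      //idle, stopped, postCaptureProcessing: handle as appropriate.
      break;
  }
}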