Face Capture (Flutter)

Estimated reading: 4 minutes


The Face Capture SDK is used for identity and liveness verification using a facial scan.

The package is meant to be used with the AwareID SaaS platform.

Requirements

  • iOS 11.0 and above
  • Android API level 21

Both the iOS and Android platforms also need camera permissions set.

For iOS, an entry must be made in the Info.plist to gain camera permission:

<key>NSCameraUsageDescription</key>
<string>Is used to access your device's camera</string>

For Android, the lines below must be added to the Android manifest to gain camera permissions:

<uses-feature android:name="android.hardware.camera"/>   
<uses-permission android:name="android.permission.CAMERA"/>

Installation

Installation of the AwareID Face Capture package is done in two steps:
1) Copy the face_capture folder to a desired location within your project.
2) Reference the package in your project’s pubspec.yaml file under dependencies:

dependencies:
  flutter:
    sdk: flutter
    ...
    ...
  face_capture:
    path: face_capture

The above example has the “face_capture” folder at the root of the project folder.

dependencies:
  flutter:
    sdk: flutter
    ...
    ...
  face_capture:
    path: repo/face_capture

The above example has the “face_capture” folder in a folder called “repo” at the root of the project folder.

Once the folder is in the desired location within the project and the face_capture package is referenced in the pubspec.yaml, run flutter pub get.

Getting started

There are two ways to work with the FaceCapture SDK.
The first is to use the pre-built AWCameraPreview widget, which instantiates a FaceCapture session and handles all information coming from the SDK.
The second is to instantiate a FaceCapture instance, provide it a relevant workflow, and then use an isolate to call the functions that return the image frame, the area of interest, the feedback, and the status of the SDK.

Both methods start with importing face_capture into the working file.

import 'package:face_capture/face_capture.dart';

Pros of using AWCameraPreview widget

  • Quick and easy to get up and running
  • Little boilerplate code
  • Handles the multithreading operation

Cons of using AWCameraPreview widget

  • Limited user interface customizability

Pros of manually initiating Face Capture

  • Maximum user interface customizability
  • Can further optimize for performance

Cons of manually initiating Face Capture

  • Requires more boilerplate code
  • Requires setting up a multithreading operation to get frames, feedback and status updates

Using AWCameraPreview widget

The AWCameraPreview widget provides a complete UI for the FaceCapture SDK, inclusive of a customizable app bar. It can be placed in any stateful/stateless widget and requires only a few configuration options to get up and running, as seen below:

return AWCameraPreview(
  encryptionServerKey: publicKey,
  getCapturedImagePackage: (capturedImagePackage) async {
    performRegistration(capturedImagePackage);
  },
  username: "someUsername",
  captureTimeOut: 0.0,
  profileProperty: readCaptureProfileString(),
);

The above code includes:

  • encryptionServerKey: the public key (as a String) given by the server when a face capture is initiated.
  • getCapturedImagePackage: a callback function that receives the encrypted face data to be transmitted to the server.
  • username: a name given to the face capture session (can be anything).
  • captureTimeOut: a double used to set the capture timeout of the face capture object, in seconds.
  • profileProperty: the given XML file with the capture profile.

With these properties set, the capture session begins immediately once the page holding the widget is navigated to. Once the capture is completed, the getCapturedImagePackage callback is triggered.
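Putting it together, a minimal page using the widget might look like the sketch below. The FaceCapturePage class, the FutureBuilder wiring, and the asset path are illustrative assumptions; only the AWCameraPreview parameters come from the snippet above, and the encrypted package would be sent to your backend inside the callback.

```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart' show rootBundle;

import 'package:face_capture/face_capture.dart';

/// Hypothetical page wrapping AWCameraPreview. `publicKey` is the key
/// returned by the server when the face capture is initiated.
class FaceCapturePage extends StatelessWidget {
  const FaceCapturePage({super.key, required this.publicKey});

  final String publicKey;

  @override
  Widget build(BuildContext context) {
    return FutureBuilder<String>(
      // Read the capture profile XML shipped with the sample project.
      future: rootBundle
          .loadString('assets/profiles/face_capture_foxtrot_client.xml'),
      builder: (context, snapshot) {
        if (!snapshot.hasData) {
          return const Center(child: CircularProgressIndicator());
        }
        return AWCameraPreview(
          encryptionServerKey: publicKey,
          getCapturedImagePackage: (capturedImagePackage) async {
            // Transmit the encrypted face data to your server here.
          },
          username: "someUsername",
          captureTimeOut: 0.0,
          profileProperty: snapshot.data!,
        );
      },
    );
  }
}
```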

Manually instantiating Face Capture

1. Create Face Capture Object

Our first step in integration is to create a face capture object

mFaceCapture = FaceCapture();

2. Create a Workflow Object

mWorkFlow = mFaceCapture.workflowCreate(workflowName);

3. Adjust Workflow Object

The Capture Profile is an XML file that must be read into your project as a UTF-8 String. 💡 This file is supplied in the sample project at `assets/profiles/face_capture_foxtrot_client.xml`

mWorkFlow.setPropertyString(WorkflowProperty.USERNAME, mUsername);
mWorkFlow.setPropertyDouble(WorkflowProperty.CAPTURE_TIMEOUT, mCaptureTimeout);
mWorkFlow.setPropertyString(WorkflowProperty.CAPTURE_PROFILE, mCaptureProfile);
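One way to obtain that UTF-8 String in Flutter is to ship the profile as an asset (declared under the `assets:` section of pubspec.yaml) and load it with `rootBundle`; the helper name below is an assumption, but the asset path matches the sample project.

```dart
import 'package:flutter/services.dart' show rootBundle;

// Loads the capture profile XML from the app bundle as a UTF-8 String,
// ready to be passed as WorkflowProperty.CAPTURE_PROFILE.
Future<String> loadCaptureProfile() {
  return rootBundle
      .loadString('assets/profiles/face_capture_foxtrot_client.xml');
}
```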

4. Select a Camera

mCameraList = mFaceCapture.getCameraList(mCameraPosition);
mCurrentCamera = mCameraList[0];
mCurrentCamera.setOrientation(mCameraOrientation);

5. Begin a Capture Session

mFaceCapture.startCaptureSession(mWorkFlow, mCurrentCamera);

6. Stop a Capture Session

mFaceCapture.stopCaptureSession();

7. Get the Capture region

mCurrentCaptureRegion = mFaceCapture.captureSessionGetCaptureRegion();

8. Get the current Capture Session State

mCurrentCaptureState = mFaceCapture.getCaptureSessionState();

9. Get the Capture State’s Image

mCurrentCaptureSessionFrame = mCurrentCaptureState.getFrame();

10. Get the Capture State’s Feedback

mCurrentCaptureSessionFeedback = mCurrentCaptureState.getFeedback();

11. Get the Capture State’s Status

mCurrentCaptureSessionStatus = mCurrentCaptureState.getStatus();

12. Get the Server Package (unencrypted)

mCurrentCaptureServerPackage = mFaceCapture.getServerPackage(mWorkFlow, mPackageType);

13. Get the Encrypted Server Package

mCurrentCaptureServerPackage = mFaceCapture.getEncryptedServerPackage(mEncryptionType, mPublicKey, mWorkFlow, mPackageType);

14. Enable Auto Capture

mFaceCapture.captureSessionEnableAutocapture(true);
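The manual steps above can be combined into a single capture loop, sketched below. The polling interval, the completion check (isComplete), and the mCameraPosition, mCameraOrientation, mEncryptionType and mPackageType values are assumptions not defined on this page; consult the full Flutter API documentation for the real names. In a production app this loop would run in an isolate, as described in Getting started.

```dart
import 'package:face_capture/face_capture.dart';

// Sketch of a manual capture session combining steps 1-14.
Future<dynamic> runManualCapture(String workflowName, String username,
    String captureProfile, String publicKey) async {
  final mFaceCapture = FaceCapture(); // step 1

  // Steps 2-3: create and configure the workflow.
  final mWorkFlow = mFaceCapture.workflowCreate(workflowName);
  mWorkFlow.setPropertyString(WorkflowProperty.USERNAME, username);
  mWorkFlow.setPropertyDouble(WorkflowProperty.CAPTURE_TIMEOUT, 0.0);
  mWorkFlow.setPropertyString(WorkflowProperty.CAPTURE_PROFILE, captureProfile);

  // Step 4: select the first camera at the desired position
  // (mCameraPosition and mCameraOrientation are assumed values).
  final mCameraList = mFaceCapture.getCameraList(mCameraPosition);
  final mCurrentCamera = mCameraList[0];
  mCurrentCamera.setOrientation(mCameraOrientation);

  // Steps 5 and 14: start the session with auto capture enabled.
  mFaceCapture.startCaptureSession(mWorkFlow, mCurrentCamera);
  mFaceCapture.captureSessionEnableAutocapture(true);

  // Steps 7-11: poll the session for region, frame, feedback and status.
  while (true) {
    final region = mFaceCapture.captureSessionGetCaptureRegion();
    final state = mFaceCapture.getCaptureSessionState();
    final frame = state.getFrame();
    final feedback = state.getFeedback();
    final status = state.getStatus();
    // Hypothetical completion check -- the real status values are listed
    // in the full API documentation.
    if (status.isComplete) break;
    await Future<void>.delayed(const Duration(milliseconds: 33));
  }

  // Step 6: stop the session, then step 13: build the encrypted package.
  mFaceCapture.stopCaptureSession();
  return mFaceCapture.getEncryptedServerPackage(
      mEncryptionType, publicKey, mWorkFlow, mPackageType);
}
```

A real UI would render the frame, feedback and capture region on each loop iteration rather than discarding them.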


Full Flutter API Documentation
