API

The following sections provide an overview of the key functionalities of Visage|SDK and give links to the most important classes and relevant sample projects. Detailed information can be found in the documentation of each class, reached through the links in this section or on the side menu.

Visage|SDK includes the following main functionalities:

- facial features tracking
- facial features detection
- screen space gaze tracking
- facial feature analysis (gender and emotion estimation)
- face recognition
- liveness verification

Visage|SDK also provides a high-level API for augmented reality (AR), allowing very simple implementation of AR applications such as virtual eyewear try-on.

Configuring neural network runners

All listed APIs use configurable neural networks to process and analyze facial images.

An additional configuration file, NeuralNet.cfg, is provided in the www/lib folder. It allows specifying the desired backend that the runner will use for inference.
The following backend values are supported:
To choose the desired backend, make sure to preload the NeuralNet.cfg file to the virtual file system. The default backend is AUTO.
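
For illustration, a minimal sketch of preloading the configuration file into the virtual file system follows; the module object name (VisageModule) and the FS_createPreloadedFile call are assumptions based on the standard Emscripten API and the sample projects, not something specified on this page.

    // Sketch: preload NeuralNet.cfg into the virtual file system before any
    // tracker or detector is instantiated. The module name and preload call
    // are assumptions; check the sample projects for the exact usage.
    VisageModule.FS_createPreloadedFile(
        "/",                          // target folder in the virtual file system
        "NeuralNet.cfg",              // file name as seen by Visage|SDK
        "../../lib/NeuralNet.cfg",    // URL of the file on the web server
        true,                         // readable
        false                         // not writable
    );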

More information about configuring neural network runners can be found in the VisageTracker Configuration Manual, chapter 3, Configuring neural network runners.

The following sections provide an overview of these APIs and give links to the most important classes.

Facial features tracking (visageSDK.js)

The Visage tracker tracks multiple faces and facial features in video sequences and outputs the 3D head pose, facial expression, gaze direction, facial feature points and a full textured 3D face model. The tracker is fully configurable in terms of performance, quality, tracked features, facial actions and other options, in effect allowing a variety of customized trackers suited to different applications. Several common configurations are delivered. Details about configuring the tracker can be found in the VisageTracker Configuration Manual.
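
The sketch below outlines a typical per-frame tracking call from a canvas; the constructor argument, the track() signature and the status handling are assumptions based on the sample projects and should be checked against the VisageTracker and FaceData class documentation.

    // Simplified single-face sketch (names and signatures are assumptions;
    // see the VisageTracker and FaceData documentation for details).
    var tracker = new VisageModule.VisageTracker("Facial Features Tracker.cfg");
    var faceData = new VisageModule.FaceData();

    function trackFrame(canvas) {
        var w = canvas.width, h = canvas.height;
        var imageData = canvas.getContext("2d").getImageData(0, 0, w, h);

        // Copy the RGBA pixels into the Emscripten heap so the native code can read them.
        var ppixels = VisageModule._malloc(w * h * 4);
        var pixels = new Uint8Array(VisageModule.HEAPU8.buffer, ppixels, w * h * 4);
        pixels.set(imageData.data);

        var status = tracker.track(w, h, ppixels, faceData);
        // Compare status against the tracking status codes documented with
        // VisageTracker (e.g. TRACK_STAT_OK) before reading results such as
        // head pose, feature points, gaze direction or the 3D face model
        // from faceData.

        VisageModule._free(ppixels);
    }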

Main classes

- VisageTracker: tracks faces and facial features in video sequences
- FaceData: holds the tracking results for each tracked face


Facial features detection (visageSDK.js)

The class VisageDetector detects faces and facial features in input images. For each detected face the results include the 3D head pose, the coordinates of facial feature points (e.g. chin tip, nose tip, lip corners) and a 3D face model fitted to the face. The results are returned in one or more FaceData objects, one for each detected face.
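
A rough usage sketch follows; the configuration file name, the detectFeatures() call and the FaceDataVector type are assumptions based on the sample projects, so the VisageDetector class documentation should be consulted for the exact signatures.

    // Rough sketch of still-image detection (names are assumptions; see the
    // VisageDetector documentation for the exact API).
    var detector = new VisageModule.VisageDetector("Face Detector.cfg");
    var maxFaces = 10;
    var faceDataArray = new VisageModule.FaceDataVector();
    for (var i = 0; i < maxFaces; ++i)
        faceDataArray.push_back(new VisageModule.FaceData());

    // ppixels: RGBA pixel data previously copied into the Emscripten heap,
    // as in the tracking sketch above.
    var facesFound = detector.detectFeatures(width, height, ppixels, faceDataArray, maxFaces);
    for (var i = 0; i < facesFound; ++i) {
        var face = faceDataArray.get(i);
        // head pose, feature point coordinates and the fitted 3D face model
        // are available through the FaceData members documented for that class
    }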

Main classes

- VisageDetector: detects faces and facial features in still images
- FaceData: holds the detection results for each detected face


Screen space gaze tracking (visageSDK.js)

The class VisageGazeTracker adds screen space gaze tracking on top of facial features/head tracking. The screen space gaze tracking feature estimates the gaze position (the location on the screen where the user is looking) in normalized screen coordinates. Estimations are returned as part of a FaceData object.
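
Since VisageGazeTracker extends the regular tracking, usage follows the same per-frame pattern; the snippet below only illustrates where the gaze estimate is read from, and the member names are assumptions to be verified in the FaceData documentation.

    // Sketch: the per-frame loop is the same as with VisageTracker, only the
    // tracker class differs (member names below are assumptions; see FaceData).
    var gazeTracker = new VisageModule.VisageGazeTracker("Facial Features Tracker.cfg");
    // ... run the same track() loop as in the tracking sketch above ...
    // After a successful track, the estimated on-screen gaze position is part
    // of the FaceData results, given in normalized screen coordinates
    // (e.g. faceData.gazeData.x, faceData.gazeData.y -- assumed member names).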

Main classes

- VisageGazeTracker: tracks head pose, facial features and screen space gaze
- FaceData: holds the tracking results, including the gaze estimate


Facial feature analysis (visageSDK.js)

The class VisageFaceAnalyser contains face analysis algorithms capable of estimating gender and emotion from facial images. For gender estimation it returns the estimated gender; for emotion estimation it returns the probability of each of the estimated facial emotions: anger, disgust, fear, happiness, sadness, surprise and neutral.
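
A rough sketch of the intended flow follows; the method names used here (estimateGender, estimateEmotion) are hypothetical placeholders for the calls documented in the VisageFaceAnalyser class, and only the inputs and outputs described above are taken from this page.

    // Hypothetical method names; consult the VisageFaceAnalyser documentation
    // for the actual calls and the order of the returned emotion probabilities.
    var analyser = new VisageModule.VisageFaceAnalyser();

    // width, height, ppixels and faceData come from a previous
    // VisageTracker.track() or VisageDetector.detectFeatures() call.
    var gender = analyser.estimateGender(ppixels, width, height, faceData);     // hypothetical
    var emotions = analyser.estimateEmotion(ppixels, width, height, faceData);  // hypothetical
    // emotions holds one probability per estimated emotion: anger, disgust,
    // fear, happiness, sadness, surprise and neutral.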

Main classes

- VisageFaceAnalyser: estimates gender and emotions from facial images


Face recognition (visageSDK.js)

The class VisageFaceRecognition contains a face recognition algorithm capable of measuring the similarity between human faces and recognizing a person's identity from a frontal facial image (yaw angle approximately between -20 and 20 degrees) by comparing it to previously stored faces.
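
The sketch below outlines the typical enrol-and-match flow; the method names (extractDescriptor, addDescriptor, descriptorsSimilarity) follow the native API and are assumed to have JavaScript counterparts, so the VisageFaceRecognition class documentation should be checked for the exact signatures.

    // Enrol-and-match sketch (method names and signatures are assumptions;
    // see the VisageFaceRecognition documentation).
    var recognition = new VisageModule.VisageFaceRecognition();

    // 1. Extract a descriptor for a face previously located by VisageTracker
    //    or VisageDetector (frontal pose, roughly -20 to 20 degrees yaw).
    var descriptor = recognition.extractDescriptor(faceData, width, height, ppixels);   // assumed

    // 2. Store it in the gallery under a name ...
    recognition.addDescriptor(descriptor, "person_01");                                 // assumed

    // 3. ... or compare two descriptors directly to get a similarity score
    //    (higher means more similar).
    var similarity = recognition.descriptorsSimilarity(descriptor, otherDescriptor);    // assumed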

Main classes

- VisageFaceRecognition: measures similarity between faces and recognizes identities


Liveness (visageSDK.js)

The liveness system is used to differentiate a live person in the video stream from a still image. It is used in combination with face recognition to verify that the person is actually present in front of the camera. This is accomplished by prompting the person to perform a specific set of facial actions and verifying that the actions have actually been performed. The system supports the following actions: eyebrow raise, blink and smile.
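
For illustration only, the outline below shows how such an action-verification loop could be driven from tracking results; the action class name (VisageLivenessBlink) and the update() call are assumptions not stated on this page and should be verified against the liveness class documentation.

    // Illustrative outline only (class and method names are assumptions).
    var blinkAction = new VisageModule.VisageLivenessBlink();

    function onTrackedFrame(faceData) {
        // Feed each frame's tracking results to the action verifier and prompt
        // the user (e.g. "please blink") until the action is confirmed.
        var state = blinkAction.update(faceData);
        // Compare state against the action's documented "verified" state before
        // treating the person in front of the camera as live.
    }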

Main classes