new FaceData()
Face data structure, used as a container for all face tracking and detection results.
This structure is passed as a parameter to the Tracker.track() or Detector.detectFeatures() method, which copies the latest tracking or detection results into it.
Depending on the tracking/detection status, some members are filled with data while others are left undefined.
Note: After use, the FaceData object must be deleted to release the allocated memory. Example:
<script>
var faceData = new VisageModule.FaceData();
...
faceData.delete();
</script>
Communicating with Web Workers
It is sometimes necessary to pass FaceData information from the UI thread to a background thread (Web Worker). For example, extracting a face descriptor with the FaceRecognition API is a computationally demanding task that would block the UI thread, so it is recommended to perform face descriptor extraction in a worker thread and communicate the results back to the UI thread. Since face descriptor extraction takes FaceData information as input and the UI thread and the background worker thread do not share memory, the FaceData information must be prepared (serialized) so it can be sent via the postMessage interface. The FaceData class offers three helper method pairs for serialization/deserialization:
- FaceData.serializeJson()/FaceData.deserializeJson()
- FaceData.serializeBuffer()/FaceData.deserializeBuffer()
- FaceData.serializeAnalysis()/FaceData.deserializeAnalysis()
The FaceData.serializeJson() method returns the FaceData information as a JSON formatted string. This is the slowest method, but it can easily be used to concatenate multiple FaceData objects into a single string.
The FaceData.serializeBuffer() method returns the FaceData information as a TypedArray. This method exploits the structured clone algorithm to transfer data to a background worker thread more efficiently and therefore offers better performance.
Note that in order to fully utilize the performance boost, the TypedArray returned from the FaceData.serializeBuffer() method should first be copied to a local JavaScript TypedArray before it is sent via postMessage().
The FaceData.serializeAnalysis() method returns a compact version of the FaceData information as a TypedArray. This method works on the same principle as FaceData.serializeBuffer(), but serializes only the data relevant to face analysis (age, gender and emotion estimation, and face recognition). It is recommended to use this method when the Web Worker performs FaceAnalysis or FaceRecognition exclusively.
Example code can be found in the description of each method.
It is important to remember that when the structured clone algorithm is used to transfer data, ownership of the data is also transferred to the background worker. If the ownership of an object is transferred, it becomes unusable (neutered) in the context it was sent from and becomes available only to the worker it was sent to. Thus, when sending FaceData information to multiple web workers, be sure to clone the array for every background worker thread first. Example:
...
//Access pixel data from the canvas 2D context (canCon)
var imageData = canCon.getImageData(0, 0, mWidth, mHeight).data;
...
//Get the underlying ArrayBuffer of the pixel data
var imageDataBuffer = imageData.buffer;
//Make an independent copy of the buffer for each worker, since a transferred
//buffer becomes unusable (neutered) in this context after postMessage()
var imageDataBufferWorker1 = copy(imageDataBuffer);
var imageDataBufferWorker2 = copy(imageDataBuffer);
//send image data to web worker 1:
worker1.postMessage(
{
imageData: imageDataBufferWorker1
},
[
imageDataBufferWorker1
]);
//send image data to web worker 2:
worker2.postMessage(
{
imageData: imageDataBufferWorker2
},
[
imageDataBufferWorker2
]);
function copy(src)
{
var dst = new ArrayBuffer(src.byteLength);
new Uint8Array(dst).set(new Uint8Array(src));
return dst;
}
...
Obtaining tracking data
The tracker returns these main classes of data:
- 3D head pose
- facial expression
- gaze direction
- facial feature points
- full 3D face model
The tracker status is returned as the return value of the Tracker.track() function. The following table describes the possible states of the tracker, and lists active member variables (those that are filled with data) for each status.
TRACKER STATUS | DESCRIPTION | ACTIVE VARIABLES |
0 (TRACK_STAT_OFF) | There has been an error in reading the given image, i.e. passed image data array does not correspond to initialized buffer sizes, or there has been a licensing error. | N/A |
1 (TRACK_STAT_OK) | Tracker is tracking normally. | trackingQuality, frameRate, cameraFocus, faceTranslation, faceRotation, faceRotationApparent, faceAnimationParameters, actionUnitCount, actionUnitsUsed, actionUnits, actionUnitsNames, featurePoints3D, featurePoints3DRelative, featurePoints2D, faceModelVertexCount, faceModelVertices, faceModelVerticesProjected, faceModelTriangleCount, faceModelTriangles, faceModelTextureCoords |
2 (TRACK_STAT_RECOVERING) | Tracker has lost the face and is attempting to recover and continue tracking. If it can not recover within the time defined by the parameter recovery_timeout in the tracker configuration file, the tracker will fully re-initialize (i.e. it will assume that a new user may be present). | frameRate, cameraFocus |
3 (TRACK_STAT_INIT) | Tracker is initializing. The tracker enters this state immediately when it is started, or when it has lost the face and failed to recover (see TRACK_STAT_RECOVERING above). The initialization process is configurable through a number of parameters in the tracker configuration file. | frameRate, cameraFocus |
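For illustration, a minimal single-face sketch of reacting to the returned status. The tracker variable, the track() argument list and the onFrame() wrapper are placeholders (they depend on the SDK version and image setup); only the status codes and FaceData members documented here are used.
var faceData = new VisageModule.FaceData();
function onFrame() {
    //track() arguments (image size, pixel data, faceData, ...) omitted for brevity
    var status = tracker.track(/* width, height, pixelData, faceData, ... */);
    if (status === 1) {
        //TRACK_STAT_OK: full tracking data is available
        var r = faceData.getFaceRotation();       //pitch, yaw, roll in radians
        var t = faceData.getFaceTranslation();    //x, y, z relative to the camera
        console.log("quality: " + faceData.trackingQuality, r, t);
    } else if (status === 2) {
        //TRACK_STAT_RECOVERING: only frameRate and cameraFocus are valid
        console.log("recovering, fps: " + faceData.frameRate);
    } else if (status === 3) {
        //TRACK_STAT_INIT: tracker is (re)initializing
        console.log("initializing");
    } else {
        //TRACK_STAT_OFF: image or licensing error
        console.log("tracker off");
    }
}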
Smoothing
The tracker can apply a smoothing filter to tracking results to reduce the inevitable tracking noise. Smoothing factors are adjusted separately for different parts of the face. The smoothing settings in the supplied tracker configurations are adjusted conservatively to avoid delay in tracking response, yet provide reasonable smoothing. For further details please see the smoothing_factors parameter array in the VisageTracker Configuration Manual.
Obtaining detection data
The detector returns these main classes of data for each detected face:
- 3D head pose
- gaze direction
- eye closure
- facial feature points
- full 3D face model, textured
Detection result is returned from Detector.detectFeatures() method. The following table describes possible output from the detector and the list of active variables (those that are filled with data). All other variables are left undefined.
DETECTION RESULT | DESCRIPTION | ACTIVE VARIABLES |
0 | Detector did not find any faces in the image, the passed image data array does not correspond to the initialized buffer sizes, or there has been a licensing error. | N/A |
N > 0 | Detector detected N faces in the image. | For the first N FaceData objects in the array: cameraFocus, faceTranslation, faceRotation, faceRotationApparent, featurePoints3D, featurePoints3DRelative, featurePoints2D, faceModelVertexCount, faceModelVertices, faceModelVerticesProjected, faceModelTriangleCount, faceModelTriangles, faceModelTextureCoords. For other FaceData objects in the array: N/A |
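For illustration, a hedged sketch of consuming the detection result. The detector variable, the detectFeatures() argument list and the pre-allocated faceDataArray are placeholders; getFP()/getPos() usage follows the getFeaturePoints2D() documentation below.
var numFaces = detector.detectFeatures(/* width, height, pixelData, faceDataArray, ... */);
if (numFaces > 0) {
    //only the first numFaces entries of the array contain valid detection data
    for (var i = 0; i < numFaces; ++i) {
        var faceData = faceDataArray.get(i);
        var fp2D = faceData.getFeaturePoints2D();
        var chinTip = fp2D.getFP(2, 1);   //MPEG-4 point 2.1 (tip of the chin)
        if (chinTip.defined === 1) {
            console.log("face " + i + ": chin at " + chinTip.getPos(0) + ", " + chinTip.getPos(1));
        }
        chinTip.delete();
    }
}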
Returned data
The following sections give an overview of main classes of data that may be returned in FaceData by the tracker or the detector, and pointers to specific data members.
3D head pose
The 3D head pose consists of head translation and rotation. It is available as absolute pose of the head with respect to the camera.
The following member variables return the head pose:
Both Tracker and Detector return the 3D head pose.
Facial expression
Facial expression is available in the form of Action Units.
Action Units (AUs) are the internal representation of facial motion used by the tracker. It is therefore more accurate to use the AUs directly. It should be noted that AUs are fully configurable in the tracker configuration files (specifically, in the 3D model file, .wfm).
The following member variables return Action Units data:
Only Tracker returns the facial expression; Detector leaves these variables undefined.
Gaze direction and eye closure
Gaze direction is available in the local coordinate system of the person's face or in the global coordinate system of the camera. Eye closure is available as binary information (OPEN/CLOSED).
The following member variables return gaze direction and eye closure:
Only Tracker returns gaze direction and eye closure; Detector leaves these variables undefined.
Facial feature points
2D or 3D coordinates of facial feature points, as defined by the MPEG-4 FBA standard, are available.
3D coordinates are available in the global coordinate system or relative to the origin of the face (i.e. the point in the centre between the eyes in the input image).
Facial features are available through the following member variables:
Both VisageTracker and VisageDetector return facial feature points.
3D face model
The 3D face model is fitted in 3D to the face in the current image/video frame. The model is a single textured 3D triangle mesh. The texture of the model is the current image/video frame.
The 3D face model is fully configurable and can even be replaced by a custom model; it can also be disabled for performance reasons if not required. Please see the VisageTracker Configuration Manual for further details. The default model is illustrated in the following image:
This means that, when the model is drawn using the correct perspective, it exactly recreates the facial part of the image. The correct perspective is defined by the camera focal length, the width and height of the input image or video frame, and the model rotation and translation.
There are multiple potential uses for the face model. Some ideas include, but are not limited to:
- Draw textured model to achieve face paint or mask effect.
- Draw the 3D face model into the Z buffer to achieve correct occlusion of virtual objects by the head in AR applications.
- Use texture coordinates to cut out the face from the image.
- Draw the 3D face model from a different perspective than the one in the actual video.
- Insert the 3D face model into another video or 3D scene.
Note that the vertices of the face model may not always exactly correspond to the facial feature points obtained from tracking/detection (featurePoints3D). For applications where precise positioning of the facial feature points is required (e.g. virtual make-up), it is important to use featurePoints3D and not the face model.
The 3D face model is contained in the following members:
- faceModelVertexCount
- faceModelVertices
- faceModelTriangleCount
- faceModelTriangles
- faceModelTextureCoords
Screen space gaze position
Screen space gaze position is available if the tracker was provided with a calibration repository and the screen space gaze estimator is working in real-time mode. Otherwise the tracker returns default screen space gaze data: the default gaze position is the centre of the screen, and the default estimator state is off (ScreenSpaceGazeData.inState == 0). Please refer to the VisageGazeTracker documentation for instructions on the usage of the screen space gaze estimator.
Screen space gaze position is contained in the member gazeData.
Only the face tracker returns the screen space gaze position.
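A minimal sketch of reading the estimate, assuming (per the ScreenSpaceGazeData documentation) that it exposes normalized x, y coordinates alongside the inState flag mentioned above; those member names are assumptions to verify against that class.
if (trackerStatus === 1) {                 //gazeData is set only in TRACK_STAT_OK
    var gaze = faceData.gazeData;
    if (gaze.inState !== 0) {              //estimator calibrated and running
        //assumption: x and y are normalized screen coordinates
        console.log("gaze at " + gaze.x + ", " + gaze.y);
    }
}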
Members
-
trackingQuality :number
-
Tracking quality level.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
Estimated tracking quality level for the current frame. The value is between 0 and 1.
Type:
- number
-
frameRate :number
-
The frame rate of the tracker, in frames per second, measured over last 10 frames.
This variable is set while tracker is running, i.e. while tracking status is not TRACK_STAT_OFF. Face detector leaves this variable undefined.
Type:
- number
-
timeStamp :number
-
Time stamp of the current video frame.
This variable is set while tracker is running, i.e. while tracking status is not TRACK_STAT_OFF. Face detector leaves this variable undefined.
It returns the value passed to the timeStamp argument of the track method if it is different from -1; otherwise it returns the time, in milliseconds, measured from the moment when tracking started.
Type:
- number
-
shapeUnitCount :number
-
Number of facial Shape Units.
This variable is set while the tracker is running (i.e. while tracking status is not TRACK_STAT_OFF) or if the detector has detected a face, and only if the au_fitting_model parameter in the configuration file is set (see VisageTracker Configuration Manual for details).
Number of shape units that are defined for current face model.
Type:
- number
-
actionUnitCount :number
-
Number of facial action units.
This variable is set only while tracker is tracking (TRACK_STAT_OK) and only if au_fitting_model parameter in the configuration file is set (see VisageTracker Configuration Manual for details). Face detector leaves this variable undefined.
Number of action units that are defined for current face model.
Type:
- number
-
cameraFocus :number
-
Focal distance of the camera, as configured in the tracker/detector configuration file.
This variable is set while tracker is running (any status other than TRACK_STAT_OFF), or if the detector has detected a face.
Focal length of a pinhole camera model used as approximation for the camera used to capture the video in which tracking is performed. The value is defined as distance from the camera (pinhole) to an imaginary projection plane where the smaller dimension of the projection plane is defined as 2, and the other dimension is defined by the input image aspect ratio. Thus, for example, for a landscape input image with aspect ratio of 1.33 the imaginary projection plane has height 2 and width 2.66.
This value is used for 3D scene set-up and accurate interpretation of tracking data.
Corresponding FoV (field of view) can be calculated as follows:
fov = 2 * atan( size / (2*cameraFocus) ), where size is 2 if width is larger than height and 2*height/width otherwise.
This member corresponds to the camera_focus parameter in the tracker/detector configuration file.
Type:
- number
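A small sketch of the formula above, e.g. for configuring a perspective projection that matches the tracked scene; imageWidth and imageHeight are placeholders for the input image size.
function fovRad(cameraFocus, imageWidth, imageHeight) {
    //the projection plane's smaller dimension is 2, so "size" is 2 for landscape
    //input and 2 * height / width otherwise, as described above
    var size = (imageWidth > imageHeight) ? 2 : 2 * imageHeight / imageWidth;
    return 2 * Math.atan(size / (2 * cameraFocus));
}
//e.g. fovRad(faceData.cameraFocus, 640, 480)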
-
faceModelVertexCount :number
-
Number of vertices in the 3D face model.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face, providing that the mesh_fitting_model parameter is set in the configuration file (see VisageTracker Configuration Manual for details).
Type:
- number
-
faceModelTriangleCount :number
-
Number of triangles in the 3D face model.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face, providing that the mesh_fitting_model parameter is set in the configuration file (see VisageTracker Configuration Manual for details).
Type:
- number
-
gazeQuality :number
-
The session level gaze tracking quality.
Quality is returned as a value from 0 to 1, where 0 is the worst and 1 is the best quality. The quality is also 0 when gaze tracking is off or calibrating.
Type:
- number
-
gazeData :ScreenSpaceGazeData
-
Structure holding screen space gaze position and quality for current frame. This variable is set only while tracker is tracking (TRACK_STAT_OK). Face detector leaves this variable undefined.
Position values are dependent on estimator state. Please refer to VisageGazeTracker and ScreenSpaceGazeData documentation for more details.
Type:
- ScreenSpaceGazeData
faceScale :number
-
Scale of face bounding box expressed in pixels.
Type:
- number
Methods
-
getFaceTranslation() → {Float32Array}
-
Translation of the head from the camera.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
Translation is expressed with three coordinates x, y, z. The coordinate system is such that when looking towards the camera, the direction of x is to the left, y is up, and z points towards the viewer - see illustration below. The global origin (0,0,0) is placed at the camera. The reference point on the head is in the centre between the eyes.
If the value set for the camera focal length in the tracker configuration file corresponds to the real camera used, the returned coordinates shall be in meters; otherwise the scale of the translation values is not known, but the relative values are still correct (i.e. moving towards the camera results in smaller values of z coordinate).
Aligning 3D objects with the face
The translation, rotation and the camera focus value together form the 3D coordinate system of the head in its current position and they can be used to align 3D rendered objects with the head for AR or similar applications.
The relative facial feature coordinates (featurePoints3DRelative) can then be used to align rendered 3D objects to the specific features of the face, like putting virtual eyeglasses on the eyes. Sample projects demonstrate how to do this, including full source code.
Returns:
- Type
- Float32Array
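As an illustration of the alignment described above, a sketch assuming three.js as the renderer; the renderer choice, imageWidth/imageHeight and the object variable are assumptions, and axis or sign conventions may need adjustment for a particular scene setup.
var t = faceData.getFaceTranslation();   //x, y, z (meters if camera_focus matches the real camera)
var r = faceData.getFaceRotation();      //pitch (x), yaw (y), roll (z) in radians
//perspective camera whose field of view matches the tracker's pinhole model
//(see the cameraFocus member for the fov formula)
var size = (imageWidth > imageHeight) ? 2 : 2 * imageHeight / imageWidth;
var fovDeg = (2 * Math.atan(size / (2 * faceData.cameraFocus))) * 180 / Math.PI;
var camera = new THREE.PerspectiveCamera(fovDeg, imageWidth / imageHeight, 0.01, 100);
//place the object at the head position and apply the head rotation in y-x-z order,
//as noted in getFaceRotation()
object.position.set(t[0], t[1], t[2]);
object.rotation.order = 'YXZ';
object.rotation.set(r[0], r[1], r[2]);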
-
getFaceRotation() → {Float32Array}
-
Rotation of the head.
This variable is set only while tracker is tracking (TRACK_STAT_OK).
This is the current estimated rotation of the head, expressed with three values determining the rotations around the three axes x, y and z, in radians. The values represent the pitch, yaw and roll of the head, respectively. The zero rotation (values 0, 0, 0) corresponds to the face looking straight ahead along the camera axis. Positive values for pitch correspond to the head turning down. Positive values for yaw correspond to the head turning right in the input image. Positive values for roll correspond to the head rolling to the left in the input image, see illustration below.
Note: The order to properly apply these rotations is y-x-z.
Returns:
- Type
- Float32Array
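For renderers without a built-in Euler order setting, a sketch of composing the three angles into a 3x3 rotation matrix in the stated y-x-z order; the column-vector convention R = Rz * Rx * Ry is an assumption (transpose for row vectors).
function headRotationMatrix(r) {
    var cx = Math.cos(r[0]), sx = Math.sin(r[0]);   //pitch
    var cy = Math.cos(r[1]), sy = Math.sin(r[1]);   //yaw
    var cz = Math.cos(r[2]), sz = Math.sin(r[2]);   //roll
    var Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]];
    var Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]];
    var Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]];
    return mul3(Rz, mul3(Rx, Ry));      //apply Ry first, then Rx, then Rz
}
function mul3(a, b) {                    //3x3 matrix product
    var c = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
    for (var i = 0; i < 3; ++i)
        for (var j = 0; j < 3; ++j)
            for (var k = 0; k < 3; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}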
-
getFaceRotationApparent() → {Float32Array}
-
Rotation of the head from the camera viewpoint.
This variable is set only while tracker is tracking (TRACK_STAT_OK).
This is the current estimated apparent rotation of the head, expressed with three values determining the rotations around the three axes x, y and z, in radians. The values represent the pitch, yaw and roll of the head, respectively. The zero apparent rotation (values 0, 0, 0) corresponds to the face looking straight into the camera, i.e. a frontal face. Positive values for pitch correspond to the head turning down. Positive values for yaw correspond to the head turning right in the input image. Positive values for roll correspond to the head rolling to the left in the input image.
Note: The order to properly apply these rotations is y-x-z.
Returns:
- Type
- Float32Array
-
getFaceBoundingBox() → {VsRect}
-
Face bounding box.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
The bounding box is a rectangle determined by the x and y coordinates of its upper-left corner and by its width and height. Values are expressed in pixels.
Returns:
- structure with x, y, width and height members.
- Type
- VsRect
-
getIrisRadius() → {Float32Array}
-
Iris radius values, in pixels.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
The value with index 0 represents the iris radius of the left eye. The value with index 1 represents the iris radius of the right eye. If iris is not detected, the value is set to -1.
Returns:
- Type
- Float32Array
-
getGazeDirection() → {Float32Array}
-
Gaze direction.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
This is the current estimated gaze direction relative to the person's head. Direction is expressed with two values, x and y, in radians. Values (0, 0) correspond to the person looking straight ahead. X is the horizontal rotation, with positive values corresponding to the person looking to his/her left. Y is the vertical rotation, with positive values corresponding to the person looking down.
Returns:
- Type
- Float32Array
-
getGazeDirectionGlobal() → {Float32Array}
-
Global gaze direction, taking into account both head pose and eye rotation.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
This is the current estimated gaze direction relative to the camera axis. Direction is expressed with three values determining the rotations around the three axes x, y and z, i.e. pitch, yaw and roll. Values (0, 0, 0) correspond to the gaze direction parallel to the camera axis. Positive values for pitch correspond to gaze turning down. Positive values for yaw correspond to gaze turning right in the input image. Positive values for roll correspond to face rolling to the left in the input image, see illustration below.
The values are in radians.
The global gaze direction can be combined with eye locations to determine the line(s) of sight in the real-world coordinate system with the origin at the camera. To get eye positions use featurePoints3D and the FDP.getFP() function, e.g.:
var fp3D = trackData.getFeaturePoints3D();
var left_eye_fp = fp3D.getFP(3, 5);
var right_eye_fp = fp3D.getFP(3, 6);
var left_eye_pos = [];
var right_eye_pos = [];
if (left_eye_fp.defined === 1 && right_eye_fp.defined === 1) {
    left_eye_pos[0] = left_eye_fp.getPos(0);   //x
    left_eye_pos[1] = left_eye_fp.getPos(1);   //y
    left_eye_pos[2] = left_eye_fp.getPos(2);   //z
    right_eye_pos[0] = right_eye_fp.getPos(0); //x
    right_eye_pos[1] = right_eye_fp.getPos(1); //y
    right_eye_pos[2] = right_eye_fp.getPos(2); //z
}
left_eye_fp.delete();
right_eye_fp.delete();
Returns:
- Type
- Float32Array
-
getShapeUnits() → {Float32Array}
-
List of current values for facial Shape Units, one value for each shape unit.
This variable is set only while the tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face, and only if the au_fitting_model parameter in the configuration file is set (see VisageTracker Configuration Manual for details).
Shape units can be described as static parameters of the face that are specific for each individual (e.g. shape of the nose).
The shape units used by the tracker and detector are defined in the 3D face model file, specified by the au_fitting_model in the configuration file (see the VisageTracker Configuration Manual for details).
Returns:
- Type
- Float32Array
-
getActionUnitsUsed() → {Int32Array}
-
Used facial Action Units.
This variable is set only while tracker is tracking (TRACK_STAT_OK) and only if au_fitting_model parameter in the configuration file is set (see VisageTracker Configuration Manual for details). Detector leaves this variable undefined.
List of values, one for each action unit, to determine if specific action unit is actually used in the current tracker configuration. Values are as follows: 1 if action unit is used, 0 if action unit is not used.
Returns:
- Type
- Int32Array
-
getActionUnits() → {Float32Array}
-
List of current values for facial action units, one value for each action unit.
This variable is set only while tracker is tracking (TRACK_STAT_OK) and only if au_fitting_model parameter in the configuration file is set (see VisageTracker Configuration Manual for details). Detector leaves this variable undefined.
The action units used by the tracker are defined in the 3D face model file (currently candide3.wfm; the tracker can be configured to use another file; see the VisageTracker Configuration Manual for details). Furthermore, the tracker configuration file defines the names of the action units, and these names can be accessed through actionUnitsNames. Please refer to section 2.3 Action Units in the VisageTracker Configuration Manual for the full list of action units for each tracker configuration.
Returns:
- Type
- Float32Array
-
getActionUnitsNames() → {VectorString}
-
List of facial Action Units names.
This variable is set only while tracker is tracking (TRACK_STAT_OK) and only if au_fitting_model parameter in the configuration file is set (see VisageTracker Configuration Manual for details). Face detector leaves this variable undefined.
NOTE: After the end of use, the obtained list needs to be deleted to release the allocated memory. Example:
var names = faceData.getActionUnitsNames();
for (var i = 0; i < faceData.actionUnitCount; ++i) {
    console.log(names.get(i));
}
names.delete();
Returns:
- Type
- VectorString
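Building on the example above, a sketch that lists the active Action Units by name and current value, assuming the tracker status is TRACK_STAT_OK; only the members documented here are used.
var auValues = faceData.getActionUnits();
var auUsed = faceData.getActionUnitsUsed();
var auNames = faceData.getActionUnitsNames();
for (var i = 0; i < faceData.actionUnitCount; ++i) {
    if (auUsed[i] === 1) {
        console.log(auNames.get(i) + ": " + auValues[i]);
    }
}
auNames.delete();   //release the allocated list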
-
getFeaturePoints3D() → {FDP}
-
Facial feature points (global 3D coordinates).
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
The coordinate system is such that when looking towards the camera, the direction of x is to the left, y is up, and z points towards the viewer. The global origin (0,0,0) is placed at the camera, see illustration.
If the value set for the camera focal length in the tracker/detector configuration file corresponds to the real camera used, the returned coordinates shall be in meters; otherwise the scale is not known, but the relative values are still correct (i.e. moving towards the camera results in smaller values of z coordinate).
The feature points are identified according to the MPEG-4 standard (with extension for additional points), so each feature point is identified by its group and index. For example, the tip of the chin belongs to group 2 and its index is 1, so this point is identified as point 2.1. The identification of all feature points is illustrated in the following image:
Certain feature points, like the ones on the tongue and teeth, can not be reliably detected so they are not returned and their coordinates are always set to zero. These points are: 6.1, 6.2, 6.3, 6.4, 9.8, 9.9, 9.10, 9.11, 11.4, 11.5, 11.6.
Several other points are estimated, rather than accurately detected, due to their specific locations. These points are: 2.10, 2.11, 2.12, 2.13, 2.14, 5.1, 5.2, 5.3, 5.4, 7.1, 9.1, 9.2, 9.6, 9.7, 9.12, 9.13, 9.14, 11.1, 11.2, 11.3, 12.1.
Ears' points - group 10 (points 10.1 - 10.24) can be either set to zero, accurately detected or estimated:
- zero:
- refine_ears configuration parameter turned off, 3D model with ears vertices and points mapping file for group 10 NOT provided
- refine_ears configuration parameter turned on, 3D model with ears vertices and points mapping file for group 10 NOT provided
- detected:
- refine_ears configuration parameter turned on, 3D model with ears vertices and points mapping file for group 10 provided
- estimated:
- refine_ears configuration parameter turned off, 3D model with ears vertices and points mapping file for group 10 provided
Face contour - group 13 and group 15. Face contour is available in two versions: the visible contour (points 13.1 - 13.17) and the physical contour (points 15.1 - 15.17). For more details regarding face contour please refer to the documentation of FDP class.
Nose contour - group 14, points: 14.21, 14.22, 14.23, 14.24, 14.25.
The resulting feature point coordinates are returned in form of an FDP object. This is a container class used for storage of MPEG-4 feature points. It provides functions to access each feature point by its group and index and to read its coordinates.
Returns:
- Type
- FDP
-
getFeaturePoints3DRelative() → {FDP}
-
Facial feature points (3D coordinates relative to the origin of the face).
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
The coordinates are in the local coordinate system of the face, with the origin (0,0,0) placed at the centre between the eyes. The x-axis points laterally towards the side of the face, the y-axis points up and the z-axis points into the eye - see illustration below.
The feature points are identified according to the MPEG-4 standard (with extension for additional points), so each feature point is identified by its group and index. For example, the tip of the chin belongs to group 2 and its index is 1, so this point is identified as point 2.1. The identification of all feature points is illustrated in the following image:
Certain feature points, like the ones on the tongue and teeth, can not be reliably detected so they are not returned and their coordinates are always set to zero. These points are: 6.1, 6.2, 6.3, 6.4, 9.8, 9.9, 9.10, 9.11, 11.4, 11.5, 11.6.
Several other points are estimated, rather than accurately detected, due to their specific locations. These points are: 2.10, 2.11, 2.12, 2.13, 2.14, 5.1, 5.2, 5.3, 5.4, 7.1, 9.1, 9.2, 9.6, 9.7, 9.12, 9.13, 9.14, 11.1, 11.2, 11.3, 12.1.
Ears' points - group 10 (points 10.1 - 10.24) can be either set to zero, accurately detected or estimated:
- zero:
- refine_ears configuration parameter turned off, 3D model with ears vertices and points mapping file for group 10 NOT provided
- refine_ears configuration parameter turned on, 3D model with ears vertices and points mapping file for group 10 NOT provided
- detected:
- refine_ears configuration parameter turned on, 3D model with ears vertices and points mapping file for group 10 provided
- estimated:
- refine_ears configuration parameter turned off, 3D model with ears vertices and points mapping file for group 10 provided
Face contour - group 13 and group 15. Face contour is available in two versions: the visible contour (points 13.1 - 13.17) and the physical contour (points 15.1 - 15.17). For more details regarding face contour please refer to the documentation of FDP class.
Nose contour - group 14, points: 14.21, 14.22, 14.23, 14.24, 14.25.
The resulting feature point coordinates are returned in form of an FDP object. This is a container class used for storage of MPEG-4 feature points. It provides functions to access each feature point by its group and index and to read its coordinates.
Returns:
- Type
- FDP
-
getFeaturePoints2D() → {FDP}
-
Facial feature points (2D coordinates).
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
The 2D feature point coordinates are normalised to image size so that the lower left corner of the image has coordinates 0,0 and upper right corner 1,1.
The feature points are identified according to the MPEG-4 standard, so each feature point is identified by its group and index. For example, the tip of the chin belongs to group 2 and its index is 1, so this point is identified as point 2.1. The identification of all feature points is illustrated in the following image:
Certain feature points, like the ones on the tongue and teeth, can not be reliably detected so they are not returned and their coordinates are always set to zero. These points are: 6.1, 6.2, 6.3, 6.4, 9.8, 9.9, 9.10, 9.11, 11.4, 11.5, 11.6.
Several other points are estimated, rather than accurately detected, due to their specific locations. These points are: 2.10, 2.11, 2.12, 2.13, 2.14, 5.1, 5.2, 5.3, 5.4, 7.1, 9.1, 9.2, 9.6, 9.7, 9.12, 9.13, 9.14, 11.1, 11.2, 11.3, 12.1.
Ears' points - group 10 (points 10.1 - 10.24) can be either set to zero, accurately detected or estimated:
- zero:
- refine_ears configuration parameter turned off, 3D model with ears vertices and points mapping file for group 10 NOT provided
- refine_ears configuration parameter turned on, 3D model with ears vertices and points mapping file for group 10 NOT provided
- detected:
- refine_ears configuration parameter turned on, 3D model with ears vertices and points mapping file for group 10 provided
- estimated:
- refine_ears configuration parameter turned off, 3D model with ears vertices and points mapping file for group 10 provided
Face contour - group 13 and group 15. Face contour is available in two versions: the visible contour (points 13.1 - 13.17) and the physical contour (points 15.1 - 15.17). For more details regarding face contour please refer to the documentation of FDP class.
Nose contour - group 14, points: 14.21, 14.22, 14.23, 14.24, 14.25.
The resulting feature point coordinates are returned in form of an FDP object. This is a container class used for storage of MPEG-4 feature points. It provides functions to access each feature point by its group and index and to read its coordinates. Note that FDP stores 3D points and in the case of 2D feature points only the x and y coordinates of each point are used.
Returns:
- Type
- FDP
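As an illustration, a sketch that draws the physical face contour (group 15) on a 2D canvas; ctx, canvasWidth and canvasHeight are placeholders for an existing 2D rendering context and its size, and the y coordinate is flipped because the points use a bottom-left origin.
var fp2D = faceData.getFeaturePoints2D();
ctx.beginPath();
for (var index = 1; index <= 17; ++index) {
    var fp = fp2D.getFP(15, index);                  //points 15.1 - 15.17
    if (fp.defined === 1) {
        var x = fp.getPos(0) * canvasWidth;
        var y = (1 - fp.getPos(1)) * canvasHeight;   //flip y for canvas coordinates
        if (index === 1) ctx.moveTo(x, y); else ctx.lineTo(x, y);
    }
    fp.delete();
}
ctx.stroke();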
-
getFaceModelVertices() → {Float32Array}
-
List of vertex coordinates of the 3D face model.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face, providing that the mesh_fitting_model parameter is set in the configuration file (see VisageTracker Configuration Manual for details).
The format of the list is x, y, z coordinate for each vertex.
The coordinates are in the local coordinate system of the face, with the origin (0,0,0) placed at the centre point between the eyes. The x-axis points laterally towards the side of the face, y-axis points up and z-axis points into the eye - see illustration below.
To transform the coordinates into the coordinate system of the camera, use faceTranslation and faceRotation.
If the value set for the camera focal length in the tracker configuration file corresponds to the real camera used, the scale of the coordinates shall be in meters; otherwise the scale is not known.
Returns:
- Type
- Float32Array
-
getFaceModelVerticesProjected() → {Float32Array}
-
List of projected (image space) vertex coordinates of the 3D face model.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face, providing that the mesh_fitting_model parameter is set in the configuration file (see VisageTracker Configuration Manual for details).
The format of the list is x, y coordinate for each vertex. The 2D coordinates are normalised to image size so that the lower left corner of the image has coordinates 0,0 and upper right corner 1,1.
Returns:
- Type
- Float32Array
-
getFaceModelTriangles() → {Int32Array}
-
Triangles list for the 3D face model.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face, providing that the mesh_fitting_model parameter is set in the configuration file (see VisageTracker Configuration Manual for details).
Each triangle is described by three indices into the list of vertices (counter-clockwise convention is used for normals direction).
Returns:
- Type
- Int32Array
-
getFaceModelTextureCoords() → {Float32Array}
-
Texture coordinates for the 3D face model.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face, providing that the mesh_fitting_model parameter is set in the configuration file (see VisageTracker Configuration Manual for details).
A pair of u, v coordinates for each vertex. When FaceData is obtained from the tracker, the texture image is the current video frame. When FaceData is obtained from detector, the texture image is the input image of the detector.
Returns:
- Type
- Float32Array
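Taken together, the vertex, triangle and texture coordinate members describe a complete textured mesh; a sketch of flattening them into per-triangle arrays, e.g. for uploading to WebGL buffers (assumes the mesh_fitting_model data is available, i.e. the members above are set).
var vertices = faceData.getFaceModelVertices();        //x, y, z per vertex
var triangles = faceData.getFaceModelTriangles();      //3 vertex indices per triangle
var texCoords = faceData.getFaceModelTextureCoords();  //u, v per vertex
var positions = new Float32Array(faceData.faceModelTriangleCount * 9);
var uvs = new Float32Array(faceData.faceModelTriangleCount * 6);
for (var t = 0; t < faceData.faceModelTriangleCount; ++t) {
    for (var c = 0; c < 3; ++c) {
        var v = triangles[3 * t + c];
        positions.set([vertices[3 * v], vertices[3 * v + 1], vertices[3 * v + 2]], 9 * t + 3 * c);
        uvs.set([texCoords[2 * v], texCoords[2 * v + 1]], 6 * t + 2 * c);
    }
}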
-
getFaceModelTextureCoordsStatic() → {Float32Array}
-
Static texture coordinates of the mesh from the configuration file parameter mesh_fitting_model. They can be used to apply textures, based on the texture template for the unwrapped mesh, to the face model. The texture template for these coordinates is provided in jk_300_textureTemplate.png.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face, providing that the mesh_fitting_model parameter is set in the configuration file (see VisageTracker Configuration Manual for details).
A pair of u, v coordinates for each vertex.
Returns:
- Type
- Float32Array
-
getEyeClosure() → {Float32Array}
-
Discrete eye closure value.
This variable is set only while tracker is tracking (TRACK_STAT_OK) or if the detector has detected a face.
Index 0 represents the closure of the left eye. Index 1 represents the closure of the right eye. A value of 1 represents an open eye. A value of 0 represents a closed eye.
Returns:
- Type
- Float32Array
-
serializeJson() → {String}
-
Converts the FaceData object into a string formatted as JSON data. This method allows serializing multiple FaceData objects into one string array (an example can be found in www/Samples/ShowcaseDemo/ShowcaseDemo.html).
NOTE: Typically used for transferring FaceData information from the main thread to a web worker and vice versa. Sample usage - sending data from the UI thread:
// ... create a face data array ...
// ... initialize and call VisageTracker/VisageFeaturesDetector on the image ...
var faceDataJson = "";
for (var i = 0; i < numOfFaces; ++i) {
    faceDataJson += faceDataArray.get(i).serializeJson() + "||";
}
Returns:
- Type
- String
-
deserializeJson(FaceDataJson) → {boolean}
-
Reads a JSON formatted string containing FaceData members' values (obtained by the serializeJson() method) and populates a FaceData object.
NOTE: Typically used when transferring FaceData information from the main thread to a web worker and vice versa.
Sample usage - receiving data in a web worker:
var faceDataStringArray = msg.data.inFaceData.split("||", maxFacesDetector);
for (var i = 0; i < numOfFaces; i++) {
    m_faceDataArray.get(i).deserializeJson(faceDataStringArray[i]);
}
Parameters:
Name | Type | Description
FaceDataJson | string | FaceData in the form of a JSON string
Returns:
true on success, false on failure.
- Type
- boolean
-
serializeAnalysis() → {Float32Array}
-
Converts the data from a FaceData object required for face analysis (age, gender and emotion estimation, and face recognition) into a TypedArray.
NOTE: Typically used for transferring FaceData information from the main thread to a web worker and vice versa. To increase postMessage() performance, the obtained buffer should be copied from native to JavaScript memory.
Sample usage - sending data from the UI thread:
// ... create a face data array ...
// ... initialize and call VisageTracker/VisageFeaturesDetector on the image ...
//serialize the FaceData object to a TypedArray (Float32Array)
var faceDataBuffer = faceDataArray.get(0).serializeAnalysis();
//copy the buffer from native to JavaScript memory to increase postMessage() performance
faceDataBufferJS = new Float32Array(faceDataBuffer);
Returns:
- Type
- Float32Array
-
deserializeAnalysis(Buffer)
-
Converts a TypedArray containing FaceData members' values (obtained by the serializeAnalysis() method) into a FaceData object in which only the members related to face analysis (age, gender and emotion estimation, and face recognition) are assigned values, while the other members are not updated.
NOTE: Typically used for transferring FaceData information from the main thread to a web worker and vice versa. Sample usage - receiving data in a web worker:
//obtain FaceData in an ArrayBuffer
faceDataBuffer = msg.data.inFaceData;
//create a TypedArray object from the ArrayBuffer
faceDataBufferFloatArray = new Float32Array(faceDataBuffer);
m_faceDataArray.get(0).deserializeAnalysis(faceDataBufferFloatArray);
Parameters:
Name | Type | Description
Buffer | Float32Array | FaceData in the form of a Float32Array TypedArray.
-
serializeBuffer() → {Float32Array}
-
Converts FaceData object into a TypedArray.
NOTE: Typically used for transferring FaceData information from the main thread to a web worker and vice versa. To increase postMessage() performance, the obtained buffer should be copied from native to JavaScript memory. Sample usage - sending data from the UI thread:
// ... create a face data array ...
// ... initialize and call VisageTracker/VisageFeaturesDetector on the image ...
//serialize the FaceData object to a TypedArray (Float32Array)
var faceDataBuffer = faceDataArray.get(0).serializeBuffer();
//copy the buffer from native to JavaScript memory to increase postMessage() performance
faceDataBufferJS = new Float32Array(faceDataBuffer);
Returns:
- Type
- Float32Array
-
deserializeBuffer(Buffer)
-
Converts a TypedArray containing FaceData members' values (obtained by the serializeBuffer() method) into a FaceData object.
NOTE: Typically used for transferring FaceData information from the main thread to a web worker and vice versa. Sample usage - receiving data in a web worker:
//obtain FaceData in an ArrayBuffer
faceDataBuffer = msg.data.inFaceData;
//create a TypedArray object from the ArrayBuffer
faceDataBufferFloatArray = new Float32Array(faceDataBuffer);
m_faceDataArray.get(0).deserializeBuffer(faceDataBufferFloatArray);
Parameters:
Name | Type | Description
Buffer | Float32Array | FaceData in the form of a Float32Array TypedArray.