new VisageTracker(configurationName)
VisageTracker is a face tracker capable of tracking the head pose, facial features and gaze for multiple faces in video coming from a video file, camera or other sources.
Frames (images) need to be passed sequentially to the track() method, which immediately returns results for the given frame.
The tracker offers the following outputs, available through FaceData:
- 3D head pose
- facial expression
- gaze information
- eye closure
- iris radius
- facial feature points
- full 3D face model, textured
Note: After use, the VisageTracker object needs to be deleted to release the allocated memory. Example:
<script>
m_Tracker = new VisageModule.VisageTracker("../../lib/Facial Features Tracker.cfg");
...
m_Tracker.delete();
</script>
Dependencies
VisageTracker requires data files, configuration files and the license key file to be preloaded to the virtual file system. Data and configuration files can be found in the www/lib folder.
Data files
Main tracking data is bundled in the visageSDK.data file and loaded using the visageSDK.js script. Additional data for the ear tracking feature is provided in visageEarsRefinerData.data, along with the visageEarsRefinerData.js loader script.
Changing the location of data files
By default, loader scripts expect the .data files to be in the same location as the application's main html file, while visageSDK.wasm is expected to be in the same location as the visageSDK.js library file. However, the location of the .data and .wasm files can be changed.
The code example below shows how to implement the locateFile function and how to set it as an attribute of the VisageModule object.
Configuration and license key files
Configuration files and the license key file are preloaded using VisageModule's API function assigned to the preRun attribute:
VisageModule.FS_createPreloadedFile(parent, name, url, canRead, canWrite)
where parent and name are the path on the virtual file system and the name of the file, respectively.
visage|SDK initialization order
The order in which the VisageModule is declared and library and data scripts are included is important.
- First, the VisageModule object is declared, including preloading of the configuration and license files and, optionally, changing the location of the data files,
- then the visageSDK.js library script is included, and
- last, the external data loader script is included.
Sample usage - changing the data files location and the script inclusion order:
<script>
var licenseName = "lic_web.vlc";
var licenseURL = "lic_web.vlc";

// Return the path to the .data files, relative to the application's main html file
var locateFile = function(dataFileName) {
    var relativePath = "../../lib/" + dataFileName;
    return relativePath;
};

var VisageModule = {
    locateFile: locateFile,
    preRun: [function() {
        VisageModule.FS_createPreloadedFile('/', 'Head Tracker.cfg', "../../lib/Head Tracker.cfg", true, false);
        VisageModule.FS_createPreloadedFile('/', 'NeuralNet.cfg', "../../lib/NeuralNet.cfg", true, false);
        VisageModule.FS_createPreloadedFile('/', licenseName, licenseURL, true, false, function(){ }, function(){ alert("Loading License Failed!") });
    }],
    onRuntimeInitialized: onModuleInitialized
};
</script>
<script src="../../lib/visageSDK.js"> </script>
Configuring VisageTracker
The tracker is fully configurable through the comprehensive tracker configuration files provided in visage|SDK and through the VisageConfiguration class, allowing the tracker to be customized in terms of performance, quality and other options. The configuration files are intended to be used for tracker initialization, while the VisageConfiguration class allows specific configuration parameters to be changed at runtime.
visage|SDK contains optimal configurations for common uses such as head tracking, facial features tracking and ear tracking.
The VisageTracker Configuration Manual (later in text referred to as VTCM) provides the list of available configurations and full detail on all available configuration options.
Specific configuration parameters are used to enable features such as:
- ear tracking
- pupil refinement
- landmarks refinement
- smoothing filter
Ear tracking
Ear tracking includes tracking of 24 additional points (12 points per ear). A detailed illustration of the points' locations can be found in the description of the featurePoints2D member. The ears' feature points are part of group 10 (10.1 - 10.24). Tracking the ears' points requires a 3D model with defined ears vertices, as well as a corresponding points mapping file that includes definitions for group 10. visage|SDK contains examples of such model files within visageEarsRefinerData.data: vft/fm/jk_300_wEars.wfm and vft/fm/jk_300_wEars.fdp. For the list of the model's vertices and triangles see chapter 2.3.2.1 The jk_300_wEars of VTCM.
A set of three configuration parameters is used to configure ear tracking:
- refine_ears
- mesh_fitting_model and mesh_fitting_fdp if fine 3D mesh is enabled, otherwise pose_fitting_model and pose_fitting_fdp
- smoothing_factors 'ears' group (smoothing_factors[7])
The external data file needed for the ear tracking feature is bundled in:
- visageEarsRefinerData.data (along with the loading script visageEarsRefinerData.js)
Pupil refinement
Pupil refinement improves the precision of pupil center-point detection and provides iris radius information. See process_eyes in chapter 2.1. Configuration parameters of VTCM.
Landmarks refinement
Landmarks refinement improves tracking accuracy and robustness and minimizes tracking jitter, at the cost of reduced tracking speed (performance). The refine_landmarks configuration parameter is used to configure the tracking accuracy. See refine_landmarks in chapter 2.1. Configuration parameters of VTCM.
Smoothing filter
The tracker can apply a smoothing filter to tracking results to reduce the inevitable tracking noise. Smoothing factors are adjusted separately for different parts of the face. The smoothing settings in the supplied tracker configurations are adjusted conservatively to achieve optimal balance between smoothing and delay in tracking response for a general use case. See smoothing_factors in chapter 2.1. Configuration parameters of VTCM.
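The idea behind such a filter can be illustrated with a simple exponentially weighted average (a conceptual sketch in plain JavaScript; this is not visage|SDK's actual filter, and the factor value is illustrative):

```javascript
// Exponentially weighted smoothing: a higher factor gives smoother output
// but a longer delay before the output catches up with the raw signal.
function smooth(previous, current, factor) {
    return factor * previous + (1 - factor) * current;
}

// Feed a constant raw signal of 1 into the filter, starting from 0:
var smoothed = 0;
[1, 1, 1, 1].forEach(function (raw) {
    smoothed = smooth(smoothed, raw, 0.5);
});
console.log(smoothed); // 0.9375 - converging towards the raw value
```

Per-part smoothing (the smoothing_factors groups) amounts to applying different factor values to different subsets of the tracking results.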
Parameters:
Name | Type | Description
---|---|---
configurationName | string | The name of the tracker configuration file (.cfg); default configuration files are provided in the lib folder; for further details see VTCM.
Methods
track(frameWidth, frameHeight, p_imageData, faceDataArray, format, origin, widthStep, timeStamp, maxFaces) → {Int32Array}
Performs face tracking in the given image and returns tracking results and status. This function should be called repeatedly on a series of images in order to perform continuous tracking.
If the tracker needs to be initialized, this will be done automatically before tracking is performed on the given image. Initialization means loading the tracker configuration file, required data files and allocating various data buffers to the given image size. This operation may take several seconds. This happens in the following cases:
- In the first frame (first call to the VisageTracker.track() function).
- When frameWidth or frameHeight are changed, i.e. when they are different from the ones used in the last call to the VisageTracker.track() function.
- If the setTrackerConfiguration() function was called after the last call to the VisageTracker.track() function.
- When maxFaces is changed, i.e. when it is different from the one used in the last call to the track() function.
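The cases above can be sketched as a simple check over the arguments of consecutive track() calls (a conceptual sketch in plain JavaScript, not the SDK's internals):

```javascript
// Conceptual sketch: decide whether track() would (re)initialize on this call.
function makeInitChecker() {
    var last = null; // arguments seen in the previous call
    return function needsInit(frameWidth, frameHeight, maxFaces, configChanged) {
        var first = (last === null);
        var changed = !first && (last.frameWidth !== frameWidth ||
                                 last.frameHeight !== frameHeight ||
                                 last.maxFaces !== maxFaces);
        last = { frameWidth: frameWidth, frameHeight: frameHeight, maxFaces: maxFaces };
        return first || changed || configChanged;
    };
}

var needsInit = makeInitChecker();
console.log(needsInit(640, 480, 1, false)); // true  - first frame
console.log(needsInit(640, 480, 1, false)); // false - nothing changed
console.log(needsInit(320, 240, 1, false)); // true  - frame size changed
console.log(needsInit(320, 240, 2, false)); // true  - maxFaces changed
```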
Sample usage:
var m_Tracker, faceData, faceDataArray, frameWidth, frameHeight, ppixels, pixels;

function initialize(){
    //Initialize licensing with the obtained license key file
    //It is imperative that the initializeLicenseManager method is called before the constructor in order for licensing to work
    VisageModule.initializeLicenseManager("xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx.vlc");
    //Instantiate the tracker object
    m_Tracker = new VisageModule.VisageTracker("../../lib/Facial Features Tracker.cfg");
    //Instantiate the face data object
    faceDataArray = new VisageModule.FaceDataVector();
    faceData = new VisageModule.FaceData();
    faceDataArray.push_back(faceData);
    frameWidth = canvas.width;
    frameHeight = canvas.height;
    //Allocate memory for image data
    ppixels = VisageModule._malloc(frameWidth*frameHeight*4);
    //Create a view to the memory
    pixels = new Uint8ClampedArray(VisageModule.HEAPU8.buffer, ppixels, frameWidth*frameHeight*4);
}

function onEveryFrame(){
    //Obtain the image pixel data
    var imageData = canvas.getContext('2d').getImageData(0, 0, frameWidth, frameHeight).data;
    //...Fill pixels with image data
    //Call the tracking method of the tracker object with frame size, image pixel data and the face data array
    var trackerStatus = [];
    trackerStatus = m_Tracker.track(
        frameWidth, frameHeight, ppixels, faceDataArray,
        VisageModule.VisageTrackerImageFormat.VISAGE_FRAMEGRABBER_FMT_RGBA.value,
        VisageModule.VisageTrackerOrigin.VISAGE_FRAMEGRABBER_ORIGIN_TL.value
    );
    //Based on the tracker status, use the results located in the face data object instance
    if (trackerStatus.get(0) === VisageModule.VisageTrackerStatus.TRACK_STAT_OK.value){
        drawSomething(faceDataArray.get(0));
    }
}
The tracker results are returned in faceDataArray.
Parameters:

Name | Type | Argument | Default | Description
---|---|---|---|---
frameWidth | number | | | Width of the frame.
frameHeight | number | | | Height of the frame.
p_imageData | number | | | Pointer to image pixel data; the size of the array must correspond to frameWidth and frameHeight.
faceDataArray | FaceDataVector | | | Array of FaceData objects that will receive the tracking results. The size of faceDataArray is equal to the maxFaces parameter.
format | number | <optional> | VisageModule.VISAGE_FRAMEGRABBER_FMT_RGB | Format of input images passed in p_imageData. It cannot change during tracking. Format can be one of the values listed below.
origin | number | <optional> | VisageModule.VISAGE_FRAMEGRABBER_ORIGIN_TL | No longer used; the passed value has no effect on this function. The parameter is kept to avoid API changes.
widthStep | number | <optional> | 0 | Width of the image data buffer, in bytes.
timeStamp | number | <optional> | -1 | The timestamp of the input frame in milliseconds. The passed value will be returned with the tracking data for that frame (FaceData.timeStamp). Alternatively, the value -1 can be passed, in which case the tracker will return the time, in milliseconds, measured from the moment when tracking started.
maxFaces | number | <optional> | 1 | Maximum number of faces that will be tracked. Increasing this parameter will reduce tracking speed.

Format can be one of the following:
- VisageModule.VISAGE_FRAMEGRABBER_FMT_RGB: each pixel of the image is represented by three bytes representing red, green and blue channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_BGR: each pixel of the image is represented by three bytes representing blue, green and red channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_RGBA: each pixel of the image is represented by four bytes representing red, green, blue and alpha (ignored) channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_BGRA: each pixel of the image is represented by four bytes representing blue, green, red and alpha (ignored) channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_LUMINANCE: each pixel of the image is represented by one byte representing the luminance (gray level) of the image.

Returns:
array of tracking statuses for each of the tracked faces - see FaceData for more details.
- Type
- Int32Array
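The required size of the buffer that p_imageData points to follows from the chosen format's bytes per pixel (assuming tightly packed rows, i.e. widthStep left at 0). A small stand-alone sketch in plain JavaScript:

```javascript
// Bytes per pixel for each of the input image formats listed above
var bytesPerPixel = {
    RGB: 3,
    BGR: 3,
    RGBA: 4,
    BGRA: 4,
    LUMINANCE: 1
};

// Size, in bytes, of a tightly packed pixel buffer for the given frame
function requiredBufferSize(frameWidth, frameHeight, format) {
    return frameWidth * frameHeight * bytesPerPixel[format];
}

console.log(requiredBufferSize(640, 480, "RGBA"));      // 1228800
console.log(requiredBufferSize(640, 480, "LUMINANCE")); // 307200
```

This is the same arithmetic as the width*height*4 allocation used for RGBA input in the tracking sample.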
setTrackerConfiguration(trackerConfigFile, au_fitting_disabled, mesh_fitting_disabled)
Sets the tracker configuration file name.
The tracker configuration file name and other configuration parameters are set and will be used for the next tracking session (i.e. when track() is called). Default configuration files (.cfg) are provided in the www/lib folder. Please refer to the VisageTracker Configuration Manual for further details on using the configuration files and all configurable options.
Parameters:

Name | Type | Argument | Default | Description
---|---|---|---|---
trackerConfigFile | string | | | Name of the tracker configuration file.
au_fitting_disabled | boolean | <optional> | false | Disables the use of the 3D model used to estimate action units (au_fitting_model configuration parameter).
mesh_fitting_disabled | boolean | <optional> | false | Disables the use of the fine 3D mesh (mesh_fitting_model configuration parameter).
setConfiguration(configuration)
Sets the tracking configuration.
The tracker configuration object is set and will be used for the next tracking session (i.e. when track() is called).
Parameters:

Name | Type | Description
---|---|---
configuration | VisageConfiguration | Configuration object obtained by calling the getConfiguration() function.
getConfiguration() → {VisageConfiguration}
Returns the tracking configuration.
Returns:
- VisageConfiguration object with the values currently used by the tracker.
- Type
- VisageConfiguration
setIPD(IPD)
Sets the inter pupillary distance.
Inter pupillary distance (IPD) is used by the tracker to estimate the distance of the face from the camera. By default, IPD is set to 0.065 (65 millimetres), which is considered average. If the actual IPD of the tracked person is known, this function can be used to set the IPD. As a result, the calculated distance from the camera will be accurate (as long as the camera focal length is also set correctly). This is important for applications that require accurate distance. For example, in Augmented Reality applications objects such as virtual eyeglasses can be rendered at the appropriate distance and will thus appear in the image with real-life scale.
Parameters:

Name | Type | Description
---|---|---
IPD | number | The inter pupillary distance (IPD) in meters.

See: getIPD()
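The role of IPD in distance estimation can be illustrated with the pinhole camera model (an illustration of the principle in plain JavaScript, not the SDK's actual computation; the focal length value is illustrative):

```javascript
// Pinhole camera model: an object of real size S that appears with size s
// (in the same normalized units as the focal length f) lies at distance d = f * S / s.
function estimateDistance(focalLength, ipdMeters, ipdInImage) {
    return focalLength * ipdMeters / ipdInImage;
}

// With the default IPD of 0.065 m, halving the measured image IPD doubles the distance:
console.log(estimateDistance(1.0, 0.065, 0.1));  // ~0.65 m
console.log(estimateDistance(1.0, 0.065, 0.05)); // ~1.3 m - twice as far
```

This is why an incorrect IPD (or focal length) scales the estimated distance proportionally: a person with a larger real IPD than the configured value appears closer than they are.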
getIPD() → {number}
Returns the current inter pupillary distance (IPD) setting.
The IPD setting is used by the tracker to estimate the distance of the face from the camera. See setIPD() for further details.
Returns:
current setting of the inter pupillary distance (IPD) in meters.
- Type
- number
reset()
Resets tracking.
Resets the tracker. The tracker will reinitialize with the next call to the track() function.
stop()
Stops the tracking.
- Deprecated: