Class: VisageGazeTracker

VisageGazeTracker

new VisageGazeTracker(configurationName)

VisageGazeTracker extends VisageTracker, adding screen space gaze tracking on top of facial features/head tracking.

Note: When no longer needed, the VisageGazeTracker object must be deleted to release the allocated memory. Example:

<script>
m_Tracker = new VisageModule.VisageGazeTracker("Facial Features Tracker.cfg");
...
m_Tracker.delete();
</script>


For information about using facial/head tracking refer to VisageTracker.

Screen Space Gaze Tracking

The screen space gaze tracking feature estimates the gaze position (the location on the screen where the user is looking) in normalized screen coordinates. Screen space gaze tracking works in two phases: calibration and estimation.

In the calibration phase, the system is calibrated for gaze estimation by passing calibration data to the tracker. The calibration data consists of a series of points displayed on the screen, which the user looks at one by one. During this phase the tracker collects each calibration point together with the matching tracking data. After all calibration points have been passed to the tracker, the tracker calibrates the gaze tracking system and switches to the estimation phase.

In the estimation phase the tracker estimates the gaze location in screen space coordinates and returns it, for each frame, in the ScreenSpaceGazeData.x and ScreenSpaceGazeData.y members of the FaceData.gazeData object.

Screen space gaze tracking in HTML5 works only in online mode (some other versions of visage|SDK also include an offline mode). Online mode is used when tracking in real time from a camera. It is initialized by calling the initOnlineGazeCalibration() method, which prepares the tracker for real time gaze tracking calibration. Each calibration point, in normalized screen coordinates, is passed to the tracker by calling addGazeCalibrationPoint(). The point is expected to be displayed on the screen before the method is called, and the user is expected to look at the calibration points during calibration. The application is responsible for reading or generating the calibration data, displaying it on the screen, and synchronizing with the tracker. Once all calibration points have been used, the application must notify the tracker that calibration is finished by calling the finalizeOnlineGazeCalibration() method. The tracker then calibrates the screen space gaze tracking system using the provided calibration data and the tracking data collected during the calibration process.
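A minimal sketch of an online calibration session is shown below. The point list, the fixation delay, and the drawCalibrationPoint() and onCalibrationDone() helpers are hypothetical; only the tracker calls are the methods described above.

var calibrationPoints = [
    [0.1, 0.1], [0.9, 0.1], [0.5, 0.5], [0.1, 0.9], [0.9, 0.9]
];
var pointIndex = 0;

//Start the calibration phase
m_Tracker.initOnlineGazeCalibration();

function showNextPoint(){
    if (pointIndex >= calibrationPoints.length){
        //All points used: the tracker calibrates and switches to estimation
        m_Tracker.finalizeOnlineGazeCalibration();
        onCalibrationDone(); //hypothetical application callback
        return;
    }
    var point = calibrationPoints[pointIndex++];
    //Display the point first, then pass it to the tracker
    drawCalibrationPoint(point[0], point[1]); //hypothetical drawing helper
    //Allow the user time to fixate on the point before passing it
    setTimeout(function(){
        m_Tracker.addGazeCalibrationPoint(point[0], point[1]);
        showNextPoint();
    }, 1000);
}
showNextPoint();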

After the system is calibrated, the estimation phase starts. Estimations are returned as part of the FaceData objects obtained by calling the track() method. Specifically, the location of the screen space gaze point is returned in gazeData.x and gazeData.y, while the state of the estimator is returned in gazeData.inState.

Gaze Tracking Quality

Gaze tracking quality is available on both the frame level and the session level. Gaze quality is returned as a value from 0 to 1, where 0 is the worst and 1 is the best quality.

Gaze quality is returned as part of the FaceData object. The frame level quality is returned as the quality parameter of the gazeData object, while the session level quality is returned as the gazeQuality parameter of the FaceData object. All frames passed to the tracker are considered part of one session, and the session quality is recalculated for each new frame processed by the tracker.
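For illustration, a sketch of reading the gaze estimate and both quality measures after a successful track() call (m_Tracker, faceDataArray and the frame buffer are assumed to be set up as in the track() sample below):

var trackerStatus = m_Tracker.track(
      frameWidth, frameHeight, ppixels, faceDataArray,
      VisageModule.VisageTrackerImageFormat.VISAGE_FRAMEGRABBER_FMT_RGBA.value,
      VisageModule.VisageTrackerOrigin.VISAGE_FRAMEGRABBER_ORIGIN_TL.value
      );
if (trackerStatus.get(0) === VisageModule.VisageTrackerStatus.TRACK_STAT_OK.value){
    var faceData = faceDataArray.get(0);
    //Screen space gaze position and estimator state for this frame
    var gaze = faceData.gazeData;
    console.log("state: " + gaze.inState + ", gaze: (" + gaze.x + ", " + gaze.y + ")");
    //Frame level quality
    console.log("frame quality: " + gaze.quality);
    //Session level quality, recalculated with every processed frame
    console.log("session quality: " + faceData.gazeQuality);
}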

Note: VisageTracker's track() function tracks multiple faces and facial features; however, VisageGazeTracker estimates gaze for one face only (the maxFaces parameter is not used).

Parameters:
Name Type Description
configurationName string The name of the tracker configuration file (.cfg; default configuration files are provided in the lib folder; for further details see the VisageTracker Configuration Manual).

Extends

VisageTracker

Methods

initOnlineGazeCalibration()

Initializes online screen space gaze tracking. Online mode is used when tracking from a camera.

This method starts the calibration phase of screen space gaze tracking. In the calibration phase the application displays the calibration data on the screen and passes it to the tracker using addGazeCalibrationPoint(). The application is responsible for finishing the calibration phase by calling finalizeOnlineGazeCalibration().

addGazeCalibrationPoint(x, y)

Passes a calibration point to the tracker in online screen space gaze tracking mode.

This method is used in online gaze tracking mode to pass the position of the currently displayed calibration point to the tracker. It should be called once for each calibration point, after the point is displayed on the screen. The position of the calibration point is given in normalized screen coordinates. The origin of the coordinate system is in the upper left corner of the screen; the lower right corner has coordinates (1, 1).

NOTE: The application is responsible for synchronizing the frequency at which calibration points are passed to the tracker with the frequency at which the tracker processes video frames. If calibration points are passed faster than the tracker works, two (or more) calibration points may be passed while the tracker is processing a single video frame. In that case, if the difference in speed is large enough, the tracking data for the processed frame may not match the calibration point. This reduces the quality of the calibration and, consequently, of the estimation.
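One way to satisfy this is to pace the calibration by the tracker's own frame processing instead of wall-clock time. A sketch, assuming the application calls a hypothetical onFrameProcessed() hook after every track() call and keeps the currently displayed point in a hypothetical currentPoint variable:

var framesSinceShown = 0;
function onFrameProcessed(){
    framesSinceShown++;
    //Pass the currently displayed point only after the tracker has
    //processed several frames while the user was looking at it
    if (framesSinceShown >= 10){
        m_Tracker.addGazeCalibrationPoint(currentPoint[0], currentPoint[1]);
        framesSinceShown = 0;
        //...display the next calibration point (update currentPoint),
        //or call finalizeOnlineGazeCalibration() when all points are used...
    }
}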

Parameters:
Name Type Description
x float x coordinate of the calibration point in normalized screen coordinates
y float y coordinate of the calibration point in normalized screen coordinates

finalizeOnlineGazeCalibration()

Finalizes online screen space gaze tracking calibration.

This method should be called after all calibration data has been displayed and passed to the tracker. Once it is called, the tracker calibrates the gaze tracking system using the provided calibration data and the tracking data collected during the calibration phase.

After the calibration is finished, the screen space gaze position is obtained as part of the FaceData object passed to the track() function. The gaze position is stored in gazeData.x and gazeData.y.

track(frameWidth, frameHeight, p_imageData, faceDataArray, format, origin, widthStep, timeStamp, maxFaces) → {Int32Array}

Performs face tracking in the given image and returns tracking results and status. This function should be called repeatedly on a series of images in order to perform continuous tracking.

If the tracker needs to be initialized, this will be done automatically before tracking is performed on the given image. Initialization means loading the tracker configuration file and required data files, and allocating various data buffers for the given image size. This operation may take several seconds. Initialization happens in the following cases:

  • In the first frame (the first call to the VisageTracker.track() function).

  • When frameWidth or frameHeight change, i.e. when they differ from the values used in the last call to the VisageTracker.track() function.

  • If the setTrackerConfigurationFile() function was called after the last call to the VisageTracker.track() function.

  • When maxFaces changes, i.e. when it differs from the value used in the last call to the track() function.


Sample usage:

var m_Tracker,
    faceData,
    faceDataArray,
    frameWidth,
    frameHeight,
    ppixels,
    pixels;

function initialize(){
    //Initialize licensing with the obtained license key file
    //It is imperative that the initializeLicenseManager method is called before the constructor is called in order for licensing to work
    VisageModule.initializeLicenseManager("xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx-xxx.vlc");
    //Instantiate the tracker object
    m_Tracker = new VisageModule.VisageTracker("../../lib/Facial Features Tracker.cfg");
    //Instantiate the face data object
    faceDataArray = new VisageModule.FaceDataVector();
    faceData = new VisageModule.FaceData();
    faceDataArray.push_back(faceData);

    frameWidth = canvas.width;
    frameHeight = canvas.height;

    //Allocate memory for image data
    ppixels = VisageModule._malloc(frameWidth*frameHeight*4);
    //Create a view into the allocated memory
    pixels = new Uint8ClampedArray(VisageModule.HEAPU8.buffer, ppixels, frameWidth*frameHeight*4);
}

function onEveryFrame(){
    //Obtain the image pixel data
    var imageData = canvas.getContext('2d').getImageData(0, 0, frameWidth, frameHeight).data;
    //Fill the allocated memory with the image data
    pixels.set(imageData);
    //Call the tracking method of the tracker object with the frame dimensions, pixel data pointer, face data array, image format and origin
    var trackerStatus = m_Tracker.track(
          frameWidth, frameHeight, ppixels, faceDataArray,
          VisageModule.VisageTrackerImageFormat.VISAGE_FRAMEGRABBER_FMT_RGBA.value,
          VisageModule.VisageTrackerOrigin.VISAGE_FRAMEGRABBER_ORIGIN_TL.value
          );
    //Based on the tracker status, act on the results located in the face data object instance
    if (trackerStatus.get(0) === VisageModule.VisageTrackerStatus.TRACK_STAT_OK.value){
         drawSomething(faceDataArray.get(0));
    }
}



The tracker results are returned in faceDataArray.
Parameters:
Name Type Argument Default Description
frameWidth number Width of the frame.
frameHeight number Height of the frame.
p_imageData number Pointer to image pixel data; the size of the array must correspond to frameWidth and frameHeight.
faceDataArray FaceDataVector Array of FaceData objects that will receive the tracking results. The size of the faceDataArray is equal to the maxFaces parameter.
format number <optional> VisageModule.VISAGE_FRAMEGRABBER_FMT_RGB Format of the input images passed in p_imageData. It cannot change during tracking. Format can be one of the following:
- VisageModule.VISAGE_FRAMEGRABBER_FMT_RGB: each pixel of the image is represented by three bytes representing the red, green and blue channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_BGR: each pixel of the image is represented by three bytes representing the blue, green and red channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_RGBA: each pixel of the image is represented by four bytes representing the red, green, blue and alpha (ignored) channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_BGRA: each pixel of the image is represented by four bytes representing the blue, green, red and alpha (ignored) channels, respectively.
- VisageModule.VISAGE_FRAMEGRABBER_FMT_LUMINANCE: each pixel of the image is represented by one byte representing the luminance (gray level) of the image.
origin number <optional> VisageModule.VISAGE_FRAMEGRABBER_ORIGIN_TL No longer used; the passed value has no effect. The parameter is retained to avoid API changes.
widthStep number <optional> 0 Width of the image data buffer, in bytes.
timeStamp number <optional> -1 The timestamp of the input frame in milliseconds. The passed value will be returned with the tracking data for that frame (FaceData.timeStamp). Alternatively, -1 can be passed, in which case the tracker will return the time, in milliseconds, measured from the moment tracking started.
maxFaces number <optional> 1 Maximum number of faces that will be tracked. Increasing this parameter reduces tracking speed.
Inherited From: VisageTracker
Returns:
array of tracking statuses for each of the tracked faces - see FaceData for more details
Type
Int32Array
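As an illustration, a sketch of tracking up to three faces with the inherited function, passing the optional parameters positionally with their documented defaults (note that VisageGazeTracker estimates gaze for one face only):

var maxFaces = 3;
var faceDataArray = new VisageModule.FaceDataVector();
for (var i = 0; i < maxFaces; ++i){
    faceDataArray.push_back(new VisageModule.FaceData());
}
var trackerStatus = m_Tracker.track(
      frameWidth, frameHeight, ppixels, faceDataArray,
      VisageModule.VisageTrackerImageFormat.VISAGE_FRAMEGRABBER_FMT_RGBA.value,
      VisageModule.VisageTrackerOrigin.VISAGE_FRAMEGRABBER_ORIGIN_TL.value,
      0, -1, maxFaces);
for (var i = 0; i < maxFaces; ++i){
    if (trackerStatus.get(i) === VisageModule.VisageTrackerStatus.TRACK_STAT_OK.value){
        //faceDataArray.get(i) holds the results for face i
    }
}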

setTrackerConfiguration(trackerConfigFile, au_fitting_disabled, mesh_fitting_disabled)

Sets configuration file name.

The tracker configuration file name and other configuration parameters are set and will be used for the next tracking session (i.e., the next time track() is called). Default configuration files (.cfg) are provided in the www/lib folder. Please refer to the VisageTracker Configuration Manual for further details on using the configuration files and all configurable options.
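For example, to switch to a different configuration and disable fine mesh fitting for the next tracking session (the configuration path is illustrative):

//The new configuration takes effect when the next tracking session starts
m_Tracker.setTrackerConfiguration("../../lib/Facial Features Tracker.cfg", false, true);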
Parameters:
Name Type Argument Default Description
trackerConfigFile string Name of the tracker configuration file.
au_fitting_disabled boolean <optional> false Disables the use of the 3D model used to estimate action units (the au_fitting_model configuration parameter).
mesh_fitting_disabled boolean <optional> false Disables the use of the fine 3D mesh (the mesh_fitting_model configuration parameter).

Inherited From: VisageTracker

setConfiguration(configuration)

Sets tracking configuration.

The tracker configuration object is set and will be used for the next tracking session (i.e., the next time track() is called).
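A typical usage pattern, sketched under the assumption that configuration values are changed through the VisageConfiguration object's own interface (see the VisageTracker Configuration Manual for the available parameters):

//Obtain the configuration currently used by the tracker
var configuration = m_Tracker.getConfiguration();
//...modify configuration values through the VisageConfiguration interface...
//Apply the modified configuration; it is used for the next tracking session
m_Tracker.setConfiguration(configuration);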
Parameters:
Name Type Description
configuration VisageConfiguration configuration object obtained by calling getTrackerConfiguration() function.

Inherited From: VisageTracker

getConfiguration() → {VisageConfiguration}

Returns tracking configuration.

Inherited From: VisageTracker
Returns:
VisageConfiguration object with the values currently used by the tracker.

Type
VisageConfiguration

setIPD(IPD)

Sets the inter pupillary distance.

Inter pupillary distance (IPD) is used by the tracker to estimate the distance of the face from the camera. By default, IPD is set to 0.065 (65 millimetres) which is considered average. If the actual IPD of the tracked person is known, this function can be used to set the IPD. As a result, the calculated distance from the camera will be accurate (as long as the camera focal length is also set correctly). This is important for applications that require accurate distance. For example, in Augmented Reality applications objects such as virtual eyeglasses can be rendered at appropriate distance and will thus appear in the image with real-life scale.
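For example, if the tracked person's IPD is measured at 63 millimetres:

//IPD is given in meters
m_Tracker.setIPD(0.063);
//Later reads back 0.063
var ipd = m_Tracker.getIPD();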

Parameters:
Name Type Description
IPD number The inter pupillary distance (IPD) in meters.

Inherited From: VisageTracker
See: getIPD()

getIPD() → {number}

Returns the current inter pupillary distance (IPD) setting.

The IPD setting is used by the tracker to estimate the distance of the face from the camera. See setIPD() for further details.
Inherited From: VisageTracker
See: setIPD()
Returns:
current setting of inter pupillary distance (IPD) in meters.

Type
number

reset()

Reset tracking.

Resets the tracker. The tracker will reinitialize on the next call to the track() function.

Inherited From: VisageTracker

stop()

Stops the tracking.

Inherited From: VisageTracker
Deprecated: yes